CA2619281C - Transforming a submitted image of a person based on a condition of the person - Google Patents
Classifications
- G06T3/18
Abstract
Apparatuses, computer media, and methods for altering a submitted image of a person. The submitted image is transformed in accordance with associated data regarding the person's condition. Global data may be processed by a statistical process to obtain cluster information, and the transformation parameter is then determined from cluster information. The transformation parameter is then applied to a portion of the submitted image to render a transformed image. A transformation parameter may include a texture alteration parameter, a hair descriptive parameter, or a reshaping parameter. An error measure may be determined that gauges a discrepancy between a transformed image and an actual image. A transformation model is subsequently reconfigured with a modified model in order to reduce the error measure. Also, the transformation model may be trained to reduce an error measure for the transformed image.
Description
Patent Application 005222.00531 TRANSFORMING A SUBMITTED IMAGE OF A PERSON BASED ON A CONDITION
OF THE PERSON
FIELD OF THE INVENTION
[01] This invention relates to altering a submitted image of a person. More particularly, the invention provides a platform for transforming the image in accordance with a submitted image and associated data regarding the person's condition.
BACKGROUND OF THE INVENTION
[02] Excessive body weight is a major cause of many medical illnesses. With today's life style, people are typically exercising less and eating more. Needless to say, this life style is not conducive to good health. For example, it is acknowledged that type-2 diabetes is trending to epidemic proportions. Obesity appears to be a major contributor to this trend.
[03] On the other hand, a smaller proportion of the population suffers from being underweight. However, the effects of being underweight may be even more devastating to the person than the effects of being overweight are to another. In numerous related cases, people eat too little as a result of a self-perception problem. Anorexia is one affliction that is often associated with being grossly underweight.
[04] While being overweight or underweight may have organic causes, often such afflictions are the result of psychological issues. If one can objectively view the effect of being overweight or underweight, one may be motivated to change one's life style, e.g., eating in a healthier fashion or exercising more. Viewing a predicted image of one's body if one continues one's current life style may motivate the person to live in a healthier manner.
[05] The above discussion underscores a market need to provide a computing platform for transforming a submitted image in order to project the image in accordance with a specified condition of a person.
BRIEF SUMMARY OF THE INVENTION
[06] Embodiments of the invention provide apparatuses, computer media, and methods for altering a submitted image of a person. The submitted image is transformed in accordance with associated data regarding the person's condition.
[07] With an aspect of the invention, a submitted image and associated data about a person's condition are obtained. A transformation parameter is determined and applied to a portion of the submitted image to render a transformed image.
[08] With another aspect of the invention, an error measure is determined that gauges a discrepancy between a transformed image and an actual image. A transformation model is reconfigured with a modified model in order to reduce the error measure.
[09] With another aspect of the invention, a transformation parameter includes a deformation vector. A mesh with a plurality of vertices is formed that overlays a portion of a submitted image. The deformation vector is applied to a vertex to obtain a transformed vertex to transform the mesh. A transformed image is rendered from the transformed mesh.
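The vertex-level transformation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the vertex coordinates and deformation vectors are hypothetical placeholders.

```python
# Sketch: apply one deformation vector per mesh vertex to form a transformed mesh.
def transform_mesh(vertices, deformations):
    """Add each deformation vector to its corresponding vertex (2D)."""
    return [(x + dx, y + dy) for (x, y), (dx, dy) in zip(vertices, deformations)]

mesh = [(120.0, 80.0), (140.0, 82.0), (160.0, 90.0)]  # vertices overlaying an image portion
vectors = [(-2.0, 0.0), (-1.5, 0.5), (0.0, 1.0)]      # one deformation vector per vertex
print(transform_mesh(mesh, vectors))  # [(118.0, 80.0), (138.5, 82.5), (160.0, 91.0)]
```

The transformed vertices would then be handed to rendering code (together with the associated texture) to produce the transformed image.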
[10] With another aspect of the invention, a transformation model is trained to reduce an error measure for the transformed image.
[11] With another aspect of the invention, global data is processed by a statistical process to obtain cluster information. A transformation parameter is then determined from cluster information.
[12] With another aspect of the invention, a transformation parameter includes a texture alteration parameter, a hair descriptive parameter, or a reshaping parameter. The transformation parameter is determined and subsequently applied to a portion of the submitted image.
[13] With another aspect of the invention, a client-server configuration enables a requester to provide a submitted image with associated data about a person. The server returns a transformed image to the requester.
[13a] In one aspect, there is provided a method for processing a submitted image of a person, the method comprising: (a) receiving the submitted image and associated data, the associated data being indicative of a condition of the person; (b) obtaining, from a transformation model, a transformation parameter that is associated with a portion of the submitted image;
(c) applying the transformation parameter to the portion of the submitted image; (d) rendering a transformed image from the transformation parameter; (e) determining a square error measure that gauges a discrepancy between the transformed image and actual data, the actual data being indicative of an actual image of the person when affected by the condition, by: (e)(i) measuring a distance for a vertex pair, the vertex pair comprising a transformed vertex and an actual vertex of an associated actual point of the actual data;
(e)(ii) repeating (e)(i) for one or more other vertex pairs to obtain a plurality of distances;
and (e)(iii) determining a square error from a weighted sum of the squared plurality of distances; and (f) modifying, based on analyzing the error measure, a model parameter to reconfigure the transformation model.
[13b] In another aspect, there is provided a computer-readable medium having computer-executable instructions to perform the above method.
[13c] In another aspect, there is provided an apparatus for processing a submitted image of a person, comprising: a database for receiving the submitted image and associated data, the associated data being indicative of a condition of the person; a transformation control module configured to obtain, from a transformation model, a transformation parameter that is associated with a portion of the submitted image; an image transformation module configured to apply the transformation parameter to the portion of the submitted image and render a transformed image from the transformation parameter; an error analysis module configured to determine a square error measure that gauges a discrepancy between the transformed image and actual data, the actual data being indicative of an actual image of the person when affected by the condition, by: (e)(i) measuring a distance for a vertex pair, the vertex pair comprising a transformed vertex and an actual vertex of an associated actual point of the actual data; (e)(ii) repeating (e)(i) for one or more other vertex pairs to obtain a plurality of distances; and (e)(iii) determining a square error from a weighted sum of the squared plurality of distances; and wherein the error analysis module is further configured to modify, based on analyzing the error measure, a model parameter to reconfigure the transformation model that is utilized by the transformation control module.
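Steps (e)(i)-(e)(iii) amount to a weighted sum of squared distances over vertex pairs. A minimal sketch follows; the coordinates and weights are illustrative assumptions, not values from the patent.

```python
import math

def square_error(transformed, actual, weights):
    """Weighted sum of squared distances over transformed/actual vertex pairs."""
    total = 0.0
    for (tx, ty), (ax, ay), w in zip(transformed, actual, weights):
        dist = math.hypot(tx - ax, ty - ay)  # (e)(i): distance for one vertex pair
        total += w * dist ** 2               # (e)(iii): weight the squared distance
    return total                             # (e)(ii) is the loop over all pairs

t = [(10.0, 10.0), (20.0, 20.0)]  # transformed vertices
a = [(11.0, 10.0), (20.0, 22.0)]  # actual vertices
print(square_error(t, a, [1.0, 0.5]))  # 1*1^2 + 0.5*2^2 = 3.0
```

Per-vertex weights let the model penalize discrepancies more heavily in regions of interest (e.g., the facial contour) than elsewhere.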
BRIEF DESCRIPTION OF THE DRAWINGS
[14] The present invention is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
[15] Figure 1 shows an architecture for transforming a submitted image of a person in accordance with an embodiment of the invention.
[16] Figure 2 shows a training process for configuring a transformation process that alters a submitted image of a person in accordance with an embodiment of the invention.
[17] Figure 3 shows a process that modifies a model parameter by analyzing an error measure between a transformed image and an actual image in accordance with an embodiment of the invention.
[18] Figure 4 shows a client-server configuration for transforming a submitted image of a person in accordance with an embodiment of the invention.
[19] Figure 5 shows a mesh that is superimposed on a face image in accordance with an embodiment of the invention.
[20] Figure 6 shows a set of points for altering a face image in accordance with an embodiment of the invention.
[21] Figure 7 shows controlling points for face alteration in accordance with an embodiment of the invention.
[22] Figure 8 shows a transformation of points on a mesh in accordance with an embodiment of the invention.
[23] Figure 9 shows a resulting error from transforming points on a mesh in accordance with an embodiment of the invention.
[24] Figure 10 shows visual results for altering a face image in accordance with an embodiment of the invention.
[25] Figure 11 shows additional visual results for altering a face image in accordance with an embodiment of the invention.
[26] Figure 12 shows a flow diagram for altering a face image in accordance with an embodiment of the invention.
[27] Figure 13 shows an architecture of a computer system used in altering a face image in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[28] Figure 1 shows system 100 for transforming a submitted image of a person in accordance with an embodiment of the invention. (Figures 10 and 11 show examples of reshaped (transformed) images, in which the face is either fattened or thinned.) The submitted image from interface 101 is registered by picture registration module 103 so that a person is associated with the submitted image. In addition, associated data is entered from interface 101 that provides information about the person. For example, the associated data may be indicative of a health condition (e.g., anorexia or overweight family history), age, current weight, height, sex, ethnic group (e.g., Caucasian of English extraction or Asian of Chinese extraction), and dietary information.
[29] With embodiments of the invention, system 100 may transform (e.g., reshape) a submitted image of a person for different objectives. For example, as will be discussed in greater detail, system 100 may thin or fatten the face of the person to show the effects of one's diet. Also, system 100 may provide guidance to patients in determining the benefits of cosmetic surgery or may project the effects of aging on a person (e.g., in support of a missing person's investigation). Embodiments of the invention also support other health-forecasting scenarios.
Other scenarios include the evolution of face appearance while smoking and the evolution of stains on the face resulting from sun exposure. Embodiments of the invention can also forecast the effect of a drug taken for some illness. While photographic images can be used, other types of images (e.g., medical imaging including MRI, x-ray, ultrasound, and 3D) may be analyzed for different affected body organs (e.g., heart, lungs, kidney, and liver).
[30] With an embodiment of the invention, system 100 transforms a portion of the submitted image in accordance with the associated data provided from interface 101. The portion may be specified as the head, torso, or entire body of a person.
[31] With an embodiment of the invention, system 100 may be trained through training module 105 to configure a transformation model as will be discussed. After training, a picture (corresponding to a submitted image) and associated data is provided to database 107.
Database 107 accesses a search model and model parameters that best match the submitted image. For example, a search model may include a mesh having points (vertices) as selected points of the face (e.g., shown in Figure 5). The mesh may vary based on the associated data, e.g., the ethnic group or the sex of the person.
[32] Search module 115 obtains the image and the search model from database 107 and places the vertices on the portion of the image to form a mesh. As shown in Figure 5, an exemplary mesh is formed for the face of the person. The vertices may be placed differently on the image based on the search model, which may depend on the ethnic group and the sex of the person. Search module 115 provides the image and the associated mesh to image transformation module 117.
[33] In order for image transformation module 117 to transform the portion of the submitted image, transformation control module 113 determines vertex vectors (deformation vectors) for transforming the vertices of the mesh to form a transformed mesh. (As will be discussed with Figure 5, the mesh is associated with corresponding texture from the picture where the alteration is taking place. When the mesh has been transformed, computer graphics software includes the associated texture to render the transformed image. Also, as will be discussed, Figure 8 shows vertices that are transformed in accordance with determined deformation vectors.) The transformed image may be provided to a user through interface 101, printer 121, or communications channel 119.
[34] Transformation control module 113 determines the deformation vectors from entry data (as may be contained in the associated data provided by a doctor) in accordance with an embodiment of the invention. (Embodiments of the invention may also include changes in texture, pattern, color, and any other image characteristic.) For example, entry data may include specific information about a patient, e.g., the patient's weight loss during a period of time, the caloric input of the patient, and other dietary information. Also, as shown in Figure 1, transformation control module 113 may be provided model parameters by training module 105. In addition, the patient may be associated with a cluster by statistical analysis module 111. Module 111 may determine the associated cluster from the associated data from the doctor, which may include the age, weight, height, and ethnic group of the patient. A plurality of clusters may be formed based on the values of different attributes such as age, weight, and ethnic group. A population may be assigned to the plurality of clusters based on selected attributes.
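The cluster assignment performed by statistical analysis module 111 might be sketched as a nearest-centroid lookup over the selected attributes. The centroids, attribute choice, and distance measure below are invented for illustration only.

```python
def assign_cluster(person, centroids):
    """Return the index of the centroid nearest to the person's attribute vector."""
    def dist2(a, b):
        # Squared Euclidean distance over the attribute values.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist2(person, centroids[i]))

# Attribute vectors: (age, weight in kg, height in cm) -- hypothetical clusters.
centroids = [(25, 60, 165), (45, 90, 175), (70, 70, 160)]
print(assign_cluster((42, 88, 178), centroids))  # 1 (closest to the middle centroid)
```

In practice the attributes would be scaled to comparable ranges before computing distances, since age, weight, and height use very different units.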
[35] With an embodiment of the invention, system 100 is adaptive so that the transformation parameters for the transformation model may be modified in order to reduce an error measure between the transformed image and an actual image. For example, system 100 may provide a transformed image that predicts (projects) the image of a person's face after one year using the associated data from a doctor. The transformed image may be compared with the actual image (if one is available) after one year to determine an error measure, and a model parameter may be subsequently modified in order to reduce the error for images that are submitted to system 100. (As will be discussed, Figure 9 provides an approach for determining an error measure.) For example, the deformation factor w (as discussed with EQs. 4A-4D) may be modified. The above error analysis may be implemented within one of the modules as shown in Figure 1 (e.g., module 117) or may be implemented with a separate module (e.g., an error analysis module not shown in Figure 1).
[36] Embodiments of the invention also support training module 105, which configures transformation models and search models in order to obtain transformed images that have an acceptable error with respect to actual data (e.g., an actual image). For example, a submitted image, associated data, and corresponding actual image are provided to training module 105. The submitted image is transformed and compared to the actual image. Model parameters for the transformation model are then adjusted to minimize an error measure. In order to train system 100, the process can be repeated a number of times until an acceptable error is obtained.
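The training loop just described (transform, compare, adjust, repeat until the error is acceptable) can be sketched with a toy one-parameter model. The scalar model, gradient-descent update, and learning rate below are illustrative stand-ins for the patent's adjustment process, not its actual method.

```python
def train(model_param, samples, lr=0.01, max_iters=500, tol=1e-9):
    """Adjust a scalar model parameter until the squared error is acceptable.

    samples: list of (input_value, actual_value) training pairs, where the toy
    "transformation" is simply model_param * input_value.
    """
    for _ in range(max_iters):
        # Squared error between transformed outputs and actual data.
        err = sum((model_param * x - actual) ** 2 for x, actual in samples)
        if err < tol:
            break
        # Gradient step to reduce the error (the "adjust model parameters" step).
        grad = sum(2 * (model_param * x - actual) * x for x, actual in samples)
        model_param -= lr * grad
    return model_param

# Toy data where the "true" transformation scales inputs by 2.5.
samples = [(1.0, 2.5), (2.0, 5.0), (3.0, 7.5)]
print(round(train(0.0, samples), 3))  # 2.5
```

With more training pairs, the recovered parameter generalizes better -- matching the patent's observation that more training pictures typically yield greater transformation accuracy.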
[37] With embodiments of the invention, search module 115 may use a search model in which a search function of an Active Appearance Model (AAM) determines the vertices of the mesh (as will be discussed). A transformation model may be represented as a set of equations (e.g., EQs. 1-5B). The set of equations may be specified by the model parameters (e.g., the constants contained in EQs. 1-5B). Transformation control module 113 uses the transformation model to determine a deformation vector (that transforms a corresponding vertex of the mesh). The deformation vector comprises a weight value A, a scale factor s, a deformation factor w, and a direction vector û as expressed in EQs. 1-5B and as will be later discussed.
[38] With system 100, one can introduce images (photos or medical-specific images) in order to automatically forecast an evolution of a person's condition. Moreover, the results provided by system 100 can be improved by introducing feedback from experts (e.g., doctors, nutritionists, surgeons) if improvement is desired.
[39] Figure 2 shows training module 105 for configuring a transformation process that alters a submitted image of a person in accordance with an embodiment of the invention.
Transform module 205 transforms an image of training picture 201 in accordance with input user data 203 that specifies a given condition affecting the person. For example, a user may specify a degree of thinning for a person. Comparator 207 compares the transformed image with an image from a corresponding actual picture 209, which shows a person affected by the given condition, to determine an error measure. (An example of determining an error measure is discussed with Figure 9.) This operation may be repeated a plurality of times to better determine the accuracy of transform module 205. (Typically, the greater the number of training pictures (with corresponding actual pictures), the greater the accuracy of transformation.) When the accuracy (as gauged by the error measure) is determined, adjustment module 211 adjusts model parameters for transforming a submitted image.
[40] Figure 3 shows process 300 that modifies a model parameter by analyzing an error measure between a transformed image and an actual image in accordance with an embodiment of the invention. System 100 executes process 300 to update model parameters after system 100 has been trained by process 105 (as shown in Figure 2).
[41] With embodiments of the invention, the execution of process 300 may be distributed over a plurality of modules as shown in Figure 1. In step 301, a submitted image and associated data is entered and stored in database 107. In step 303, database 107 provides the appropriate search model and the submitted image to search module 115 to obtain the associated mesh.
[42] In step 305, transformation control module 113 determines transformation parameters (e.g., deformation vectors) from cluster data and specific data about the person in accordance with the selected transformation model as identified by database 107. Image transformation module 117 subsequently processes the transformation parameters, submitted image, and mesh in step 307.
[43] Even though system 100 may have been previously trained with training module 105, system 100 can subsequently update model parameters through error analysis process 309. Image transformation module 117 transforms the submitted image to obtain a transformed image as discussed above. If an actual image of the person is available at a time corresponding to the projected time of the transformed image, error analysis process 309 can compare the actual image with the transformed image. (Typically, the transformed image is stored in database 107 and later retrieved when the actual image is available. As an example, the results of every Nth submitted image may be evaluated with respect to the actual image that is available after the projected time.) Error analysis process 309 then adjusts the model parameters in order to reduce an error measure (e.g., the error measure illustrated with Figure 9).
[44] Figure 4 shows client-server configuration 400 for transforming a submitted image of a person in accordance with an embodiment of the invention. While system 100 may operate in a stand-alone configuration, this configuration enables requester (client) 401 to request that server 403 process submitted image 405 in accordance with associated data 407 to obtain transformed image 409. Server 403 is typically trained before processing submitted image 405. With embodiments of the invention, server 403 includes database 107 and modules 111-117 as shown in Figure 1. However, because of privacy concerns, requester 401 may restrict information that identifies the person whose image is being submitted. Moreover, server 403 may not store submitted image 405 or transformed image 409.
[45] Figure 5 shows a mesh that is superimposed on a face image in accordance with an embodiment of the invention. As will be discussed, an algorithm fattens or thins the face image in accordance with an embodiment of the invention. Points along the face, neck, and image boundary are determined in order to form the mesh. As will be further discussed, the algorithm alters the facial contour and then reshapes (transforms) the area around the neck.
(Points 536-545 will be discussed later.) The altered image is rendered by using the points as vertices of the mesh. While a mesh is one example for reshaping an image, other embodiments of the invention may change other characteristics of an image to forecast the evolution of a person.
[46] This mesh is associated with its corresponding texture from the picture where the alteration is taking place. The corners and four points along each side of the picture (as shown in Figure 5) are also considered as part of the mesh. A computer graphics software API (Application Programming Interface) is used to render the altered image (e.g., as shown in Figures 10-11). The OpenGL API is an example of computer graphics software that may be used to render the altered image.
[47] Figure 6 shows a set of points (including points 600, 606, 618, and 631 which will be discussed in further detail) for altering a face image in accordance with an embodiment of the invention. (Please note that Figure 6 shows a plurality of points, which correspond to the vertices of the mesh.) Points 600, 606, 618, and 631 are only some of the plurality of points.
An embodiment of the invention uses the search function of a software technique called Active Appearance Model (AAM), which utilizes a trained model. (Information about AAM is available at http://www2.imm.dtu.dk/~aam and has been utilized by other researchers.) However, points 600, 606, 618, and 631 may be determined with other approaches, e.g., a manual process in which a medical practitioner manually enters the points. With an embodiment of the invention, the trained model is an AMF file, which is obtained from the training process. For training the AAM, a set of images with faces is needed. These images may belong to the same person or to different people. Training is typically dependent on the desired degree of accuracy and the degree of universality of the population that is covered by the model. With an exemplary embodiment, one typically processes at least five images with the algorithm that is used. During the training process, the mesh is manually deformed on each image. Once all images are processed, the AAM algorithms are executed over the set of points and images, and a global texture/shape model is generated and stored in an AMF file. The AMF file permits an automatic search in future images not belonging to the training set. With an exemplary embodiment, one uses the AAM API to generate Appearance Model Files (AMF). Embodiments of the invention also support inputting the plurality of points through an input device as entered by a user. A mesh is superimposed on the image at points (e.g., the set of points shown in Figure 6) as determined by the trained process.
[48] Figure 6 also shows the orientation of the x and y coordinates of the points as shown in Figures 5-7.
[49] Figure 7 shows controlling points 706-731 for face alteration in accordance with an embodiment of the invention. (Points 706, 718, and 731 correspond to points 606, 618, and 631 respectively as shown in Figure 6.) Points 706-731, which correspond to points around the cheeks and chin of the face, are relocated (transformed) for fattening or thinning a face image to a desired degree. With an embodiment of the invention, only a proper subset (points 706-731) of the plurality of points (as shown in Figure 6) are relocated. (With a proper subset, only some, and not all, of the plurality of points are included.) [50] In the following discussion, which describes the determination of the deformation vectors for reshaping the face image, index i = 6 to index i = 31 correspond to points 706 to 731, respectively. The determined deformation vectors are added to points 706 to 731 to re-position the points, forming a transformed mesh. A reshaped image is consequently rendered using the transformed mesh.
[51] In accordance with embodiments of the invention, each deformation vector corresponds to a product of four elements (factors):

v_i = û_i · s · w_i · A (EQ. 1)

where A is the weight value factor, s is the scale factor, w_i is the deformation factor, and û_i is the direction vector. In accordance with an embodiment of the invention:

- Weight value factor [A]: It determines the strength of the thinning or fattening that is applied:

A > 0 fattening (EQ. 2A)
A < 0 thinning (EQ. 2B)
A = 0 no change (EQ. 2C)

- Scale factor [s]: It is the value of the width of the face divided by B. One uses this factor to make the vector calculation independent of the size of the head being processed. The value of B influences how refined the scale of the deformation is; it gives the units to the weight value that is applied externally.

s = (width of the face) / B (EQ. 3)

- Deformation factor [w_i]: It is calculated differently for different parts of the cheeks and chin. One uses a different equation depending on which part of the face one is processing:

i ∈ [6-13]: w_i = (1/3) · |x_i − x_c1| / |x_6 − x_13| + 1/3 (EQ. 4A)
i ∈ [14-18]: w_i = (1/2) · |x_i − x_c2| / |x_13 − x_18| + 1/2 (EQ. 4B)
i ∈ [19-23]: w_i = (1/2) · |x_i − x_c2| / |x_18 − x_24| + 1/2 (EQ. 4C)
i ∈ [24-31]: w_i = (1/3) · |x_i − x_c1| / |x_24 − x_31| + 1/3 (EQ. 4D)

- Direction vector [û_i]: It indicates the sense of the deformation. One calculates the direction vector as the ratio between the difference (for each coordinate) between the center and the point, and the absolute distance between the center and the point. One uses two different centers in this process: center C2 (point 653 as shown in Figure 6) for the points belonging to the jaw and center C1 (point 653 as shown in Figure 6) for the points belonging to the cheeks.

i ∈ [6-13] ∪ [24-31]: û_i = (x_i − x_c1) / |x_i − x_c1| (EQ. 5A)
i ∈ [14-23]: û_i = (x_i − x_c2) / |x_i − x_c2| (EQ. 5B)

[52] Neck point coordinates x_i are based on the lower part of the face, where

i ∈ [36-45], j ∈ [14-23]: x_i = (x_j, y_j + neck_height) (EQ. 6)
neck_height = y_18 − y_0 (EQ. 7)

where y_18 and y_0 are the y-coordinates of points 618 and 600, respectively, as shown in Figure 6. Referring back to Figure 5, index i=36 to i=45 correspond to points 536 to 545, respectively. Index j=14 to j=23 correspond to points 714 to 723, respectively (as shown in Figure 7), on the lower part of the face, from which points 536 to 545 on the neck are determined. (In an embodiment of the invention, points 536 to 545 are determined from points 714 to 723 before points 714 to 723 are relocated in accordance with EQs. 1-5B.) [53] The deformation vector (v_i) applied at points 536 to 545 has two components:
v_i = (0, d_neck) (EQ. 8)

when x_i < x_18: d_neck = (x_i − x_13)² / (10 · (x_24 − x_13)²) (EQ. 9A)

when x_i ≥ x_18: d_neck = (x_i − x_24)² / (10 · (x_24 − x_13)²) (EQ. 9B)

[54] Figure 8 shows a transformation of points (vertices) on a mesh in accordance with an embodiment of the invention. Points 716-720 are a subset of the vertices shown in Figure 7.
Deformation vectors 856-860 are determined by image transformation module 117 in accordance with EQs. 1-5B. Transformed points (transformed vertices) 816-820 are obtained by transforming points 716-720 with corresponding deformation vectors 856-860.
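The per-vertex product of EQ. 1 is straightforward to compute once the factors are in hand. The sketch below assumes plausible values for B (EQ. 3) and for the 1/k weighting of EQ. 4 (k = 3 for cheek points, k = 2 for jaw points); the coordinates and constants are illustrative, not taken from the patent:

```python
import numpy as np

def deformation_vector(p, center, face_width, span, A, B=50.0, k=3.0):
    """Sketch of EQs. 1-5B for a single mesh point.

    p, center -- 2D coordinates of the point and its center (C1 or C2)
    face_width -- width of the face (numerator of EQ. 3)
    span -- denominator distance of EQ. 4 for the point's index range
    A -- weight value factor: > 0 fattens, < 0 thins, 0 no change (EQ. 2)
    B, k -- assumed constants (the patent leaves their exact values open)
    """
    diff = p - center
    dist = np.linalg.norm(diff)
    u_hat = diff / dist                      # direction vector (EQ. 5A/5B)
    s = face_width / B                       # scale factor (EQ. 3)
    w = dist / (k * span) + 1.0 / k          # deformation factor (EQ. 4)
    return u_hat * s * w * A                 # EQ. 1

# Fattening (A > 0) pushes a cheek point away from its center:
p, c1 = np.array([120.0, 80.0]), np.array([100.0, 80.0])
print(p + deformation_vector(p, c1, face_width=100.0, span=60.0, A=10.0))
```

Reversing the sign of A relocates the same vertex toward the center, which is the thinning case of EQ. 2B.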
[55] Figure 9 shows a resulting error from transforming points on a mesh in accordance with an embodiment of the invention. (Embodiments of the invention support other criteria for determining an error measure. For example, an error measure can account for the color, texture, pattern, or shape change of the image.) Transformed points (transformed vertices) 816-820 correspond to points that are shown in Figure 8. If an actual image is available, actual vertices 916-920 can be determined from a search function as supported by search module 115. Subsequently, distances (d_i) 956-960 for each vertex pair consisting of a transformed point and an associated actual point are obtained. One can determine a square error for the transformed image by:
square_error = Σ_i a_i · (actual_vertex_i − transformed_vertex_i)² (EQ. 10)

Each weight a_i is adjusted to reflect the relative importance of the vertex pair. (If a vertex pair is not included when determining the square error, the corresponding weight is set to zero. Thus, some or all of the vertices shown in Figure 7 may be included in the error analysis.) The least square error may be determined by error analysis module 309 (as shown in Figure 3) by adjusting the model parameters (e.g., the constants in EQs. 1-5B) so as to reduce the square error to a minimum.
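EQ. 10 amounts to a weighted sum of squared vertex distances. A minimal sketch, with the array layout as an assumption:

```python
import numpy as np

def square_error(actual, transformed, weights=None):
    """EQ. 10: weighted sum of squared distances between vertex pairs.

    actual, transformed -- (N, 2) arrays of vertex coordinates
    weights -- per-vertex a_i; 0 excludes a pair, None weights all equally
    """
    actual = np.asarray(actual, dtype=float)
    transformed = np.asarray(transformed, dtype=float)
    if weights is None:
        weights = np.ones(len(actual))
    # squared Euclidean distance d_i^2 for each vertex pair
    d2 = np.sum((actual - transformed) ** 2, axis=1)
    return float(np.dot(weights, d2))

err = square_error([[0, 0], [3, 4]], [[0, 0], [0, 0]], weights=[1.0, 1.0])
print(err)  # 25.0: only the second pair contributes, d = 5
```

Setting a weight to zero drops that vertex pair from the analysis, matching the parenthetical above.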
[56] Figure 10 shows visual results for altering a face image in accordance with an embodiment of the invention. Images 1001 to 1005 correspond to A = +100, A = +50, and A = 0, respectively, which correspond to decreasing degrees of fattening.
[57] With an embodiment of the invention, A = +100 corresponds to a maximum degree of fattening and A = -100 corresponds to a maximum degree of thinning. The value of A is selected to provide the desired degree of fattening or thinning. For example, if a patient were afflicted with anorexia, the value of A would have a negative value that would depend on the degree of affliction and on the medical history and body type of the patient.
As another example, a patient may be over-eating or may have an unhealthy diet with many empty calories. In such a case, A would have a positive value. A medical practitioner may be able to gauge the value of A based on experience. However, embodiments of the invention may support an automated implementation for determining the value of A. For example, an expert system may incorporate knowledge based on information provided by experienced medical practitioners.
[58] Figure 11 shows additional visual results for altering a face image in accordance with an embodiment of the invention. Images 1101-1105, corresponding to A = 0, A = -50, and A = -100, respectively, show a continued reduction of fattening (increased thinning). When A = 0 (image 1101), the face is shown as it really appears. With A = -50 (image 1103), the face shows thinning. As A becomes more negative, the effects of thinning are increased.
[59] With embodiments of the invention, medical imaging may be processed in order to determine effects of treatment on an organ. For example, a patient is being treated for pancreatitis (inflammation of the pancreas). The doctor is prescribing the patient a drug and wants to compare the evolution of the patient's condition with expected results. The doctor uses ultrasound (or MRI) images to view the pancreas. A mesh is also utilized to track the contour of the pancreas to determine how the pancreas evolves. Feedback from the doctor and the evolution of the patient's condition are utilized to improve future predictions.
Moreover, this approach may be extended so that pharmacologists can evaluate the tests of a new drug with the help of experts.
[60] Figure 12 shows flow diagram 1200 for altering a face image in accordance with an embodiment of the invention. In step 1201, points are located on the image of the face and neck in order to form a mesh. Points may be determined by a trained process or may be entered through an input device by a medical practitioner. In step 1203, reshaping parameters (e.g., a weight value factor A) are obtained. The reshaping parameters may be entered by the medical practitioner or may be determined by a process (e.g., an expert system) from information about the person associated with the face image.
[61] In step 1205, deformation vectors are determined and applied to points (e.g., points 706-731 as shown in Figure 7) on the face. For example, as discussed above, EQs. 1-5B are used to determine the relocated points. In step 1207, deformation vectors are determined (e.g., using EQs. 6-9B) and applied to points (e.g., points 536-545 as shown in Figure 5) on the neck. A
transformed mesh is generated from which a reshaped image is rendered using computer graphics software in step 1209.
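As an illustration of step 1207, the neck points can be derived from the lower-face points per EQs. 6-7 before any relocation is applied. The index-keyed dictionary and the coordinate values are assumptions of this sketch:

```python
def neck_points(face_pts):
    """Derive neck points (indices 36-45) from lower-face points (14-23)
    per EQs. 6-7: x_i = (x_j, y_j + neck_height), neck_height = y18 - y0."""
    neck_height = face_pts[18][1] - face_pts[0][1]   # EQ. 7
    return {36 + k: (face_pts[14 + k][0], face_pts[14 + k][1] + neck_height)
            for k in range(10)}                       # EQ. 6

# Minimal mesh: points 14-23 along the jawline, point 0 at the top of the face
# (image coordinates, so y grows downward and +neck_height moves down)
pts = {j: (float(j * 10), 110.0) for j in range(14, 24)}
pts[0] = (50.0, 20.0)
print(neck_points(pts)[36])  # (140.0, 200.0): same x as point 714, shifted down by 90
```

Each neck point keeps the x-coordinate of its lower-face counterpart, which is why the patent derives them before those face points are relocated.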
[62] While Figures 5-12 illustrate embodiments of the invention for fattening and thinning a person's face, embodiments of the invention support other types of transformations. For example, not only may vertices of a mesh be transformed to reshape the face, but texture components (e.g., wrinkling of the skin associated with aging) may also be transformed.
Also, hair attributes (e.g., graying and balding) may be included when forming a transformed image by adding artificial synthetic elements. Other image transformations that may be considered are: texture, pattern and color. Moreover, slight perspective changes may be applied to some of the objects in the images (e.g., face) to rectify the point of view in which the picture has been taken and the point of view in which the transformation model was trained. More than one image may be evaluated at a time if those images give different views from the same face, organ or object (e.g., one can evaluate the evolution of a face from a frontal and a side perspective).
[63] Figure 13 shows computer system 1 that supports an alteration of a face image in accordance with an embodiment of the invention. Elements of the present invention may be implemented with computer systems, such as the system 1. Computer system 1 includes a central processor 10, a system memory 12 and a system bus 14 that couples various system components including the system memory 12 to the central processor unit 10.
System bus 14 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The structure of system memory 12 is well known to those skilled in the art and may include a basic input/output system (BIOS) stored in a read only memory (ROM) and one or more program modules such as operating systems, application programs and program data stored in random access memory (RAM).
[64] Computer 1 may also include a variety of interface units and drives for reading and writing data. In particular, computer 1 includes a hard disk interface 16 and a removable memory interface 20 respectively coupling a hard disk drive 18 and a removable memory drive 22 to system bus 14. Examples of removable memory drives include magnetic disk drives and optical disk drives. The drives and their associated computer-readable media, such as a floppy disk 24, provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for computer 1. A single hard disk drive 18 and a single removable memory drive 22 are shown for illustration purposes only, with the understanding that computer 1 may include several of such drives. Furthermore, computer 1 may include drives for interfacing with other types of computer readable media.
Patent Application 005222.00531 [65] A user can interact with computer 1 with a variety of input devices.
Figure 13 shows a serial port interface 26 coupling a keyboard 28 and a pointing device 30 to system bus 14.
Pointing device 30 may be implemented with a mouse, track ball, pen device, or similar device. Of course one or more other input devices (not shown) such as a joystick, game pad, satellite dish, scanner, touch sensitive screen or the like may be connected to computer 1.
[66] Computer 1 may include additional interfaces for connecting devices to system bus 14.
Figure 13 shows a universal serial bus (USB) interface 32 coupling a video or digital camera 34 to system bus 14. An IEEE 1394 interface 36 may be used to couple additional devices to computer 1. Furthermore, interface 36 may be configured to operate with particular manufacturer interfaces such as FireWire developed by Apple Computer and i.LINK developed by Sony. Input devices may also be coupled to system bus 14 through a parallel port, a game port, a PCI board or any other interface used to couple an input device to a computer.
[67] Computer 1 also includes a video adapter 40 coupling a display device 42 to system bus 14.
Display device 42 may include a cathode ray tube (CRT), liquid crystal display (LCD), field emission display (FED), plasma display or any other device that produces an image that is viewable by the user. Additional output devices, such as a printing device (not shown), may be connected to computer 1.
[68] Sound can be recorded and reproduced with a microphone 44 and a speaker 46. A sound card 48 may be used to couple microphone 44 and speaker 46 to system bus 14.
One skilled in the art will appreciate that the device connections shown in Figure 13 are for illustration purposes only and that several of the peripheral devices could be coupled to system bus 14 via alternative interfaces. For example, video camera 34 could be connected to interface 36 and pointing device 30 could be connected to USB interface 32.
[69] Computer 1 can operate in a networked environment using logical connections to one or more remote computers or other devices, such as a server, a router, a network personal computer, a peer device or other common network node, a wireless telephone or wireless personal digital assistant. Computer 1 includes a network interface 50 that couples system bus 14 to a local area network (LAN) 52. Networking environments are commonplace in offices, enterprise-wide computer networks and home computer systems.
[70] A wide area network (WAN) 54, such as the Internet, can also be accessed by computer 1.
Figure 13 shows a modem unit 56 connected to serial port interface 26 and to WAN 54.
Modem unit 56 may be located within or external to computer 1 and may be any type of conventional modem such as a cable modem or a satellite modem. LAN 52 may also be used to connect to WAN 54. Figure 13 shows a router 58 that may connect LAN 52 to WAN 54 in a conventional manner.
[71] It will be appreciated that the network connections shown are exemplary and other ways of establishing a communications link between the computers can be used. The existence of any of various well-known protocols, such as TCP/IP, Frame Relay, Ethernet, FTP, HTTP
and the like, is presumed, and computer 1 can be operated in a client-server configuration to permit a user to retrieve web pages from a web-based server. Furthermore, any of various conventional web browsers can be used to display and manipulate data on web pages.
[72] The operation of computer 1 can be controlled by a variety of different program modules.
Examples of program modules are routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. The present invention may also be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, personal digital assistants and the like. Furthermore, the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices
[19] Figure 5 shows a mesh that is superimposed on a face image in accordance with an embodiment of the invention.
[20] Figure 6 shows a set of points for altering a face image in accordance with an embodiment of the invention.
[21] Figure 7 shows controlling points for face alteration in accordance with an embodiment of the invention.
[22] Figure 8 shows a transformation of points on a mesh in accordance with an embodiment of the invention.
[23] Figure 9 shows a resulting error from transforming points on a mesh in accordance with an embodiment of the invention.
[24] Figure 10 shows visual results for altering a face image in accordance with an embodiment of the invention.
[25] Figure 11 shows additional visual results for altering a face image in accordance with an embodiment of the invention.
[26] Figure 12 shows a flow diagram for altering a face image in accordance with an embodiment of the invention.
[27] Figure 13 shows an architecture of a computer system used in altering a face image in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[28] Figure 1 shows system 100 for transforming a submitted image of a person in accordance with an embodiment of the invention. (Figures 10 and 11 show examples of reshaped (transformed) images, in which the face is either fattened or thinned.) The submitted image from interface 101 is registered by picture registration module 103 so that a person is associated with the submitted image. In addition, associated data is entered from interface 101 that provides information about the person. For example, the associated data may be indicative of a health condition (e.g., anorexia or overweight family history), age, current weight, height, sex, ethnic group (e.g., Caucasian of English extraction or Asian of Chinese extraction) and dietary information.
[29] With embodiments of the invention, system 100 may transform (e.g., reshape) a submitted image of a person for different objectives. For example, as will be discussed in greater detail, system 100 may thin or fatten the face of the person to show the effects of one's diet. Also, system 100 may provide guidance to patients in determining the benefits of cosmetic surgery or may project the effects of aging on a person (e.g., in support of a missing person's investigation). Embodiments of the invention also support other health-forecasting scenarios.
Other scenarios include the evolution of face appearance while smoking and the evolution of stains on the face resulting from sun exposure. Embodiments of the invention can also forecast the effect of a drug taken for some illness. While photographic images can be used, other types of images (e.g., medical imaging including MRI, x-ray, ultrasound, and 3D) may be analyzed for different affected body organs (e.g., heart, lungs, kidney, and liver).
[30] With an embodiment of the invention, system 100 transforms a portion of the submitted image in accordance with the associated data provided from interface 101. The portion may be specified as the head, torso, or entire body of a person.
[31] With an embodiment of the invention, system 100 may be trained through training module 105 to configure a transformation model as will be discussed. After training, a picture (corresponding to a submitted image) and associated data are provided to database 107.
Database 107 accesses a search model and model parameters that best match the submitted image. For example, a search model may include a mesh having points (vertices) as selected points of the face (e.g., shown in Figure 5). The mesh may vary based on the associated data, e.g., the ethnic group or the sex of the person.
[32] Search module 115 obtains the image and the search model from database 107 and places the vertices on the portion of the image to form a mesh. As shown in Figure 5, an exemplary Patent Application 005222.00531 mesh is formed for the face of the person. The vertices may be placed differently on the image based on the search model, which may depend on the ethnic group and the sex of the person. Search module 115 provides the image and the associated mesh to image transformation module 117.
[33] In order for image transformation module 117 to transform the portion of the submitted image, transformation control module 113 determines vertex vectors (deformation vectors) for transforming the vertices of the mesh to form a transformed mesh. (As will be discussed with Figure 5, the mesh is associated with corresponding texture from the picture where the alteration is taking place. When the mesh has been transformed, computer graphics software includes the associated texture to render the transformed image. Also, as will be discussed, Figure 8 shows vertices that are transformed in accordance with determined deformation vectors.) The transformed image may be provided to a user through interface 101, printer 121, or communications channel 119.
[34] Transformation control module 113 determines the deformation vectors from entry data (as may be contained in the associated data provided by a doctor) in accordance with an embodiment of the invention. (Embodiments of the invention may also include changes in texture, pattern, color and any other image characteristic.) For example, entry data may include specific information about a patient, e.g., the patient's weight loss during a period of time, the caloric input of the patient, and other dietary information. Also, as shown in Figure 1, transformation control module 113 may be provided model parameters by training module 105. In addition, the patient may be associated with a cluster by statistical analysis module 111. Module 111 may determine the associated cluster from the associated data from the doctor, which may include the age, weight, height, and ethnic group of the patient. A plurality of clusters may be formed based on the values of different attributes such as age, weight, and ethnic group. A population may be assigned to the plurality of clusters based on selected attributes.
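The cluster assignment performed by module 111 can be sketched as a nearest-centroid lookup over the person's attributes. The attribute choice, units, and centroid values below are hypothetical; the patent does not fix a specific statistical method:

```python
import numpy as np

def assign_cluster(person, centroids):
    """Assign a person's attribute vector to the nearest cluster centroid.

    person -- attribute vector, e.g. (age, weight, height); an assumption
    centroids -- list of cluster centroid vectors in the same attribute space
    """
    person = np.asarray(person, dtype=float)
    dists = [np.linalg.norm(person - np.asarray(c, dtype=float))
             for c in centroids]
    return int(np.argmin(dists))   # index of the closest cluster

# Clusters described by illustrative (age, weight kg, height cm) centroids
centroids = [(25, 60, 165), (45, 85, 175), (70, 70, 160)]
print(assign_cluster((28, 63, 168), centroids))  # 0: closest to the first centroid
```

In practice the attributes would be normalized so that, for example, height in centimeters does not dominate age in years.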
[35] With an embodiment of the invention, system 100 is adaptive so that the transformation parameters for the transformation model may be modified in order to reduce an error measure between the transformed image and an actual image. For example, system 100 may provide a transformed image that predicts (projects) the image of a person's face after one year using the associated data from a doctor. The transformed image may be compared with the actual image (if one is available) after one year to determine an error measure, and a model parameter may be subsequently modified in order to reduce the error for images that are submitted to system 100. (As will be discussed, Figure 9 provides an approach for determining an error measure.) For example, the deformation factor w (as discussed with EQs. 4A-4D) may be modified. The above error analysis may be implemented within one of the modules as shown in Figure 1 (e.g., module 117) or may be implemented with a separate module (e.g., an error analysis module not shown in Figure 1).
[36] Embodiments of the invention also support training module 105, which configures transformation models and search models in order to obtain transformed images that have an acceptable error with respect to actual data (e.g., an actual image). For example, a submitted image, associated data, and corresponding actual image are provided to training module 105. The submitted image is transformed and compared to the actual image. Model parameters for the transformation model are then adjusted to minimize an error measure. In order to train system 100, the process can be repeated a number of times until an acceptable error is obtained.
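The training loop described above reduces to choosing the model parameter that minimizes the square error between the transformed and actual vertices. The grid search and the toy scaling transform below are assumptions for illustration; the patent leaves the minimization method open:

```python
import numpy as np

def fit_parameter(transform, submitted, actual, candidates):
    """Pick the model parameter minimizing the square error (EQ. 10 style)
    between transformed and actual vertices.

    transform -- callable (vertices, param) -> transformed vertices
    submitted, actual -- (N, 2) vertex arrays from submitted/actual images
    candidates -- iterable of parameter values to try (assumed grid search)
    """
    def err(p):
        return float(np.sum((transform(submitted, p) - actual) ** 2))
    return min(candidates, key=err)

# Toy transform: scale vertices about the origin; the true factor is 1.5
sub = np.array([[1.0, 0.0], [0.0, 2.0]])
act = 1.5 * sub
best = fit_parameter(lambda v, p: p * v, sub, act, [0.5, 1.0, 1.5, 2.0])
print(best)  # 1.5
```

A real implementation would search over the constants of EQs. 1-5B (e.g., the deformation factor) rather than a single scale, but the fit-against-actual-image structure is the same.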
[37] With embodiments of the invention, search module 115 may use a search model in which a search function of an Active Appearance Model (AAM) determines the vertices of the mesh (as will be discussed). A transformation model may be represented as a set of equations (e.g., EQs. 1-5B). The set of equations may be specified by the model parameters (e.g., the constants contained in EQs. 1-5B). Transformation control module 113 uses the transformation model to determine a deformation vector (that transforms a corresponding vertex of the mesh). The deformation vector comprises a weight value A, a scale factor s, a deformation factor w, and a direction vector û as expressed in EQs. 1-5B and as will be later discussed.
[38] With system 100 one can introduce images (photos or medical-specific images) in order to automatically forecast an evolution of a person's condition. Moreover, the results provided by system 100 can be improved by introducing feedback from experts (e.g., doctors, nutritionists, surgeons) if improvement is desired.
[39] Figure 2 shows training module 105 for configuring a transformation process that alters a submitted image of a person in accordance with an embodiment of the invention.
Transform module 205 transforms an image of training picture 201 in accordance with input user data 203 that specifies a given condition affecting the person. For example, a user may specify a degree of thinning for a person. Comparator 207 compares the transformed image with an image from a corresponding actual picture 209, which shows a person affected by the given condition, to determine an error measure. (An example of determining an error measure is discussed with Figure 9.) This operation may be repeated a plurality of times to better determine the accuracy of transform module 205. (Typically, the greater the number of training pictures (with corresponding actual pictures), the greater the accuracy of transformation.) When the accuracy (as gauged by the error measure) is determined, adjustment module 211 adjusts model parameters for transforming a submitted image.
[40] Figure 3 shows process 300 that modifies a model parameter by analyzing an error measure between a transformed image and an actual image in accordance with an embodiment of the invention. System 100 executes process 300 to update model parameters after system 100 has been trained by training module 105 (as shown in Figure 2).
[41] With embodiments of the invention, the execution of process 300 may be distributed over a plurality of modules as shown in Figure 1. In step 301, a submitted image and associated data are entered and stored in database 107. In step 303, database 107 provides the appropriate search model and the submitted image to search module 115 to obtain the associated mesh.
[42] In step 305, transformation control module 113 determines transformation parameters (e.g., deformation vectors) from cluster data and specific data about the person in accordance with the selected transformation model as identified by database 107. Image transformation module 117 subsequently processes the transformation parameters, submitted image, and mesh in step 307.
[43] Even though system 100 may have been previously trained with training module 105, system 100 can subsequently update model parameters through error analysis process 309. Image transformation module 117 transforms the submitted image to obtain a transformed image as discussed above. If an actual image of the person is available at a time corresponding to the projected time of the transformed image, error analysis process 309 can compare the actual image with the transformed image. (Typically, the transformed image is stored in database 107 and later retrieved when the actual image is available. As an example, the results of every Nth submitted image may be evaluated with respect to the actual image that is available after the projected time.) Error analysis process 309 then adjusts the model parameters in order to reduce an error measure (e.g., the error measure illustrated with Figure 9).
[44] Figure 4 shows client-server configuration 400 for transforming a submitted image of a person in accordance with an embodiment of the invention. While system 100 may operate in a stand-alone configuration, configuration 400 enables requester (client) 401 to request that server 403 process submitted image 405 in accordance with associated data 407 to obtain transformed image 409. Server 403 is typically trained before processing submitted image 405. With embodiments of the invention, server 403 includes database 107 and modules 111-117 as shown in Figure 1. However, because of privacy concerns, requester 401 may restrict information that identifies the person whose image is being submitted. Moreover, server 403 may not store submitted image 405 or transformed image 409.
[45] Figure 5 shows a mesh that is superimposed on a face image in accordance with an embodiment of the invention. As will be discussed, an algorithm fattens or thins the face image in accordance with an embodiment of the invention. Points along the face, neck, and image boundary are determined in order to form the mesh. As will be further discussed, the algorithm alters the facial contour and then reshapes (transforms) the area around the neck.
(Points 536-545 will be discussed in a later discussion.) The altered image is rendered by using the points as vertices of the mesh. While a mesh is one example for reshaping an image, other embodiments of the invention may change other characteristics of an image to forecast the evolution of a person.
[46] This mesh is associated with its corresponding texture from the picture where the alteration is taking place. The corners and four points along each side of the picture (as shown in Figure 5) are also considered part of the mesh. Computer graphics software API
(Application Programming Interface) is used to render the altered image (e.g., as shown in Figures 10-11).
OpenGL API is an example of computer graphics software that may be used to render the altered image.
[47] Figure 6 shows a set of points (including points 600, 606, 618, and 631 which will be discussed in further detail) for altering a face image in accordance with an embodiment of the invention. (Please note that Figure 6 shows a plurality of points, which correspond to the vertices of the mesh.) Points 600, 606, 618, and 631 are only some of the plurality of points.
An embodiment of the invention uses the search function of a software technique called Active Appearance Model (AAM), which utilizes a trained model. (Information about AAM
is available at http://www2.imm.dtu.dki-aam and has been utilized by other researchers.) However, points 600, 606, 618, and 631 may be determined with other approaches, e.g., a manual process that is performed by medical practitioner manually entering the points. With an embodiment of the invention, the trained model is an AMF file, which is obtained from the training process. For the training the AAM, a set of images with faces is needed. These images may belong to the same person or different people. Training is typically dependent Patent Application 005222.00531 on the desired degree of accuracy and the degree of universality of the population that is covered by the model. With an exemplary embodiment, one typically processes at least five images with the algorithm that is used. During the training process, the mesh is manually deformed on each image. Once all images are processed, the AAM algorithms are executed over the set of points and images, and a global texture/shape model is generated and stored in an AMF file. The AMF file permits an automatic search in future images not belonging to the training set. With an exemplary embodiment, one uses the AAM API to generate Appearance Model Files (AMF). Embodiments of the invention also support inputting the plurality of points through an input device as entered by a user. A mesh is superimposed on the image at points (e.g., the set of points shown in Figure 6) as determined by the trained process.
[48] Figure 6 also shows the orientation of the x and y coordinates of the points as shown in Figures 5-7.
[49] Figure 7 shows controlling points 706-731 for face alteration in accordance with an embodiment of the invention. (Points 706, 718, and 731 correspond to points 606, 618, and 631, respectively, as shown in Figure 6.) Points 706-731, which correspond to points around the cheeks and chin of the face, are relocated (transformed) for fattening or thinning a face image to a desired degree. With an embodiment of the invention, only a proper subset (points 706-731) of the plurality of points (as shown in Figure 6) are relocated. (With a proper subset, only some, and not all, of the plurality of points are included.) [50] In the following discussion that describes the determination of the deformation vectors for reshaping the face image, index i = 6 to index i = 31 correspond to points 706 to 731, respectively. The determined deformation vectors are added to points 706 to 731 to re-position the points, forming a transformed mesh. A reshaped image is consequently rendered using the transformed mesh.
[51] In accordance with embodiments of the invention, a deformation vector corresponds to a product of four factors:

$\vec{v}_i = \hat{u} \cdot s \cdot w \cdot A$  (EQ. 1)

where A is the weight value factor, s is the scale factor, w is the deformation factor, and $\hat{u}$ is the direction vector. In accordance with an embodiment of the invention:
- Weight value factor [A]: It determines the strength of the thinning and fattening that we want to apply.

A > 0: fattening  (EQ. 2A)
A < 0: thinning  (EQ. 2B)
A = 0: no change  (EQ. 2C)

- Scale factor [s]: It is the value of the width of the face divided by B. One uses this factor to make the vector calculation independent of the size of the head being worked with. The value of B influences how refined the scale of the deformation is; it gives the units to the weight value that is applied externally.
$s = \frac{\text{face width}}{B}$  (EQ. 3)

- Deformation factor [w]: It is calculated differently for different parts of the cheeks and chin. One uses a different equation depending on which part of the face one is processing:

$i \in [6,13]:\quad w_i = \frac{1}{3}\,\frac{|x_i - x_{c1}|}{|x_6 - x_{13}|} + \frac{1}{3}$  (EQ. 4A)

$i \in [14,18]:\quad w_i = \frac{1}{2}\,\frac{|x_i - x_{c2}|}{|x_{13} - x_{18}|} + \frac{1}{2}$  (EQ. 4B)

$i \in [19,23]:\quad w_i = \frac{1}{2}\,\frac{|x_i - x_{c2}|}{|x_{18} - x_{23}|} + \frac{1}{2}$  (EQ. 4C)

$i \in [24,31]:\quad w_i = \frac{1}{3}\,\frac{|x_i - x_{c1}|}{|x_{24} - x_{31}|} + \frac{1}{3}$  (EQ. 4D)

- Direction vector [$\hat{u}$]: It indicates the sense of the deformation. One calculates the direction vector as the ratio between the difference (for each coordinate) between the center and the point, and the absolute distance between the center and the point. One uses two different centers in this process: center C2 (point 653 as shown in Figure 6) for the points belonging to the jaw and center C1 for the points belonging to the cheeks.
$i \in [6,13] \cup [24,31]:\quad \hat{u}_i = \frac{x_i - x_{c1}}{|x_i - x_{c1}|}$  (EQ. 5A)

$i \in [14,23]:\quad \hat{u}_i = \frac{x_i - x_{c2}}{|x_i - x_{c2}|}$  (EQ. 5B)

[52] Neck point coordinates $x_i$ are based on the lower part of the face, where

$i \in [36,45],\; j \in [14,23]:\quad x_i = (x_j,\; y_j + \mathit{neck\_height})$  (EQ. 6)

$\mathit{neck\_height} = y_{18} - y_{0}$  (EQ. 7)

where $y_{18}$ and $y_{0}$ are the y-coordinates of points 618 and 600, respectively, as shown in Figure 6. Referring back to Figure 5, index i = 36 to i = 45 correspond to points 536 to 545, respectively. Index j = 14 to j = 23 correspond to points 714 to 723, respectively (as shown in Figure 7), on the lower part of the face, from which points 536 to 545 on the neck are determined. (In an embodiment of the invention, points 536 to 545 are determined from points 714 to 723 before points 714 to 723 are relocated in accordance with EQs. 1-5.)

[53] The deformation vector ($\vec{v}_i$) applied at points 536 to 545 has two components:
$\vec{v}_i = (0,\; \mathit{d\_neck})$  (EQ. 8)

when $x_i < x_{18}$: $\quad \mathit{d\_neck} = \frac{(x_i - x_{18})^2}{10\,(x_{24} - x_{13})^2}$  (EQ. 9A)

when $x_i \ge x_{18}$: $\quad \mathit{d\_neck} = \frac{(x_i - x_{13})^2}{10\,(x_{24} - x_{13})^2}$  (EQ. 9B)

[54] Figure 8 shows a transformation of points (vertices) on a mesh in accordance with an embodiment of the invention. Points 716-720 are a subset of the vertices shown in Figure 7.
Deformation vectors 856-860 are determined by image transformation module 117 in accordance with EQs. 1-5B. Transformed points (transformed vertices) 816-820 are obtained by transforming points 716-720 with corresponding deformation vectors 856-860.
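As an illustrative sketch of EQs. 1-7 (not the patent's implementation), the deformation and neck-point computations might look like the following. The coordinates, centers, and parameter values below are hypothetical placeholders, and the deformation factor w is passed in directly rather than computed from EQs. 4A-4D:

```python
import math

def deformation_vector(point, center, A, s, w):
    """EQ. 1: v = u * s * w * A, where u is the unit direction vector
    (EQs. 5A-5B) pointing from the center (C1 for cheeks, C2 for the jaw)
    toward the mesh point."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm          # direction vector
    return (ux * s * w * A, uy * s * w * A)

def neck_points(lower_face_points, y18, y0):
    """EQs. 6-7: neck points are the lower-face points shifted down by
    neck_height = y18 - y0 (y-coordinates of points 618 and 600)."""
    neck_height = y18 - y0
    return [(x, y + neck_height) for (x, y) in lower_face_points]

# Hypothetical data: one cheek point, a cheek center C1, and toy parameters.
p = (40.0, 80.0)
c1 = (60.0, 70.0)
s = 10.0   # scale factor: face width divided by B (EQ. 3)
w = 0.5    # deformation factor from the applicable one of EQs. 4A-4D
A = 1.0    # weight value factor: > 0 fattens, < 0 thins, 0 leaves unchanged

vx, vy = deformation_vector(p, c1, A, s, w)
moved = (p[0] + vx, p[1] + vy)             # relocated mesh vertex
neck = neck_points([(40.0, 120.0), (50.0, 125.0)], y18=120.0, y0=60.0)
```

With A = 0 the deformation vanishes (EQ. 2C), and the vector always lies along the line from the chosen center through the vertex, so a positive A pushes the contour outward (fattening) while a negative A pulls it inward (thinning).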
[55] Figure 9 shows a resulting error from transforming points on a mesh in accordance with an embodiment of the invention. (Embodiments of the invention support other criteria for determining an error measure. For example, an error measure can account for the color, texture, pattern, or shape change of the image.) Transformed points (transformed vertices) 816-820 correspond to points that are shown in Figure 8. If an actual image is available, actual vertices 916-920 can be determined from a search function as supported by search module 115. Subsequently, distances (d_i) 956-960 for each vertex pair, consisting of a transformed point and an associated actual point, are obtained. One can determine a square error for the transformed image by:
$\mathit{square\_error} = \sum_i a_i\,(\mathit{actual\_vertex}_i - \mathit{transformed\_vertex}_i)^2$  (EQ. 10)

Each weight $a_i$ is adjusted to reflect the relative importance of the vertex pair. (If a vertex pair is not included when determining the square error, the corresponding weight is set to zero. Thus, some or all of the vertices shown in Figure 7 may be included in the error analysis.) The least square error may be determined by error analysis module 309 (as shown in Figure 3) by adjusting model parameters (e.g., constants in EQs. 1-5B) so as to reduce the square error to a minimum.
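EQ. 10 can be sketched as a small helper; the vertex coordinates and weights below are illustrative only:

```python
def square_error(actual_vertices, transformed_vertices, weights):
    """EQ. 10: weighted sum of squared distances over vertex pairs.
    A weight of zero excludes the corresponding pair from the measure."""
    total = 0.0
    for (ax, ay), (tx, ty), a in zip(actual_vertices, transformed_vertices, weights):
        total += a * ((ax - tx) ** 2 + (ay - ty) ** 2)
    return total

actual = [(10.0, 20.0), (30.0, 40.0), (50.0, 60.0)]
transformed = [(11.0, 20.0), (30.0, 42.0), (50.0, 61.0)]
weights = [1.0, 0.5, 0.0]   # a_i: relative importance; 0 excludes the last pair
err = square_error(actual, transformed, weights)  # 1*1 + 0.5*4 + 0 = 3.0
```

A training loop would then adjust the model parameters (e.g., the constants in EQs. 1-5B) to minimize this error over the training set.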
[56] Figure 10 shows visual results for altering a face image in accordance with an embodiment of the invention. Images 1001 to 1005 correspond to decreasing values of A (e.g., A = +100 and A = +50), i.e., decreasing degrees of fattening.
[57] With an embodiment of the invention, A = +100 corresponds to a maximum degree of fattening and A = -100 corresponds to a maximum degree of thinning. The value of A is selected to provide the desired degree of fattening or thinning. For example, if a patient were afflicted with anorexia, the value of A would be negative, with a magnitude that would depend on the degree of affliction and on the medical history and body type of the patient.
As another example, a patient may be over-eating or may have an unhealthy diet with many empty calories. In such a case, A would have a positive value. A medical practitioner may be able to gauge the value of A based on experience. However, embodiments of the invention may support an automated implementation for determining the value of A. For example, an expert system may incorporate knowledge based on information provided by experienced medical practitioners.
[58] Figure 11 shows additional visual results for altering a face image in accordance with an embodiment of the invention. Images 1101-1105, corresponding to successively more negative values of A, show a continued reduction of fattening (increased thinning). When A = 0 (image 1101), the face is shown as it really appears. With A = -50 (image 1103), the face shows thinning. As A becomes more negative, the effect of thinning increases.
[59] With embodiments of the invention, medical imaging may be processed in order to determine effects of treatment on an organ. For example, suppose a patient is being treated for pancreatitis (inflammation of the pancreas). The doctor prescribes the patient a drug and wants to compare the evolution of the patient's condition with expected results. The doctor uses ultrasound (or MRI) images to view the pancreas. A mesh is also utilized to track the contour of the pancreas to determine how the pancreas evolves. Feedback from the doctor and the evolution of the patient's condition are utilized to improve future predictions.
Moreover, this approach may be extended so that pharmacologists can evaluate the tests of a new drug with the help of experts.
[60] Figure 12 shows flow diagram 1200 for altering a face image in accordance with an embodiment of the invention. In step 1201, points are located on the image of the face and neck in order to form a mesh. Points may be determined by a trained process or may be entered through an input device by a medical practitioner. In step 1203, reshaping parameters (e.g., a weight value factor A) are obtained. The reshaping parameters may be entered by the medical practitioner or may be determined by a process (e.g., an expert system) from information about the person associated with the face image.
[61] In step 1205, deformation vectors are determined and applied to points (e.g., points 706-731 as shown in Figure 7) on the face. For example, as discussed above, EQs. 1-5B are used to determine the relocated points. In step 1207, deformation vectors are determined (e.g., using EQs. 6-9) and applied to points (e.g., points 536-545 as shown in Figure 5) on the neck. A
transformed mesh is generated from which a reshaped image is rendered using computer graphics software in step 1209.
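The steps of flow diagram 1200 can be sketched as a pipeline; every function below is a simplified stand-in (the stub coordinates and the toy deformation are hypothetical, not the patent's implementation):

```python
def locate_points(image):
    # Step 1201: locate mesh points, in practice via a trained search
    # (e.g., AAM) or manual entry by a medical practitioner; stub here.
    return [(50.0, 80.0), (60.0, 120.0), (70.0, 80.0)]

def reshape_mesh(mesh, A):
    # Steps 1205-1207 collapsed into a toy deformation: shift each point
    # in x by an amount proportional to the weight value factor A.
    return [(x + 0.01 * A, y) for (x, y) in mesh]

def render_image(image, mesh):
    # Step 1209: a real renderer would warp the image pixels onto the
    # transformed mesh; here the inputs are simply bundled.
    return {"image": image, "mesh": mesh}

def alter_face_image(image, A):
    mesh = locate_points(image)        # step 1201
    # Step 1203: the reshaping parameter A is entered by a practitioner
    # or produced by a process such as an expert system.
    mesh = reshape_mesh(mesh, A)       # steps 1205-1207
    return render_image(image, mesh)   # step 1209
```

The structure mirrors the flow diagram: each stage consumes the mesh produced by the previous one, so alternative deformation or rendering strategies can be swapped in without changing the pipeline.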
[62] While Figures 5-12 illustrate embodiments of the invention for fattening and thinning a person's face, embodiments of the invention support other types of transformations. For example, not only may vertices of a mesh be transformed to reshape the face, but texture components (e.g., wrinkling of the skin associated with aging) may also be transformed.
Also, hair attributes (e.g., graying and balding) may be included when forming a transformed image by adding artificial synthetic elements. Other image transformations that may be considered are texture, pattern, and color. Moreover, slight perspective changes may be applied to some of the objects in the images (e.g., the face) to rectify the difference between the point of view in which the picture was taken and the point of view in which the transformation model was trained. More than one image may be evaluated at a time if those images give different views of the same face, organ, or object (e.g., one can evaluate the evolution of a face from a frontal and a side perspective).
[63] Figure 13 shows computer system 1 that supports an alteration of a face image in accordance with an embodiment of the invention. Elements of the present invention may be implemented with computer systems, such as the system 1. Computer system 1 includes a central processor 10, a system memory 12 and a system bus 14 that couples various system components including the system memory 12 to the central processor unit 10.
System bus 14 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The structure of system memory 12 is well known to those skilled in the art and may include a basic input/output system (BIOS) stored in a read only memory (ROM) and one or more program modules such as operating systems, application programs and program data stored in random access memory (RAM).
[64] Computer 1 may also include a variety of interface units and drives for reading and writing data. In particular, computer 1 includes a hard disk interface 16 and a removable memory interface 20 respectively coupling a hard disk drive 18 and a removable memory drive 22 to system bus 14. Examples of removable memory drives include magnetic disk drives and optical disk drives. The drives and their associated computer-readable media, such as a floppy disk 24, provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for computer 1. A single hard disk drive 18 and a single removable memory drive 22 are shown for illustration purposes only, with the understanding that computer 1 may include several of such drives. Furthermore, computer 1 may include drives for interfacing with other types of computer-readable media.
[65] A user can interact with computer 1 with a variety of input devices.
Figure 13 shows a serial port interface 26 coupling a keyboard 28 and a pointing device 30 to system bus 14.
Pointing device 30 may be implemented with a mouse, track ball, pen device, or similar device. Of course, one or more other input devices (not shown) such as a joystick, game pad, satellite dish, scanner, touch-sensitive screen or the like may be connected to computer 1.
[66] Computer 1 may include additional interfaces for connecting devices to system bus 14.
Figure 13 shows a universal serial bus (USB) interface 32 coupling a video or digital camera 34 to system bus 14. An IEEE 1394 interface 36 may be used to couple additional devices to computer 1. Furthermore, interface 36 may be configured to operate with particular manufacturer interfaces such as FireWire, developed by Apple Computer, and i.LINK, developed by Sony. Input devices may also be coupled to system bus 14 through a parallel port, a game port, a PCI board or any other interface used to couple an input device to a computer.
[67] Computer 1 also includes a video adapter 40 coupling a display device 42 to system bus 14.
Display device 42 may include a cathode ray tube (CRT), liquid crystal display (LCD), field emission display (FED), plasma display or any other device that produces an image that is viewable by the user. Additional output devices, such as a printing device (not shown), may be connected to computer 1.
[68] Sound can be recorded and reproduced with a microphone 44 and a speaker 46. A sound card 48 may be used to couple microphone 44 and speaker 46 to system bus 14.
One skilled in the art will appreciate that the device connections shown in Figure 13 are for illustration purposes only and that several of the peripheral devices could be coupled to system bus 14 via alternative interfaces. For example, video camera 34 could be connected to interface 36 and pointing device 30 could be connected to USB interface 32.
[69] Computer 1 can operate in a networked environment using logical connections to one or more remote computers or other devices, such as a server, a router, a network personal computer, a peer device or other common network node, a wireless telephone or wireless personal digital assistant. Computer 1 includes a network interface 50 that couples system bus 14 to a local area network (LAN) 52. Networking environments are commonplace in offices, enterprise-wide computer networks and home computer systems.
[70] A wide area network (WAN) 54, such as the Internet, can also be accessed by computer 1.
Figure 13 shows a modem unit 56 connected to serial port interface 26 and to WAN 54.
Modem unit 56 may be located within or external to computer 1 and may be any type of conventional modem such as a cable modem or a satellite modem. LAN 52 may also be used to connect to WAN 54. Figure 13 shows a router 58 that may connect LAN 52 to WAN 54 in a conventional manner.
[71] It will be appreciated that the network connections shown are exemplary and other ways of establishing a communications link between the computers can be used. The existence of any of various well-known protocols, such as TCP/IP, Frame Relay, Ethernet, FTP, HTTP
and the like, is presumed, and computer 1 can be operated in a client-server configuration to permit a user to retrieve web pages from a web-based server. Furthermore, any of various conventional web browsers can be used to display and manipulate data on web pages.
[72] The operation of computer 1 can be controlled by a variety of different program modules.
Examples of program modules are routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. The present invention may also be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, personal digital assistants and the like. Furthermore, the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices
that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
[73] In an embodiment of the invention, central processor unit 10 obtains a face image from digital camera 34. A user may view the face image on display device 42 and enter points (e.g., points 606-631 as shown in Figure 6) to form a mesh that is subsequently altered by central processor 10 as discussed above. The user may identify the points with a pointer device (e.g. mouse 30) that is displayed on display device 42, which overlays the mesh over the face image. With embodiments of the invention, a face image may be stored and retrieved from hard disk drive 18 or removable memory drive 22 or obtained from an external server (not shown) through LAN 52 or WAN 54.
[74] As can be appreciated by one skilled in the art, a computer system (e.g., computer 1 as shown in Figure 13) with an associated computer-readable medium containing instructions for controlling the computer system may be utilized to implement the exemplary embodiments that are disclosed herein. The computer system may include at least one computer such as a microprocessor, a cluster of microprocessors, a mainframe, and networked workstations.
[75] While the invention has been described with respect to specific examples including presently preferred modes of carrying out the invention, those skilled in the art will appreciate that there are numerous variations and permutations of the above described systems and techniques that fall within the scope of the invention as set forth in the appended claims.
Claims (21)
1. A method for processing a submitted image of a person, the method comprising:
(a) receiving the submitted image and associated data, the associated data being indicative of a condition of the person;
(b) obtaining, from a transformation model, a transformation parameter that is associated with a portion of the submitted image;
(c) applying the transformation parameter to the portion of the submitted image;
(d) rendering a transformed image from the transformation parameter;
(e) determining a square error measure that gauges a discrepancy between the transformed image and actual data, the actual data being indicative of an actual image of the person when affected by the condition, by:
(e)(i) measuring a distance for a vertex pair, the vertex pair comprising a transformed vertex and an actual vertex of an associated actual point of the actual data;
(e)(ii) repeating (e)(i) for one or more other vertex pairs to obtain a plurality of distances; and (e)(iii) determining a square error from a weighted sum of the squared plurality of distances; and (f) modifying, based on analyzing the error measure, a model parameter to reconfigure the transformation model.
2. The method of claim 1, the transformation parameter comprising a deformation vector and (b) comprising:
(b)(i) forming a mesh that overlays the portion of the submitted image, the mesh having a plurality of vertices; and (b)(ii) determining the deformation vector from the transformation model.
3. The method of claim 2, (c) comprising:
(c)(i) applying the deformation vector to one of the plurality of vertices to obtain a transformed vertex.
4. The method of claim 3, (d) comprising:
(d)(i) in response to (c)(i), rendering the transformed image from the mesh.
5. The method of claim 1, further comprising:
(g) training the transformation model to reduce the error measure.
6. The method of claim 1, wherein:
the transformation parameter comprises a texture alteration parameter;
(b) comprises determining the texture alteration parameter from the associated data for the person; and (c) comprises applying the texture alteration parameter to the portion of the submitted image.
7. The method of claim 1, wherein:
the transformation parameter comprises a hair descriptive parameter;
(b) comprises determining the hair descriptive parameter from the associated data for the person; and (c) comprises applying the hair descriptive parameter to the portion of the submitted image.
8. The method of claim 1, the portion of the submitted image comprising a face portion of the person.
9. The method of claim 1, wherein:
the transformation parameter comprises a reshaping parameter;
(b) comprises determining the reshaping parameter from the associated data for the person; and (c) comprises applying the reshaping parameter to the portion of the submitted image.
10. The method of claim 1, the portion of the submitted image comprising a torso portion of the person.
11. The method of claim 1, the transformation model including at least one of:
texture characteristics of the submitted image;
pattern characteristics of the submitted image;
color characteristics of the submitted image.
12. The method of claim 1, the submitted image comprising at least one of:
a photographic image;
medical imaging.
13. The method of claim 1, further comprising:
(g) receiving feedback from a user to modify the model parameter; and (h) repeating (g) until the error measure is not greater than a desired error amount.
14. A computer-readable medium having computer-executable instructions to perform the method of claims 1-13.
15. An apparatus for processing a submitted image of a person, comprising:
a database for receiving the submitted image and associated data, the associated data being indicative of a condition of the person;
a transformation control module configured to obtain, from a transformation model, a transformation parameter that is associated with a portion of the submitted image;
an image transformation module configured to apply the transformation parameter to the portion of the submitted image and render a transformed image from the transformation parameter;
an error analysis module configured to determine a square error measure that gauges a discrepancy between the transformed image and actual data, the actual data being indicative of an actual image of the person when affected by the condition, by:
(e)(i) measuring a distance for a vertex pair, the vertex pair comprising a transformed vertex and an actual vertex of an associated actual point of the actual data;
(e)(ii) repeating (e)(i) for one or more other vertex pairs to obtain a plurality of distances; and (e)(iii) determining a square error from a weighted sum of the squared plurality of distances; and wherein the error analysis module is further configured to modify, based on analyzing the error measure, a model parameter to reconfigure the transformation model that is utilized by the transformation control module.
16. The apparatus of claim 15, further comprising a search module configured to match an image model to a portion of the submitted image to obtain modeled data for the submitted image.
17. The apparatus of claim 15, further comprising:
a training module for configuring the transformation model to reduce the error measure.
18. The apparatus of claim 16, wherein:
the transformation parameter comprises a deformation vector;
the search module is configured to form a mesh that overlays the portion of the submitted image, the mesh having a plurality of vertices; and the transformation control module is configured to determine the deformation vector from the transformation model.
19. The apparatus of claim 18, wherein:
the image transformation module is configured to apply the deformation vector to one of the plurality of vertices to obtain a transformed vertex.
20. The apparatus of claim 15, further comprising:
a statistical analysis module configured to associate the person to a cluster based on global data, the global data being determined from the associated data for the person;
and the transformation control module configured to determine the transformation parameter by utilizing cluster information.
21. The apparatus of claim 16, further comprising:
a data structure configured to store image model information and to provide the image model information to the search module.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/671,751 US7792379B2 (en) | 2007-02-06 | 2007-02-06 | Transforming a submitted image of a person based on a condition of the person |
US11/671,751 | 2007-02-06 |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2619281A1 CA2619281A1 (en) | 2008-08-06 |
CA2619281C true CA2619281C (en) | 2015-12-22 |
Family
ID=39401029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2619281A Active CA2619281C (en) | 2007-02-06 | 2008-01-25 | Transforming a submitted image of a person based on a condition of the person |
Country Status (3)
Country | Link |
---|---|
US (2) | US7792379B2 (en) |
EP (1) | EP1956549B1 (en) |
CA (1) | CA2619281C (en) |
Families Citing this family (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8200506B2 (en) * | 2006-12-19 | 2012-06-12 | Accenture Global Services Limited | Integrated health management platform |
US8675951B2 (en) * | 2007-05-11 | 2014-03-18 | Three Pixels Wide Pty Ltd. | Method and system for generating a 3D model |
JP4930433B2 (en) * | 2008-04-01 | 2012-05-16 | Seiko Epson Corporation | Image processing apparatus, image processing method, and image processing program |
AU2008358849A1 (en) * | 2008-06-30 | 2010-01-07 | Accenture Global Services Limited | Gaming system |
US8704832B2 (en) | 2008-09-20 | 2014-04-22 | Mixamo, Inc. | Interactive design, synthesis and delivery of 3D character motion data through the web |
US8159504B2 (en) * | 2008-10-16 | 2012-04-17 | At&T Intellectual Property I, L.P. | System and method for presenting an avatar |
US8982122B2 (en) | 2008-11-24 | 2015-03-17 | Mixamo, Inc. | Real time concurrent design of shape, texture, and motion for 3D character animation |
US8659596B2 (en) | 2008-11-24 | 2014-02-25 | Mixamo, Inc. | Real time generation of animation-ready 3D character models |
US20100259547A1 (en) | 2009-02-12 | 2010-10-14 | Mixamo, Inc. | Web platform for interactive design, synthesis and delivery of 3d character motion data |
US8928672B2 (en) | 2010-04-28 | 2015-01-06 | Mixamo, Inc. | Real-time automatic concatenation of 3D animation sequences |
US9355329B2 (en) * | 2011-05-09 | 2016-05-31 | Catherine G. McVey | Image analysis for determining characteristics of pairs of individuals |
US9098898B2 (en) * | 2011-05-09 | 2015-08-04 | Catherine Grace McVey | Image analysis for determining characteristics of individuals |
US9552637B2 (en) | 2011-05-09 | 2017-01-24 | Catherine G. McVey | Image analysis for determining characteristics of groups of individuals |
CA2872841C (en) | 2011-05-09 | 2019-08-06 | Catherine Grace Mcvey | Image analysis for determining characteristics of animals and humans |
US10049482B2 (en) | 2011-07-22 | 2018-08-14 | Adobe Systems Incorporated | Systems and methods for animation recommendations |
US10748325B2 (en) | 2011-11-17 | 2020-08-18 | Adobe Inc. | System and method for automatic rigging of three dimensional characters for facial animation |
US20130215113A1 (en) * | 2012-02-21 | 2013-08-22 | Mixamo, Inc. | Systems and methods for animating the faces of 3d characters using images of human faces |
US9747495B2 (en) | 2012-03-06 | 2017-08-29 | Adobe Systems Incorporated | Systems and methods for creating and distributing modifiable animated video messages |
JP2013188341A (en) * | 2012-03-14 | 2013-09-26 | Sony Corp | Image processing device, image processing method, and program |
FR2996658B1 (en) * | 2012-10-10 | 2015-12-25 | Biomouv | METHOD AND DEVICE FOR SIMULATION OF EVOLUTION OF THE MORPHOLOGY OF A USER |
US9614724B2 (en) | 2014-04-21 | 2017-04-04 | Microsoft Technology Licensing, Llc | Session-based device configuration |
US9639742B2 (en) | 2014-04-28 | 2017-05-02 | Microsoft Technology Licensing, Llc | Creation of representative content based on facial analysis |
US9773156B2 (en) | 2014-04-29 | 2017-09-26 | Microsoft Technology Licensing, Llc | Grouping and ranking images based on facial recognition data |
US9430667B2 (en) | 2014-05-12 | 2016-08-30 | Microsoft Technology Licensing, Llc | Managed wireless distribution network |
US9384335B2 (en) | 2014-05-12 | 2016-07-05 | Microsoft Technology Licensing, Llc | Content delivery prioritization in managed wireless distribution networks |
US9384334B2 (en) | 2014-05-12 | 2016-07-05 | Microsoft Technology Licensing, Llc | Content discovery in managed wireless distribution networks |
US10111099B2 (en) | 2014-05-12 | 2018-10-23 | Microsoft Technology Licensing, Llc | Distributing content in managed wireless distribution networks |
US9874914B2 (en) | 2014-05-19 | 2018-01-23 | Microsoft Technology Licensing, Llc | Power management contracts for accessory devices |
US10037202B2 (en) | 2014-06-03 | 2018-07-31 | Microsoft Technology Licensing, Llc | Techniques to isolating a portion of an online computing service |
US9367490B2 (en) | 2014-06-13 | 2016-06-14 | Microsoft Technology Licensing, Llc | Reversible connector for accessory devices |
US9460493B2 (en) | 2014-06-14 | 2016-10-04 | Microsoft Technology Licensing, Llc | Automatic video quality enhancement with temporal smoothing and user override |
US9373179B2 (en) | 2014-06-23 | 2016-06-21 | Microsoft Technology Licensing, Llc | Saliency-preserving distinctive low-footprint photograph aging effect |
US9717006B2 (en) | 2014-06-23 | 2017-07-25 | Microsoft Technology Licensing, Llc | Device quarantine in a wireless network |
EP3433818A1 (en) | 2016-03-21 | 2019-01-30 | The Procter and Gamble Company | Systems and methods for providing customized product recommendations |
US10559111B2 (en) | 2016-06-23 | 2020-02-11 | LoomAi, Inc. | Systems and methods for generating computer ready animation models of a human head from captured data images |
US10062198B2 (en) | 2016-06-23 | 2018-08-28 | LoomAi, Inc. | Systems and methods for generating computer ready animation models of a human head from captured data images |
CN108153501B (en) * | 2016-12-06 | 2023-10-31 | 依视路国际集团(光学总公司) | Image processing method and system |
US10614623B2 (en) | 2017-03-21 | 2020-04-07 | Canfield Scientific, Incorporated | Methods and apparatuses for age appearance simulation |
US10621771B2 (en) | 2017-03-21 | 2020-04-14 | The Procter & Gamble Company | Methods for age appearance simulation |
WO2018222812A1 (en) | 2017-05-31 | 2018-12-06 | The Procter & Gamble Company | System and method for guiding a user to take a selfie |
JP6849825B2 (en) | 2017-05-31 | The Procter & Gamble Company | Systems and methods for determining apparent skin age |
US10198845B1 (en) | 2018-05-29 | 2019-02-05 | LoomAi, Inc. | Methods and systems for animating facial expressions |
US11551393B2 (en) | 2019-07-23 | 2023-01-10 | LoomAi, Inc. | Systems and methods for animation generation |
JP2022512262A (en) | 2019-11-21 | 2022-02-03 | Beijing SenseTime Technology Development Co., Ltd. | Image processing methods and equipment, image processing equipment and storage media |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4276570A (en) * | 1979-05-08 | 1981-06-30 | Nancy Burson | Method and apparatus for producing an image of a person's face at a different age |
CA2087523C (en) * | 1990-07-17 | 1997-04-15 | Mark Andrew Shackleton | Method of processing an image |
US6006126A (en) * | 1991-01-28 | 1999-12-21 | Cosman; Eric R. | System and method for stereotactic registration of image scan data |
US6400996B1 (en) * | 1999-02-01 | 2002-06-04 | Steven M. Hoffberg | Adaptive pattern recognition based control system and method |
US7242988B1 (en) * | 1991-12-23 | 2007-07-10 | Linda Irene Hoffberg | Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore |
US7006881B1 (en) * | 1991-12-23 | 2006-02-28 | Steven Hoffberg | Media recording device with remote graphic user interface |
US6850252B1 (en) * | 1999-10-05 | 2005-02-01 | Steven M. Hoffberg | Intelligent electronic appliance system and method |
US6430430B1 (en) * | 1999-04-29 | 2002-08-06 | University Of South Florida | Method and system for knowledge guided hyperintensity detection and volumetric measurement |
US6792418B1 (en) * | 2000-03-29 | 2004-09-14 | International Business Machines Corporation | File or database manager systems based on a fractal hierarchical index structure |
CA2425089A1 (en) * | 2000-09-14 | 2002-03-21 | Philipp Lang | Assessing condition of a joint and cartilage loss |
US7062093B2 (en) * | 2000-09-27 | 2006-06-13 | MVTec Software GmbH | System and method for object recognition |
EP1255368A1 (en) * | 2001-04-30 | 2002-11-06 | Siemens Information and Communication Networks S.p.A. | Method to perform link adaptation in enhanced cellular communication systems with several modulation and coding schemes |
FR2831698A1 (en) * | 2001-10-30 | 2003-05-02 | Koninkl Philips Electronics Nv | MEDICAL IMAGING STATION WITH TRAJECTORY EXTRACTION FUNCTION WITHIN A BRANCHED OBJECT |
AU2003214595A1 (en) * | 2002-03-07 | 2003-09-16 | Yechezkal Evan Spero | Enhanced vision for driving |
US6819790B2 (en) * | 2002-04-12 | 2004-11-16 | The University Of Chicago | Massive training artificial neural network (MTANN) for detecting abnormalities in medical images |
US20030234871A1 (en) * | 2002-06-25 | 2003-12-25 | Squilla John R. | Apparatus and method of modifying a portrait image |
US7027054B1 (en) * | 2002-08-14 | 2006-04-11 | Avaworks, Incorporated | Do-it-yourself photo realistic talking head creation system and method |
US6919892B1 (en) * | 2002-08-14 | 2005-07-19 | Avaworks, Incorporated | Photo realistic talking head creation system and method |
US7309315B2 (en) * | 2002-09-06 | 2007-12-18 | Epoch Innovations, Ltd. | Apparatus, method and computer program product to facilitate ordinary visual perception via an early perceptual-motor extraction of relational information from a light stimuli array to trigger an overall visual-sensory motor integration in a subject |
AU2003277348A1 (en) * | 2003-10-10 | 2005-05-26 | Russ Edward Brucks | System and method for assessment of health risks and visualization of weight loss and muscle gain |
US7213766B2 (en) * | 2003-11-17 | 2007-05-08 | Dpd Patent Trust Ltd | Multi-interface compact personal token apparatus and methods of use |
US7374536B1 (en) * | 2004-04-16 | 2008-05-20 | Taylor Colin R | Method for analysis of pain images |
US7124041B1 (en) * | 2004-09-27 | 2006-10-17 | Siemens Energy & Automation, Inc. | Systems, methods, and devices for detecting circuit faults |
WO2006058099A1 (en) * | 2004-11-23 | 2006-06-01 | Eastman Kodak Company | Automated radiograph classification using anatomy information |
US7854705B2 (en) * | 2004-12-16 | 2010-12-21 | Olga Pawluczyk | Ex vivo verification of biopsy tissue samples |
US7358502B1 (en) * | 2005-05-06 | 2008-04-15 | David Appleby | Devices, systems, and methods for imaging |
- 2007-02-06: US application US11/671,751 filed; issued as US7792379B2 (active)
- 2008-01-25: CA application CA2619281A filed; issued as CA2619281C (active)
- 2008-02-05: EP application EP08250418.4A filed; granted as EP1956549B1 (active)
- 2010-08-05: US application US12/851,076 filed; issued as US8014589B2 (active)
Also Published As
Publication number | Publication date |
---|---|
US20080187246A1 (en) | 2008-08-07 |
CA2619281A1 (en) | 2008-08-06 |
US7792379B2 (en) | 2010-09-07 |
EP1956549A2 (en) | 2008-08-13 |
US20110064331A1 (en) | 2011-03-17 |
US8014589B2 (en) | 2011-09-06 |
EP1956549B1 (en) | 2014-04-02 |
EP1956549A3 (en) | 2012-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2619281C (en) | Transforming a submitted image of a person based on a condition of the person | |
Mollemans et al. | Predicting soft tissue deformations for a maxillofacial surgery planning system: from computational strategies to a complete clinical validation | |
Koch et al. | Simulating facial surgery using finite element models | |
DeCarlo et al. | An anthropometric face model using variational techniques | |
KR20210021039A (en) | Image processing methods, devices, electronic devices and computer-readable storage media | |
JP4717427B2 (en) | Operation method and control apparatus of magnetic resonance tomography apparatus | |
US8013852B2 (en) | Anthropometry-based skeleton fitting | |
Duan et al. | 3D face reconstruction from skull by regression modeling in shape parameter spaces | |
Seo et al. | Synthesizing animatable body models with parameterized shape modifications | |
JP2022517769A (en) | 3D target detection and model training methods, equipment, equipment, storage media and computer programs | |
US20080174795A1 (en) | Reshaping an image to thin or fatten a face | |
US10770175B2 (en) | System and method for segmentation and visualization of medical image data | |
US20220375621A1 (en) | Digital twin | |
CN111096835A (en) | Orthosis design method and system | |
JP2020074876A (en) | Vertebra posture estimation device | |
Danckaers et al. | Posture normalisation of 3D body scans | |
CN112837427A (en) | Processing method, device and system of variable human body model and storage medium | |
CN110148209B (en) | Human body model generation method, image processing device and device with storage function | |
US8180175B2 (en) | Reshaping a camera image | |
JP2004102359A (en) | Image processing device, method and program | |
Komaritzan et al. | Inside humans: Creating a simple layered anatomical model from human surface scans | |
US20220198696A1 (en) | System for determining body measurement from images | |
Cheng et al. | Facial morphology prediction after complete denture restoration based on principal component analysis | |
Danckaers et al. | Adaptable digital human models from 3D body scans | |
CN114092634A (en) | Channel and acupoint positioning method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request |