CA2284348C - A method of creating 3 d facial models starting from face images - Google Patents

A method of creating 3 d facial models starting from face images

Info

Publication number
CA2284348C
Authority
CA
Canada
Prior art keywords
model
face
texture
face image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA002284348A
Other languages
French (fr)
Other versions
CA2284348A1 (en)
Inventor
Gianluca Francini
Mauro Quaglia
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telecom Italia SpA
Original Assignee
Telecom Italia Lab SpA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telecom Italia Lab SpA filed Critical Telecom Italia Lab SpA
Publication of CA2284348A1 publication Critical patent/CA2284348A1/en
Application granted granted Critical
Publication of CA2284348C publication Critical patent/CA2284348C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation

Abstract

The method allows the creation of 3-D facial models, which can be used, for instance, for the implementation of avatars, for video-communication applications, video games, video productions, and for the creation of advanced man-machine interfaces.
At least one image of a human face is provided together with a 3D facial model (M) having a vertex structure and comprising a number of surfaces chosen within the set formed by a face surface (V), surfaces of the right eye (OD) and left eye (OS), respectively, and surfaces of the upper teeth (DS) and lower teeth (DI), respectively.
Among the vertices of the structure of the model (M) and on such at least one face image, respective sets of homologous points are chosen. The model structure (M) is then modified in such a way that the above respective sets of homologous points are made to coincide.

Description

This invention concerns the technique for the creation of 3-D facial models, which can be used for instance for the implementation of so-called avatars (anthropomorphous models) to be used in virtual environments, video-communication applications, video games, TV productions, and creation of advanced man-machine interfaces.
There are already some known technical solutions for the creation of a 3D model starting from the photograph of a person's face.
The main drawback of such known embodiments is that the structure of the generated model does not allow a subsequent animation. This is due to the fact that the model (usually generated as a "wire frame" model, i.e. starting from a mesh structure, as will also be seen in the sequel) cannot exactly fit the profile in the mouth region, thus preventing reproduction of lip movements. This also applies to other significant parts of the face, such as eyes and nose.
This invention aims at providing a method which allows the creation of facial models that can appear realistic both in static conditions and in animation conditions, in particular for instance as far as the opening and closing of eyelids and the possibility of simulating eye rotation are concerned.
According to the invention, this aim is attained through a method having the characteristics specifically mentioned in the appended claims.
Substantially, the method according to the invention is based on the adaptation of a basic model of a face - typically a human face - to the physiognomy characteristics of the photographed person. The basic model (or "template") is represented by a structure, preferably of the type called "wire frame", formed by a plurality of surfaces chosen out of a set of five surfaces, namely:
- the face,
- the right eye and the left eye, and
- the upper teeth and the lower teeth.
The eye surfaces are separated from those of the face so as to allow, among other things, the creation of the opening and closing movements of the eyelids, and a slight translation simulating the actual eye rotation. Similarly, it is possible to perform the animation of the model, as far as speech is concerned, through the animation of the surfaces representing the upper and lower teeth.
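Purely by way of illustration, such a five-surface wire frame could be held in a data layout like the following minimal sketch; the names and the NumPy-based representation are assumptions of this example, not part of the patent text:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Surface:
    """One wire-frame surface: an N x 3 array of vertex co-ordinates and
    an M x 3 array of triangles given as indices into the vertex array."""
    vertices: np.ndarray
    triangles: np.ndarray

@dataclass
class FaceModel:
    """Basic model M: five separate surfaces, kept distinct so that the
    eyelids and the dental arches can be animated independently."""
    face: Surface         # V
    right_eye: Surface    # OD
    left_eye: Surface     # OS
    upper_teeth: Surface  # DS
    lower_teeth: Surface  # DI
```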
The invention will now be described by way of a non-limiting example, with reference to the drawings attached hereto, in which:
- Figures 1A and 1B represent the typical look of the models used in the embodiment of the invention, in the wire frame mode (Figure 1A) and in the solid mode (Figure 1B), respectively,
- Figure 2 represents the same model as shown in Figure 1 in rear view, also in this case both in the wire frame mode (Figure 2A) and in the solid mode (Figure 2B),
- Figures 3A to 3I represent a set of tables which identify the feature points of a face according to the present state of the MPEG-4 standard, which can be used for the embodiment of the invention,
- Figure 4 schematically shows one of the phases of the method according to the invention,
- Figure 5 schematically shows another phase of the method according to the invention,
- Figure 6 depicts, in three parts denoted by 6A, 6B and 6C respectively, the evolution of the model within a method according to the invention,
- Figure 7, which also comprises three parts, represents in part 7A a photograph highlighting the feature points used for the calibration in a possible embodiment of the method according to the invention, and in parts 7B and 7C two views of the resulting model, complete with texture,
- Figure 8 depicts, in the form of a block diagram, the structure of a system which can be used for carrying out the invention,
- Figure 9 is a flow chart concerning a possible embodiment of the method according to the invention,
- Figures 10 and 11 exemplify the application of a so-called texture within the present invention.
Figures 1 and 2 show a basic model M of a human face, which can be used in a possible embodiment of the invention. Model M is here represented both in the wire frame mode and in the solid mode; the latter differs from the wire frame essentially by the background painting of the triangles of the wire frame. The model M represented here is formed by five surfaces, namely:
- the face V, formed - in the embodiment illustrated herein - by 360 vertices and triangles,
- the right eye OD and the left eye OS, each consisting of 26 vertices and 37 triangles,
- the upper teeth DS and the lower teeth DI, each consisting of 70 vertices and 42 triangles.
It will be appreciated in particular that model M is a hollow structure, which may practically be assimilated to a sort of mask, the shape of which is designed to reproduce the features of the modelled face. Of course, though corresponding to an embodiment of the invention preferred at present, the number of vertices and triangles to which reference has previously been made has a merely exemplary character and must in no case be regarded as a limitation of the scope of the invention.
These considerations also apply to the choice of using five different surfaces to implement the basic model. As a matter of fact, the number of such surfaces might be smaller (for the implementation of simpler models) or larger (for the implementation of more detailed and sophisticated models), depending on the application requirements. The important feature is the choice of using, as the basic model, a model comprising a plurality of surfaces, and in particular surfaces that, depending on the type of face to be modelled (for instance a human face), correspond to shapes which are substantially known in general terms and have a relative arrangement which, as a whole, is also already known.
As a matter of fact, although the typology of the human face is practically infinite, it is known that the surface of the face has a general bowl-like look, that the eyelids generally have just an "eyelid" surface, which is at least marginally convex, and that the dental arches have an arc shape, etc. It is also known that the eyelids are located in the medium-upper region of the face surface, whereas the teeth surfaces are located in the lower region.
Furthermore, the fact of using distinct surfaces for the creation of the model allows applying separation conditions to the model, such as those which make it possible to avoid, for instance, the interference of the teeth surfaces, so as to accurately model the congruency effect of the dental arches.
This characteristic may be even better appreciated in the rear views of Figure 2.
The method according to the invention is substantially based on the solution of:
- taking an image (typically a front photograph) of the face to be modelled, and
- modifying the model or template through a series of geometric transformations so that its projection coincides with a set of points identified on the photograph assumed as the starting image.
For this adaptation, use is made of respective sets of points which have been chosen in correspondence with as many so-called "feature points": such points are defined in the section "Face and body animation" of the ISO/IEC standard 14496 (MPEG-4) and are represented in Figures 3A to 3H.
In particular, in an embodiment of the invention preferred at present, the method according to the invention is implemented by using the feature points identified in the MPEG-4 standard (as defined at the filing date of this invention) by the following indexes: 11.4, 2.1, 10.9, 10.10, 8.4, 8.1, 8.3, 8.2, 2.2, 2.3, 9.3, 9.2, 9.1, 4.1, 3.12, 3.8, 3.10, 3.14, 3.11, 3.13, 3.7, and 3.9. Each of such indexes corresponds to a vertex of the model structure.
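For illustration only, such a correspondence can be kept as a lookup table from MPEG-4 feature-point index to vertex number; the vertex numbers below are hypothetical placeholders, since the patent only states that each index corresponds to one vertex of the structure:

```python
# MPEG-4 feature-point index -> vertex number of the model structure.
# The vertex numbers here are placeholders, not values from the patent.
FEATURE_POINT_TO_VERTEX = {
    "11.4": 0,    # top of head
    "2.1":  1,    # tip of chin
    "10.9": 2,    # right end of face
    "10.10": 3,   # left end of face
    # ... the remaining indexes (8.4, 8.1, 8.3, 8.2, 2.2, 2.3, 9.3, 9.2,
    # 9.1, 4.1, 3.12, 3.8, 3.10, 3.14, 3.11, 3.13, 3.7, 3.9) map likewise.
}
```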
Figure 4 summarises the method according to the invention, as it can be performed through the system shown in Figure 8.
Such a system, denoted by 1 as a whole, includes a pick-up unit 2, for instance a digital camera or a functionally equivalent unit, such as a conventional camera capable of producing photographs which, after development and printing, may be subjected to a scanning process. Starting from a subject L, unit 2 can therefore generate a plane image I of the face to be modelled: this image is in practice an image of the type shown in Figure 7A.
The image I so obtained is in the form of a digitised image, i.e. a sequence of data that represents, pixel by pixel, the information (brightness, chromatic characteristics, etc.) relating to the image.
Such a sequence of data is provided to a processing system 3 (essentially a computer) which performs - according to principles well known to a specialist, once the criteria of the embodiment of the invention described in detail in the following have been set forth - the operations listed below:
- identification and extraction of the feature points of image I, designed to be used for processing model M,
- reading, from a memory or a similar support 4 associated to the processor, of the data corresponding to the starting model, which data have been previously stored and are read also in this case according to well known modalities,
- execution of the processing operations typical of the method according to the invention, as better described in the sequel, and
- generation of the processed output model, also in this case in the form of digital data representative of the 3-D model; such data can be transferred to and loaded into another processing system (for instance an animation system) and/or downloaded into a storage support 5 (floppy disc, CD-ROM, etc.) for subsequent use.
The operation of adaptation of the starting model M, previously described, to image I is based on a virtual optical projection of model M and image I, respectively, performed in a system the focus of which lies in the origin O of a three-dimensional Cartesian space X, Y, Z, in which model M is placed in the positive half-space along the Z axis and image I is placed in the negative half-space (see the diagram of Figure 4).
It will be appreciated that the fine adaptation of model M to image I is based on the assumption that model M is on the whole oriented, with regard to the plane XY of the above-described system, in a generally mirror-like position with regard to image I. Hence, model M is placed with a front orientation if adaptation to a front image I is required; model M will instead be laterally oriented, for instance, if adaptation to a side image of the head of the person represented in image I is required.
This also substantially applies to the distance α between origin O and the centre of model M, and to the distance λ between origin O and the plane of image I.
To simplify the calibration process and avoid the introduction of unknown values by the user, at least distance α is set to a value determined in advance (for instance 170 cm) by calculating the average of a set of possible cases. It must however be considered that the value of α depends not only on the distance of the subject from camera 2 at the time when image I was taken, but also on the parameters of the camera itself.
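A minimal sketch of this virtual projection, assuming a pinhole camera in O so that a vertex (x, y, z) projects to (λx/z, λy/z); the patent does not spell the projection out in this form, but the calibration relations given below are consistent with it:

```python
import numpy as np

ALPHA = 170.0  # example distance origin -> model centre, in cm (value quoted above)

def project(vertices: np.ndarray, lam: float) -> np.ndarray:
    """Project N x 3 model vertices, placed in the positive half-space
    along Z, onto the image plane at distance lam from the origin O."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    return np.stack([lam * x / z, lam * y / z], axis=1)
```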
Substantially, the method according to the invention consists of a series of geometric transformations aimed at making the projection of the set of feature points of interest of model M coincide with the homologous set of points identified on image I.

Let then (x_{i.j}, y_{i.j}, z_{i.j}) be the space co-ordinates of the vertex of model M associated to feature point i.j (for instance, the left end of the face), and (X_{i.j}, Y_{i.j}) the co-ordinates, in image I, of the same feature point (referred to a local system on the plane of image I, the origin of which coincides, in a possible embodiment, with the upper corner of the image).
After starting the process (step 100 in the flow chart of Figure 9), the first operational step (101 in Figure 9) is the computation of the value of λ.
Let X₀, Y₀ be the co-ordinates of the centre of the face as taken in image I. These co-ordinates are obtained by exploiting the four points placed at the ends of the face (for instance, with reference to the present release of the MPEG-4 standard, points 10.9 and 10.10, right end and left end of the face, and points 11.4 and 2.1, top of head and tip of chin). The following relation will then apply:

$$X_0 = \frac{X_{10.9} + X_{10.10}}{2}, \qquad Y_0 = \frac{Y_{11.4} + Y_{2.1}}{2} \tag{I}$$

Distance λ is computed in such a way as to make the width of the projection of the model coincide with the width of the face in the photograph, according to the following relation:

$$\lambda = \frac{(X_{10.9} - X_0)\, z_{10.9}}{x_{10.9}} \tag{II}$$
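A sketch of this first calibration step (relations (I) and (II)); by the convention used here, uppercase co-ordinates are measured on image I and lowercase ones on model M, and the exact form of (II) is reconstructed from the projection geometry:

```python
def face_centre(X_109: float, X_1010: float, Y_114: float, Y_21: float):
    """Relation (I): centre (X0, Y0) of the face in image I, averaged
    from the four extreme feature points (10.9, 10.10, 11.4, 2.1)."""
    return (X_109 + X_1010) / 2.0, (Y_114 + Y_21) / 2.0

def compute_lambda(X_109: float, X0: float, x_109: float, z_109: float) -> float:
    """Relation (II): choose lambda so that the projected model width
    matches the face width measured in the photograph."""
    return (X_109 - X0) * z_109 / x_109
```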
Subsequently (step 102) the position of model M along the Y axis is modified so that its projection is vertically in register with the contents of image I.
A value Δy, computed according to the relation:

$$\Delta y = \frac{z_{2.1}\,(Y_{11.4} - Y_{2.1})}{z_{11.4} + z_{2.1}} - Y_{2.1} \tag{III}$$

is added to each vertex.
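A sketch of this vertical registration (relation (III)), together with the vertical scaling of the following step, whose coefficient c is given by relation (IV) below; applying both operations to the Y co-ordinates only follows the text:

```python
def vertical_offset(Y_114: float, Y_21: float, z_114: float, z_21: float) -> float:
    """Relation (III): offset added to every vertex to bring the
    model vertically into register with image I."""
    return z_21 * (Y_114 - Y_21) / (z_114 + z_21) - Y_21

def register_and_scale(vertices, dy: float, c: float):
    """Step 102: add dy to each Y co-ordinate of the N x 3 vertex array;
    step 103: multiply each Y co-ordinate by the coefficient c of (IV)."""
    out = vertices.copy()
    out[:, 1] = (out[:, 1] + dy) * c
    return out
```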
In this way the model is brought vertically into register with image I; after the subsequent scaling operation, the size of its projection coincides with the area of the head reproduced in image I.
In a subsequent step 103, each co-ordinate Y of the vertices of model M is multiplied by a coefficient c computed as follows:
$$c = \frac{z_{2.1}\,(Y_{2.1} - Y_0)}{\lambda\, y_{2.1}} \tag{IV}$$

At this point (step 104) a global transformation is performed on the model in the vertical direction, in order to make the position of some characteristic features of the face (for instance, the eyebrows) coincide with those of the person. The model is substantially altered along the Y axis, as shown in Figure 5.
Preferably, the global transformation is a non-linear transformation, preferably of second order, and most preferably it is based on a parabolic law, in particular of the type corresponding to a generic parabola (y = az² + bz + c) passing through the following three points of the plane YZ:

$$\left(\frac{(Y_{11.4} - Y_0)\, z_{11.4}}{\lambda},\; z_{11.4}\right), \qquad \left(\frac{(Y_{4.1} - Y_0)\, z_{4.1}}{\lambda},\; z_{4.1}\right), \qquad \left(\frac{(Y_{2.1} - Y_0)\, z_{2.1}}{\lambda},\; z_{2.1}\right)$$

In particular, in Figure 5 the model shown in a recumbent position, i.e. in a horizontal direction, corresponds to the model before the transformation according to the parabolic function previously described, whereas the model shown in a vertical position is the result of said transformation.
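A sketch of this global correction (step 104): the parabola is fitted through the three (y, z) points above and then, as an assumption of this example, evaluated at each vertex's own Z to displace its Y co-ordinate:

```python
import numpy as np

def parabolic_adjust(vertices: np.ndarray, pts_yz) -> np.ndarray:
    """Fit y = a*z**2 + b*z + c through three (y, z) points and shift
    each vertex's Y by the parabola evaluated at its Z (assumed)."""
    ys, zs = zip(*pts_yz)
    a, b, c = np.polyfit(zs, ys, 2)  # coefficients, highest power first
    out = vertices.copy()
    out[:, 1] += a * out[:, 2] ** 2 + b * out[:, 2] + c
    return out
```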
Thereafter (step 105, with an essentially cyclic structure defined by a choice step 106, which finds out whether the sequence can be considered complete), a series of transformations (translations, scalings and affine transforms) designed to correctly position the individual features characteristic of the face is performed.
Preferably the operations involved are the following:
- the eyelids and the contour of the eyes are adapted by means of two translations and four affine transforms;
- the nose is first vertically adapted through a scaling and then deformed through two affine transforms;
- the mouth is modified by applying four affine transforms;
- the region between the base of the nose and the upper end of the mouth is translated and scaled; and
- the region between the lower end of the mouth and the tip of the chin is translated and scaled.
Preferably the adopted affine transforms correspond to a transform that may be set out according to a relation of the type:

$$x' = c_1 x + c_2 y + c_3, \qquad y' = c_4 x + c_5 y + c_6$$

where

$$c_1 = \frac{(x_1' - x_3')(y_1 - y_2) - (x_1' - x_2')(y_1 - y_3)}{(y_1 - y_2)(x_1 - x_3) - (y_1 - y_3)(x_1 - x_2)}$$

$$c_2 = \frac{(x_1' - x_2')(x_1 - x_3) - (x_1' - x_3')(x_1 - x_2)}{(y_1 - y_2)(x_1 - x_3) - (y_1 - y_3)(x_1 - x_2)}$$

$$c_3 = x_1' - c_1 x_1 - c_2 y_1$$

$$c_4 = \frac{(y_1' - y_3')(y_1 - y_2) - (y_1' - y_2')(y_1 - y_3)}{(y_1 - y_2)(x_1 - x_3) - (y_1 - y_3)(x_1 - x_2)}$$

$$c_5 = \frac{(y_1' - y_2')(x_1 - x_3) - (y_1' - y_3')(x_1 - x_2)}{(y_1 - y_2)(x_1 - x_3) - (y_1 - y_3)(x_1 - x_2)}$$

$$c_6 = y_1' - c_4 x_1 - c_5 y_1$$

The described formulas express a planar transformation driven by the displacement of three points:
- $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$ are the co-ordinates of such points before the transformation,
- $(x_1', y_1')$, $(x_2', y_2')$, $(x_3', y_3')$ are the corresponding co-ordinates after the transformation.
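The formulas above transcribe directly into code; this sketch computes c₁...c₆ from three point correspondences (the same coefficients could equally be obtained by solving the corresponding 3x3 linear system):

```python
def affine_from_three_points(src, dst):
    """Coefficients c1..c6 of x' = c1*x + c2*y + c3, y' = c4*x + c5*y + c6,
    from three correspondences src -> dst, per the formulas above."""
    (x1, y1), (x2, y2), (x3, y3) = src
    (X1, Y1), (X2, Y2), (X3, Y3) = dst  # the primed co-ordinates
    den = (y1 - y2) * (x1 - x3) - (y1 - y3) * (x1 - x2)
    c1 = ((X1 - X3) * (y1 - y2) - (X1 - X2) * (y1 - y3)) / den
    c2 = ((X1 - X2) * (x1 - x3) - (X1 - X3) * (x1 - x2)) / den
    c3 = X1 - c1 * x1 - c2 * y1
    c4 = ((Y1 - Y3) * (y1 - y2) - (Y1 - Y2) * (y1 - y3)) / den
    c5 = ((Y1 - Y2) * (x1 - x3) - (Y1 - Y3) * (x1 - x2)) / den
    c6 = Y1 - c4 * x1 - c5 * y1
    return c1, c2, c3, c4, c5, c6
```

As a quick sanity check, the identity correspondence (src equal to dst) yields (1, 0, 0, 0, 1, 0).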
As the last operations concerning the geometry of the model, two wire frames representing the eyes (sclera and iris) are positioned behind the eyelids, so as to allow their closing and to leave sufficient room for a displacement simulating the movements of the eyes (step 107). Standard teeth, which do not interfere with the movements of the mouth, are then added to the model (step 108).
The sequence shown in Figures 6A-6C represents the evolution of model M (here represented in the wire frame mode, to better highlight the variations) with reference to the front appearance of the basic model (Figure 6A), after the affine transforms (Figure 6B) and after completion with eyes and teeth (Figure 6C).
At this point the application of the texture to the model is performed (step 109), by associating to each vertex a bi-dimensional co-ordinate that binds it to a specific point of image I, according to a process known as "texture binding". The data relating to the texture binding are computed by simply exploiting the projection parameters α and λ defined at the start of the calibration described at the beginning of this description.
Teeth have a standard texture, defined in advance.
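A sketch of the texture binding, assuming the same pinhole projection used for the calibration and a normalisation of the projected co-ordinates to the [0, 1] range of a W x H image (the normalisation convention is an assumption of this example):

```python
def texture_bind(vertices, lam: float, X0: float, Y0: float,
                 width: int, height: int):
    """Associate to each vertex a 2-D co-ordinate on image I, obtained by
    projecting it with the calibration parameters and normalising."""
    uv = []
    for x, y, z in vertices:
        u = (X0 + lam * x / z) / width
        v = (Y0 + lam * y / z) / height
        uv.append((u, v))
    return uv
```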
In the case in which the model is created starting from several images, a further step is performed, concerning the generation of the texture. Such a step, however, is not specifically represented in the flow chart of Figure 9. As a matter of fact, the image containing the model texture is created by joining the information associated to the various points of sight.
Preferably, in order to better exploit the resolution of the image designed to contain the texture, the shape of the texture of all the triangles of the model is transformed into a right triangle of a constant size. The triangles so obtained are then coupled two by two in order to obtain a rectangular shape. The rectangles are then placed into the image according to a matrix arrangement, so as to cover its surface.
The size of the rectangles is a function of the number of triangles of the model and of the size of the image that stores the texture of the model.

Figure 10 shows an example of an image containing the texture of the various triangles. Each rectangle (the polygons shown are not squares, being formed by N x (N+1) pixels) contains the texture of two triangles. At the beginning the texture of the individual triangles has a generic triangle shape, which is transformed into a right triangle by means of an affine transform and a bi-linear filtering.
Figure 11 illustrates a detail of the previous Figure 10, showing the actual area of the texture used by the two triangles inside the rectangle (areas defined by lines 300). For each rectangle of size N x (N+1), the effective area is N x N pixels.
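A sketch of the resulting layout: pairs of right triangles are stored as rectangles of N x (N+1) pixels tiled in a matrix over the texture image; the warp of each triangle to its right-triangle shape (an affine transform followed by a bi-linear filtering) is omitted here, and the row-by-row packing order and rectangle orientation are assumptions of this example:

```python
def rectangle_positions(num_triangles: int, N: int, image_width: int):
    """Top-left pixel of the rectangle hosting each pair of triangles;
    rectangles N+1 pixels wide and N pixels tall are laid out row by row."""
    per_row = image_width // (N + 1)
    positions = []
    for pair in range((num_triangles + 1) // 2):  # two triangles per rectangle
        row, col = divmod(pair, per_row)
        positions.append((col * (N + 1), row * N))
    return positions
```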
It is worth noting that this process for texture generation is not specific to models of the human face, but can be applied in all cases of creation of a 3-D model starting from several images.
The model obtained in this way may then be represented by using different common graphic formats (among which, in addition to the MPEG-4 standard previously cited, the VRML 2.0 and OpenInventor standards). All the models can be animated so as to reproduce lip movements and countenances. In the case in which several images of the person, taken from different points of sight, are available, it is possible to apply the described method to the different images so as to enhance the look of the model. The resulting model is obviously oriented according to the orientation of the image.
It is evident that, while keeping unchanged the invention principles set forth herein, the details of implementation and the embodiments can be varied considerably with regard to what has been described and illustrated, without departing from the scope of this invention, as defined in the following claims.

Claims (4)

1. A method of creating 3D facial models (M) starting from face images (I), which comprises the steps of:
- providing at least one face image (I);
- providing a 3-D facial model (M) having a vertex structure and comprising a number of surfaces chosen within the group formed by:
a face surface (V); right eye and left eye (OD, OS) surface; upper teeth and lower teeth (DS, DI) surface;
- choosing respective sets of homologous points among the vertices of the structure of said model (M) and on said at least one face image (I);
- modifying the vertex structure of said model (M) so as to make the respective sets of homologous points coincide, the modification of the vertex structure of said model (M) including at least one of the operations chosen within the group formed by:
- said face image (I) having a width, and said model (M) comprising a projection having a width, making the width of the projection of the model (M) coincide with the width of said face image (I),
- vertically registering the projection of the model (M) with said face image (I),
- performing a global, non-linear transformation of the model (M) in a vertical direction in order to make the position of at least one characteristic feature of the model (M) coincide with a homologous characteristic feature of said face image (I).
2. The method according to claim 1, which further comprises the operation of applying a texture to said modified model.
3. The method according to claim 2, wherein the operation of applying a texture includes the operations of:
- providing a plurality of said face images (I) corresponding to different points of sight of said face,
- creating the texture to be applied to said model (M) by generating, for each of said face images, a respective texture information in the form of right triangles of constant size,
- coupling two by two the triangles relating to the texture information derived from a plurality of images so as to obtain, as a result of the coupling, respective rectangles, and
- applying said texture to said modified model in the form of a matrix of said rectangles.
4. The method according to claim 1, wherein said modification of the vertex structure of the model (M) is carried out in the form of a geometric operation performed by positioning said face image (I) and said model (M) in opposite and mirroring positions with respect to an origin (O) of a three-dimensional Cartesian system (X, Y, Z) which includes the operations of computing at least one distance parameter chosen within the group including:
- distance α between said origin (O) and a centre point of said model (M), and
- distance λ between said origin (O) and a plane of said face image (I),
and of applying a texture to said modified model (M) through a process of texture binding performed on the basis of at least one of said distance parameters.
CA002284348A 1998-10-02 1999-10-01 A method of creating 3 d facial models starting from face images Expired - Lifetime CA2284348C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ITTO98A000828
IT1998TO000828A IT1315446B1 (en) 1998-10-02 1998-10-02 PROCEDURE FOR THE CREATION OF THREE-DIMENSIONAL FACIAL MODELS STARTING FROM FACE IMAGES.

Publications (2)

Publication Number Publication Date
CA2284348A1 CA2284348A1 (en) 2000-04-02
CA2284348C true CA2284348C (en) 2006-06-06

Family

ID=11417076

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002284348A Expired - Lifetime CA2284348C (en) 1998-10-02 1999-10-01 A method of creating 3 d facial models starting from face images

Country Status (8)

Country Link
US (1) US6532011B1 (en)
EP (2) EP1424655B1 (en)
JP (1) JP3288353B2 (en)
AT (2) ATE286286T1 (en)
CA (1) CA2284348C (en)
DE (2) DE69922898T2 (en)
ES (2) ES2366243T3 (en)
IT (1) IT1315446B1 (en)

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6807290B2 (en) * 2000-03-09 2004-10-19 Microsoft Corporation Rapid computer modeling of faces for animation
DE10018143C5 (en) * 2000-04-12 2012-09-06 Oerlikon Trading Ag, Trübbach DLC layer system and method and apparatus for producing such a layer system
TW517210B (en) * 2000-08-31 2003-01-11 Bextech Inc A method for generating speaking expression variation without distortion in 2D picture using polygon computation
US20080040227A1 (en) * 2000-11-03 2008-02-14 At&T Corp. System and method of marketing using a multi-media communication system
US6976082B1 (en) 2000-11-03 2005-12-13 At&T Corp. System and method for receiving multi-media messages
US7203648B1 (en) 2000-11-03 2007-04-10 At&T Corp. Method for sending multi-media messages with customized audio
US7035803B1 (en) 2000-11-03 2006-04-25 At&T Corp. Method for sending multi-media messages using customizable background images
US7091976B1 (en) * 2000-11-03 2006-08-15 At&T Corp. System and method of customizing animated entities for use in a multi-media communication application
US6963839B1 (en) 2000-11-03 2005-11-08 At&T Corp. System and method of controlling sound in a multi-media communication application
US6990452B1 (en) 2000-11-03 2006-01-24 At&T Corp. Method for sending multi-media messages using emoticons
US7020305B2 (en) * 2000-12-06 2006-03-28 Microsoft Corporation System and method providing improved head motion estimations for animation
JP4419320B2 (en) * 2000-12-25 2010-02-24 コニカミノルタホールディングス株式会社 3D shape data generator
KR100422471B1 (en) * 2001-02-08 2004-03-11 비쥬텍쓰리디(주) Apparatus and method for creation personal photo avatar
US20020154174A1 (en) * 2001-04-23 2002-10-24 Redlich Arthur Norman Method and system for providing a service in a photorealistic, 3-D environment
US9400921B2 (en) * 2001-05-09 2016-07-26 Intel Corporation Method and system using a data-driven model for monocular face tracking
SE519929C2 (en) * 2001-07-26 2003-04-29 Ericsson Telefon Ab L M Procedure, system and terminal for changing or updating during ongoing calls eg. avatars on other users' terminals in a mobile telecommunications system
GB2382289B (en) 2001-09-28 2005-07-06 Canon Kk Method and apparatus for generating models of individuals
US20030069732A1 (en) * 2001-10-09 2003-04-10 Eastman Kodak Company Method for creating a personalized animated storyteller for audibilizing content
US7671861B1 (en) * 2001-11-02 2010-03-02 At&T Intellectual Property Ii, L.P. Apparatus and method of customizing animated entities for use in a multi-media communication application
GB2389289B (en) * 2002-04-30 2005-09-14 Canon Kk Method and apparatus for generating models of individuals
CN1313979C (en) * 2002-05-03 2007-05-02 三星电子株式会社 Apparatus and method for generating 3-D cartoon
US7174033B2 (en) 2002-05-22 2007-02-06 A4Vision Methods and systems for detecting and recognizing an object based on 3D image data
US7257236B2 (en) 2002-05-22 2007-08-14 A4Vision Methods and systems for detecting and recognizing objects in a controlled wide area
JP4357155B2 (en) * 2002-05-28 2009-11-04 株式会社セガ Animation image generation program
ITTO20020724A1 (en) * 2002-08-14 2004-02-15 Telecom Italia Lab Spa PROCEDURE AND SYSTEM FOR THE TRANSMISSION OF MESSAGES TO
EP1431810A1 (en) * 2002-12-16 2004-06-23 Agfa-Gevaert AG Method for automatic determination of colour correction data for the reproduction or digital images
US20040152512A1 (en) * 2003-02-05 2004-08-05 Collodi David J. Video game with customizable character appearance
US20040157527A1 (en) * 2003-02-10 2004-08-12 Omar Ruupak Nanyamka Novelty articles for famous persons and method for making same
EP1599828A1 (en) 2003-03-06 2005-11-30 Animetrics, Inc. Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
US7643671B2 (en) * 2003-03-24 2010-01-05 Animetrics Inc. Facial recognition system and method
US7711155B1 (en) * 2003-04-14 2010-05-04 Videomining Corporation Method and system for enhancing three dimensional face modeling using demographic classification
US7421097B2 (en) * 2003-05-27 2008-09-02 Honeywell International Inc. Face identification verification using 3 dimensional modeling
US7388580B2 (en) * 2004-05-07 2008-06-17 Valve Corporation Generating eyes for a character in a virtual environment
US8139068B2 (en) * 2005-07-29 2012-03-20 Autodesk, Inc. Three-dimensional animation of soft tissue of characters using controls associated with a surface mesh
US20070080967A1 (en) * 2005-10-11 2007-04-12 Animetrics Inc. Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
JP4760349B2 (en) * 2005-12-07 2011-08-31 ソニー株式会社 Image processing apparatus, image processing method, and program
US7567251B2 (en) * 2006-01-10 2009-07-28 Sony Corporation Techniques for creating facial animation using a face mesh
US8047915B2 (en) 2006-01-11 2011-11-01 Lyle Corporate Development, Inc. Character for computer game and method
US8059917B2 (en) * 2007-04-30 2011-11-15 Texas Instruments Incorporated 3-D modeling
US20080298643A1 (en) * 2007-05-30 2008-12-04 Lawther Joel S Composite person model from image collection
KR100940862B1 (en) * 2007-12-17 2010-02-09 한국전자통신연구원 Head motion tracking method for 3d facial model animation from a video stream
US8131063B2 (en) 2008-07-16 2012-03-06 Seiko Epson Corporation Model-based object image processing
US8029140B2 (en) 2008-09-18 2011-10-04 Disney Enterprises, Inc. Device to produce a floating image
US8042948B2 (en) * 2008-09-18 2011-10-25 Disney Enterprises, Inc. Apparatus that produces a three-dimensional image
US8260038B2 (en) 2009-02-25 2012-09-04 Seiko Epson Corporation Subdivision weighting for robust object model fitting
US8260039B2 (en) 2009-02-25 2012-09-04 Seiko Epson Corporation Object model fitting using manifold constraints
US8204301B2 (en) 2009-02-25 2012-06-19 Seiko Epson Corporation Iterative data reweighting for balanced model learning
US8208717B2 (en) 2009-02-25 2012-06-26 Seiko Epson Corporation Combining subcomponent models for object image modeling
KR101640458B1 (en) * 2009-06-25 2016-07-18 삼성전자주식회사 Display device and Computer-Readable Recording Medium
US20110025689A1 (en) * 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation
ES2464341T3 (en) 2009-12-15 2014-06-02 Deutsche Telekom Ag Procedure and device to highlight selected objects in picture and video messages
US8884982B2 (en) 2009-12-15 2014-11-11 Deutsche Telekom Ag Method and apparatus for identifying speakers and emphasizing selected objects in picture and video messages
PL2337327T3 (en) 2009-12-15 2014-04-30 Deutsche Telekom Ag Method and device for highlighting selected objects in image and video messages
US20120120071A1 (en) * 2010-07-16 2012-05-17 Sony Ericsson Mobile Communications Ab Shading graphical objects based on face images
CN102129706A (en) * 2011-03-10 2011-07-20 西北工业大学 Virtual human eye emotion expression simulation method
CN102902355B (en) * 2012-08-31 2015-12-02 中国科学院自动化研究所 The space interaction method of mobile device
GB2510201B (en) * 2013-01-29 2017-05-03 Toshiba Res Europe Ltd A computer generated head
US10708545B2 (en) 2018-01-17 2020-07-07 Duelight Llc System, method, and computer program for transmitting face models based on face data points
KR101783453B1 (en) * 2015-10-05 2017-09-29 (주)감성과학연구센터 Method and Apparatus for extracting information of facial movement based on Action Unit
KR20170081544A (en) * 2016-01-04 2017-07-12 한국전자통신연구원 Apparatus and method for restoring experience items
US11430169B2 (en) 2018-03-15 2022-08-30 Magic Leap, Inc. Animating virtual avatar facial movements
US10607065B2 (en) * 2018-05-03 2020-03-31 Adobe Inc. Generation of parameterized avatars
CN108961369B (en) * 2018-07-11 2023-03-17 厦门黑镜科技有限公司 Method and device for generating 3D animation
US10817365B2 (en) 2018-11-09 2020-10-27 Adobe Inc. Anomaly detection for incremental application deployments

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994023390A1 (en) * 1993-03-29 1994-10-13 Matsushita Electric Industrial Co., Ltd. Apparatus for identifying person
US6031539A (en) * 1997-03-10 2000-02-29 Digital Equipment Corporation Facial image method and apparatus for semi-automatically mapping a face on to a wireframe topology
US6154222A (en) * 1997-03-27 2000-11-28 At&T Corp Method for defining animation parameters for an animation definition interface

Also Published As

Publication number Publication date
ES2237010T3 (en) 2005-07-16
JP2000113217A (en) 2000-04-21
ATE286286T1 (en) 2005-01-15
DE69922898T2 (en) 2005-12-15
EP0991023A3 (en) 2001-02-21
ES2366243T3 (en) 2011-10-18
CA2284348A1 (en) 2000-04-02
DE69922898D1 (en) 2005-02-03
EP0991023B1 (en) 2004-12-29
EP0991023A2 (en) 2000-04-05
JP3288353B2 (en) 2002-06-04
EP1424655A2 (en) 2004-06-02
ITTO980828A1 (en) 2000-04-02
EP1424655A3 (en) 2006-08-30
EP1424655B1 (en) 2011-04-06
IT1315446B1 (en) 2003-02-11
US6532011B1 (en) 2003-03-11
ATE504896T1 (en) 2011-04-15
DE69943344D1 (en) 2011-05-19

Similar Documents

Publication Publication Date Title
CA2284348C (en) A method of creating 3 d facial models starting from face images
JP4865093B2 (en) Method and system for animating facial features and method and system for facial expression transformation
Williams Performance-driven facial animation
JP4932951B2 (en) Facial image processing method and system
Alexander et al. Creating a photoreal digital actor: The digital emily project
US20060023923A1 (en) Method and system for a three dimensional facial recognition system
WO2002013144A1 (en) 3d facial modeling system and modeling method
JPH1011609A (en) Device and method for generating animation character
Tarini et al. Texturing faces
KR100317138B1 (en) Three-dimensional face synthesis method using facial texture image from several views
Jeong et al. Automatic generation of subdivision surface head models from point cloud data
CN112561784B (en) Image synthesis method, device, electronic equipment and storage medium
CN115861525A (en) Multi-view face reconstruction method based on parameterized model
JP2001222725A (en) Image processor
Karunaratne et al. A new efficient expression generation and automatic cloning method for multimedia actors
Nagashima et al. Three-dimensional face model reproduction method using multiview images
Frédéric Pighin Modeling and Animating Realistic Faces from Images
Erol Modeling and Animating Personalized Faces
CN117808943A (en) Three-dimensional cartoon face reconstruction method, device, equipment and storage medium
GB2353451A (en) Morphing of an object using the morphological behaviour of another object
Meriç Generating 3d Face Models from Photographs
KR20040067730A (en) Creation system and method of Real-3D Picture Avatar that uses range scanner

Legal Events

Date Code Title Description
EEER Examination request
MKEX Expiry

Effective date: 20191001