WO2007044815A2 - Generation of normalized 2d imagery and id systems via 2d to 3d lifting of multifeatured objects - Google Patents

Generation of normalized 2d imagery and id systems via 2d to 3d lifting of multifeatured objects

Info

Publication number
WO2007044815A2
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
fit
deformed
source
image
Prior art date
Application number
PCT/US2006/039737
Other languages
French (fr)
Other versions
WO2007044815A3 (en)
Inventor
Michael I. Miller
Original Assignee
Animetrics Inc.
Priority date
Filing date
Publication date
Application filed by Animetrics Inc. filed Critical Animetrics Inc.
Publication of WO2007044815A2 publication Critical patent/WO2007044815A2/en
Publication of WO2007044815A3 publication Critical patent/WO2007044815A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Definitions

  • the avatar model geometry from here on referred to as a CAD model (or by the symbol CAD) is represented by a mesh of points in 3D that are the vertices of the set of triangular polygons that approximate the surface of the avatar.
  • Each surface point x ∈ CAD has a normal direction N(x) ∈ ℝ³, x ∈ CAD.
  • Each vertex is given a color value, called a texture T(x) ∈ ℝ³, x ∈ CAD, and each triangular face is colored according to an average of the color values assigned to its vertices.
  • the color values are determined from a 2D texture map that may be derived using standard texture mapping procedures, which define a bijective correspondence (1-1 and onto) from the photograph used to create the reference avatar.
  • the avatar is associated with a coordinate system that is fixed to it, and is indexed by three angular degrees of freedom (pitch, roll, and yaw), and three translational degrees of freedom of the rigid body center in three-space.
  • To capture articulation of the avatar geometry, such as motion of the chin and eyes, certain subparts have their own local coordinates, which form part of the avatar description. For example, the chin can be described by cylindrical coordinates about an axis corresponding to the jaw. Texture values are represented by a color representation, such as RGB values.
  • the avatar vertices are connected to form polygonal (usually triangular) facets.
  • Generating a normalized image from a single or multiple target photographs requires a bijection or correspondence between the planar coordinates of the target imagery and the 3D avatar geometry.
  • the photometric and geometric information in the measured imagery can be lifted onto the 3D avatar geometry.
  • the 3D object is manipulated and normalized, and normalized output imagery is generated from the 3D object. Normalized output imagery may be provided via OpenGL or other conventional rendering engines, or other rendering devices. Geometric and photometric lifting and normalization are now described.
  • a set of photometric basis functions representing the entire lighting sphere for each image I^v(p) is computed in order to represent the lighting of each avatar corresponding to the photograph, using principal components relative to the particular geometric avatars.
  • the photometric variation is lifted onto the 3D avatar geometry by varying the photometric basis functions representing illumination variability to match optimally the photographic values between the known avatar and the photographs.
  • the luminance function L(x), x ∈ CAD, can be estimated in a closed-form least-squares solution for the photometric basis functions.
  • the color of the illuminating light can also be normalized by matching the RGB values in the textured representation of the avatar to reflect lighting spectrum variations, such as natural versus artificial light, and other physical characteristics of the lighting source.
  • neutralized, or normalized versions of the textured avatar can be generated by applying the inverse transformation specified by the geometric and lighting features to the best- fit models.
  • the system uses the normalized avatar to generate normalized photographic output in the projective plane corresponding to any desired geometric or lighting specification.
  • the desired normalized output usually corresponds to a head-on pose viewed under neutral, uniform lighting.
  • the textured lighting field T(x), x ∈ CAD, is written as a perturbation of the original reference T_ref(x), x ∈ CAD, by a luminance function L(x), x ∈ CAD, and color functions e^(l_R), e^(l_G), e^(l_B).
  • These luminance and color functions can in general be expanded in a basis which may be computed using principal components on the CAD model by varying all possible illuminations. It may sometimes be preferable to perform the calculation analytically based on any other complete orthonormal basis defined on surfaces, such as spherical harmonics, Laplace-Beltrami functions and other functions of the derivatives.
  • luminance variations cannot be additive, as the space of measured imagery is a positive function space.
  • the photometric field T(x) is modeled as a multiplicative group acting on the reference textured object T_ref, i.e. T(x) = L(x)·T_ref(x), where
  • L(·) represents the luminance function indexed over the CAD model resulting from interaction of the incident light with the normal directions of the 3D avatar surface.
  • the system then uses non-linear least-squares algorithms, such as gradient algorithms or Newton search, to generate the minimum mean-squared error (MMSE) estimator of the lighting field parameters, solving the minimization over the lighting parameters. (A small linear-algebra sketch of this estimation step appears after this section.)
  • the sparse feature points used for the geometric lifting are defined in correspondence between points on the avatar 3D geometry and the 2D projective imagery, concentrating on extracted features associated with points, curves, or subareas in the image plane.
  • the projective geometry mapping is defined as either positive or negative z projection along the z axis, with a rigid transformation of the form (O, b): x ↦ Ox + b around the object center.
  • ||·||_V is the Sobolev norm, with the vector field v satisfying the smoothness constraints associated with ||v||_V.
  • the norm can be associated with a differential operator L representing the smoothness enforced on the vector fields, such as the Laplacian and other forms of derivatives; alternatively, the Sobolev space can be taken to be a reproducing kernel Hilbert space with a smoothing kernel. All of these are acceptable methods. Adding the rigid motions gives a similar minimization problem.
  • Such large deformations can represent expressions, jaw motion as well as large deformation shape change, following U.S. Patent Application Serial No. 10/794,353.
  • the avatar may be deformed with small deformations only, representing the large deformation according to the linear approximation x ↦ x + u(x), x ∈ CAD.
  • Expressions and jaw motions can be added directly by writing the vector fields u in a basis representing the expressions as described in U.S. Patent Application Serial No. 10/794,353.
  • jaw motion corresponds to a flow of points in the jaw following a rotation around the fixed jaw axis, O(λ): x ↦ O(λ)x, where O(λ) rotates the jaw points around the jaw axis by λ.
  • the system uses a reflective symmetry constraint in both rigid motion and deformation estimation to gain extra power.
  • the CAD model coordinates are centered at the origin such that its plane of symmetry is aligned with the yz-plane; the reflection matrix is therefore R = diag(-1, 1, 1), and R: x ↦ Rx is the reflection of x about the plane of symmetry of the CAD model.
  • the model feature points are x_i = (x_i, y_i, z_i), i = 1, ..., N.
  • the system adds an identical set of constraints on the reflection of the original set of model points.
  • the symmetry requires that an observed feature in the projective plane matches both the corresponding point on the model under the rigid motion, (O, b): x_i ↦ Ox_i + b, and the reflection of its symmetric pair on the model, ORx_σ(i) + b.
  • the deformation φ applied to a point x_i should be the same as that produced by the reflection of the deformation of the symmetric pair, Rφ(x_σ(i)). This amounts to augmenting the optimization to include two constraints for each feature point instead of one. (A short sketch of this constraint doubling appears after this section.)
  • the rigid motion estimation reduces to the same structure as in U.S. Patent Application Serial Nos. 10/794,353 and 10/794,943 with 2N instead of N constraints and takes a similar form as the two view problem, as described therein.
  • the geometric transformations are constructed directly from the dense set of continuous pixels representing the object, in which case observed N feature points may not be delineated in the projective imagery or in the avatar template models.
  • the geometrically normalized avatar can be generated from the dense imagery directly.
  • the 3D avatar is at orientation and translation (O, b) under the Euclidean transformation x ↦ Ox + b, with associated texture field T(O, b).
  • the avatar at orientation and position (O, b) is referred to as the template T(O, b).
  • the given image I(p), p ∈ [0,1]², is treated as a noisy representation of the projection of the avatar template at the unknown position (O, b).
  • the problem is to estimate the rotation and translation (O, b) which minimize the mean-squared error between the observed image and the projected template.
  • the optimal rotation and translation may be computed using the techniques described above, by first performing the optimization for the rigid motion alone, and then performing the optimization for shape transformation.
  • the optimum expressions and rigid motions may be computed simultaneously by searching over their corresponding parameter spaces simultaneously.
  • the first step is to create a common coordinate system that accommodates the entire model geometry.
  • the common coordinates are in 3D, based directly on the avatar vertices.
  • the CAD model geometry could be selected by symmetry, unlabeled points, or dense imagery, or any of the above methods for geometric lifting.
  • the 3D avatar reference texture and lighting fields could be selected by symmetry, unlabeled points, or dense imagery, or any of the above methods for geometric lifting.
  • the problem of estimating the lighting fields and reference texture field becomes the MMSE of each according to
  • Image acquisition system 202 captures a 2D image 204 of the target head.
  • the system generates (206) best fitting avatar 208 by searching through a library of reference avatars, and by deforming the reference avatars to accommodate permanent or intrinsic features as well as temporary or non-intrinsic features of the target head.
  • Best-fitting generated avatar 208 is photometrically normalized (210) by applying "normal" lighting, which usually corresponds to uniform, white lighting.
  • T(x(p)) = L(x(p))·T_ref(x(p)).
  • best-fitting avatar 208 illuminated with normal lighting is projected into 2D to generate photometrically normalized 2D imagery 212.
  • I_norm(p) = (I_R(p), I_G(p), I_B(p)) - (ε_R(x(p)), ε_G(x(p)), ε_B(x(p))). (58)
  • the variations in the lighting across the face of a subject are gradual, resulting in large-scale variations.
  • the features of the target face cause small-scale, rapid changes in image brightness.
  • nonlinear filtering and symmetrization of the smoothly varying part of the texture field are applied. (An image-space sketch of this filtering appears after this section.)
  • the symmetry plane of the models is used for calculating the symmetric pairs of points in the texture fields. These values are averaged, thereby creating a single texture field. This average may only be preferentially applied to the smoothly varying components of the texture field (which exhibit lighting artifacts).
  • Fig. 5 illustrates a method of removing lighting variations.
  • Local luminance values L (506) are estimated (504) from the captured source image I (502). Each measured value of the image is divided (508) by the local luminance, providing a quantity that is less dependent on lighting variations and more dependent on the features of the source object.
  • Small spatial scale variations, deemed to stem from source features, are selected by high pass filter 510 and are left unchanged.
  • Large spatial scale variations, deemed to represent lighting variations, are selected by low pass filter 512, and are symmetrized (514) to remove lighting artifacts. The symmetrized smoothly varying component and the rapidly varying component are added together (516) to produce an estimate of the target texture field 518.
  • the local lighting field estimates can be subtracted from the captured source image values, rather than being divided into them.
  • Image acquisition system 202 captures 2D image 302 of the target head.
  • the system generates (206) best fitting avatar 304 by searching through a library of reference avatars, and by deforming the reference avatars to accommodate permanent or intrinsic features as well as temporary or non-intrinsic features of the target head.
  • Best-fitting avatar is geometrically normalized (306) by backing out deformations corresponding to non-intrinsic and non-permanent features of the target head.
  • Geometrically normalized 2D imagery 308 is generated by projecting the geometrically normalized avatar into an image plane corresponding to a normal pose, such as a face-on view.
  • the rigid motion also carries the entire texture field T(x), x ∈ CAD, of the original 3D avatar model according to the same transformation.
  • the rigid motion normalized avatar is now in neutral position, and can be used for 3D matching as well as to generate imagery in normalized pose position.
  • the inverse transformation is applied to every point on the 3D avatar, φ⁻¹: x ∈ CAD ↦ φ⁻¹(x), as well as to every normal, by rotating the normals by the Jacobian of the mapping at every point, N(x) ↦ Dφ⁻¹(x)·N(x), where Dφ is the Jacobian of the mapping.
  • the shape change also carries all of the surface normals as well as the associated texture field of the avatar
  • the shape normalized avatar is now in neutral position, and can be used for 3D matching as well as to generate imagery in normalized pose position.
  • the photometrically normalized imagery is now generated from the geometrically normalized avatar CAD model with transformed normals and texture field as described in the photometric normalization section above.
  • the inverse of the MMSE lighting field L in the multiplicative group is applied to the texture field. Combining with the geometric normalization gives
  • Image acquisition system 202 captures target image 402 and generates (206) best- fitting avatar 404 using the methods described above. Best-fitting avatar is geometrically normalized by backing out deformations corresponding to non-intrinsic and non-permanent features of the target head (406). The geometrically normalized avatar is lit with normal lighting (406), and projected into an image plane corresponding to a normal pose, such as a face-on view. The resulting image 408 is geometrically normalized with respect to shape (expressions and temporary surface alterations) and pose, as well as photometrically normalized with respect to lighting.
  • the first step is to run the feature-based procedure for generating the selected avatar CAD model that optimally represents the measured photographic imagery. This is accomplished by defining the set of (i) labeled features, (ii) the unlabeled features, (iii) 3D labeled features, (iv) 3D unlabeled features, or (v) 3D surface normals.
  • the avatar CAD model geometry is then constructed from any combination of these, using rigid motions, symmetry, expressions, and small or large deformation geometry transformations.
  • if given multiple sets of 2D or 3D measurements, the 3D avatar geometry can be constructed from the multiple sets of features.
  • T^norm(x) = L⁻¹(Ox + b)·T(Ox + b), x ∈ CAD^norm. (65)
  • T^norm(x) = L⁻¹(φ(x))·T(φ(x)), x ∈ CAD^norm. (66)
  • the small variation representation can be used as well.
  • the 3D avatar geometry has the correspondence p ∈ [0,1]² ↔ x(p) ∈ ℝ³ defined between it and the photometric information via the bijection defined by the rigid motions and shape transformation.
  • the imagery can be directly normalized in the image plane according to
  • I_norm(p) = (e^(-l_R)·I_R(p), e^(-l_G)·I_G(p), e^(-l_B)·I_B(p)). (68)
  • Identification systems attempt to identify a newly captured image with one of the images in a database of images of ID candidates, called the registered imagery.
  • the newly captured image, also called the probe, is captured with a pose and under lighting conditions that do not correspond to the standard pose and lighting conditions that characterize the images in the image database.
  • ID or matching can be performed by lifting the photometry and geometry into the 3D avatar coordinates as depicted in Fig. 4.
  • the 3D coordinate systems can be exploited directly.
  • CAD models can be generated using any combination of 2D labeled projective points, unlabeled projective points, labeled 3D points, unlabeled 3D points, unlabeled surface normals, as well as dense imagery in the projective plane.
  • for dense imagery measurements, the texture fields T^CAD generated using the bijections described in the previous sections are associated with the CAD models.
  • Removing symmetry involves removing the last 3 terms in the equations.
  • the 3D CAD models and correspondences between the textured imagery can be generated using any of the above geometric features in the image plane including 2D labeled projective points, unlabeled projective points, labeled 3D points, unlabeled 3D points, unlabeled surface normals, as well as dense imagery in the projective plane.
  • for dense imagery measurements, the texture fields T^CAD associated with the CAD models are generated using the bijections described in the previous sections. Performing ID via the texture fields amounts to lifting the measurements of the probes to the 3D avatar CAD models and computing the distance metrics between the probe measurements and the registered database of CAD models.
  • a fast version of the ID may be accomplished using the log-minimization:
  • ID can be performed by matching both the geometry and the texture features.
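
The following sketch illustrates the closed-form least-squares estimation of the lighting field referred to in the luminance discussion above. It assumes the observed image values have already been lifted into correspondence with avatar surface points and that the luminance is expanded in a small set of photometric basis functions; the function names and the single-channel treatment are illustrative assumptions, not the patent's exact estimator.

```python
import numpy as np

def estimate_lighting_coefficients(observed, t_ref, basis):
    """Least-squares fit of luminance basis coefficients c so that
    observed(x) ~ (sum_k c_k * basis_k(x)) * t_ref(x) at the sampled points x.

    observed : (N,) image values lifted onto N avatar surface points
    t_ref    : (N,) reference texture values at those points
    basis    : (K, N) photometric basis functions evaluated at those points
    """
    A = basis.T * t_ref[:, None]                      # N x K design matrix
    coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return coeffs

def luminance_field(coeffs, basis):
    """Reconstruct the estimated luminance L(x) on the sampled avatar points."""
    return coeffs @ basis
```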
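The next sketch illustrates the reflective-symmetry constraint described above, in which each observed feature constrains both a model point and the reflection of its symmetric partner, doubling the constraint count from N to 2N. It assumes the model is centered so that its symmetry plane is the yz-plane and that the symmetric pairing σ(i) is known; the helper name is hypothetical.

```python
import numpy as np

# Reflection about the yz-plane (the model's plane of symmetry).
R_reflect = np.diag([-1.0, 1.0, 1.0])

def symmetry_augmented_points(model_pts, sigma):
    """Given N model feature points (N x 3) and an index map sigma pairing each point
    with its mirror-image partner, return 2N model points: the originals followed by
    the reflections R @ x_sigma(i). Each observed image feature is then matched
    against both copies, giving 2N constraints instead of N."""
    reflected = model_pts[sigma] @ R_reflect.T
    return np.vstack([model_pts, reflected])
```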
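Finally, a rough image-space sketch of the Fig. 5 lighting-removal pipeline: estimate the local luminance, divide it out, split the result into smoothly varying and rapidly varying parts, symmetrize only the smooth part, and recombine. The Gaussian blurs, the left-right flip used for symmetrization, and the grayscale float input are simplifying assumptions; the patent operates on the texture field of the 3D model and its symmetry plane rather than on a flat image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_lighting(image, luminance_sigma=25.0, split_sigma=8.0):
    """Approximate removal of smooth lighting variation from a grayscale float image."""
    eps = 1e-6
    luminance = gaussian_filter(image, luminance_sigma)   # local lighting estimate (504/506)
    detrended = image / (luminance + eps)                 # divide out the lighting (508)
    low = gaussian_filter(detrended, split_sigma)         # large-scale, lighting-like part (512)
    high = detrended - low                                # small-scale feature part (510)
    low_symmetrized = 0.5 * (low + low[:, ::-1])          # average symmetric pairs (514)
    return low_symmetrized + high                         # recombined texture estimate (516/518)
```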

Abstract

Image acquisition system (202) captures target image (402) and generates (206) best-fitting avatar (404) using the methods described above. Best-fitting avatar is geometrically normalized by backing out deformations corresponding to non-intrinsic and non-permanent features of the target head (406). The geometrically normalized avatar is lit with normal lighting (406), and projected into an image plane corresponding to a normal pose, such as a face-on view. The resulting image (408) is geometrically normalized with respect to shape (expressions and temporary surface alterations) and pose, as well as photometrically normalized with respect to lighting.

Description

GENERATION OF NORMALIZED 2D IMAGERY AND ID SYSTEMS VIA 2D TO 3D LIFTING OF MULTIFEATURED OBJECTS
TECHNICAL FIELD
[0001] This invention relates to object modeling and identification systems, and more particularly to the determination of 3D geometry and lighting of an object from 2D input using 3D models of candidate objects.
BACKGROUND
[0002] Facial identification (ID) systems typically function by attempting to match a newly captured image with an image that is archived in an image database. If the match is close enough, the system determines that a successful identification has been made. The matching takes place entirely within two dimensions, with the ID system manipulating both the captured image and the database images in 2D.
[0003] Most facial image databases store pictures that were captured under controlled conditions in which the subject is captured in a standard pose and under standard lighting conditions. Typically, the standard pose is a head-on pose, and the standard lighting is neutral and uniform. When a newly captured image to be identified is obtained with a standard pose and under standard lighting conditions, it is normally possible to obtain a relatively close match between the image and a corresponding database image, if one is present in the database. However, such systems tend to become unreliable as the image to be identified is captured under pose and lighting conditions that deviate from the standard pose and lighting. This is to be expected, because both changes in pose and changes in lighting will have a major impact on a 2D image of a three-dimensional object, such as a face.
SUMMARY
[0004] Embodiments described herein employ a variety of methods to "normalize" captured facial imagery (both 2D and 3D) by means of 3D avatar representations so as to improve the performance of traditional ID systems that use a database of images captured under standard pose and lighting conditions. The techniques described can be viewed as providing a "front end" to a traditional ID system, in which an available image to be identified is preprocessed before being passed to the ID system for identification. The techniques can also be integrated within an ID system that uses 3D imagery, or a combination of 2D and 3D imagery.
[0005] The methods exploit the lifting of 2D photometric and geometric information to 3D coordinate system representations, referred to herein as avatars or model geometry. As used herein, the term lifting is taken to mean the estimation of 3D information about an object based on one or more available 2D projections (images) and/or 3D measurements. Photometric lifting is taken to mean the estimation of 3D lighting information based on the available 2D and/or 3D information, and geometric lifting is taken to mean the estimation of 3D geometrical (shape) information based on the available 2D and/or 3D information.
[0006] The construction of the 3D geometry from 2D photographs involves the use of a library of 3D avatars. The system calculates the closest matching avatar in the library of avatars. It may then alter 3D geometry, shaping it to more closely correspond to the measured geometry in the image. Photometric (lighting) information is then placed upon this 3D geometry in a manner that is consistent with the information in the image plane. In other words, the avatar is lit in such a way that a camera in the image plane would produce a photograph that approximates to the available 2D image.
[0007] When used as a preprocessor for a traditional 2D ID system, the 3D geometry can be normalized geometrically and photometrically so that the 3D geometry appears to be in a standard pose and lit with standard lighting. The resulting normalized image is then passed to the traditional ID system for identification. Since the traditional ID system is now attempting to match an image that has effectively been rotated and photometrically normalized to place it in correspondence with the standard images in the image database, the system should work effectively, and produce an accurate identification. This preprocessing serves to make traditional ID systems robust to variations in pose and lighting conditions. The described embodiment also works effectively with 3D matching systems, since it enables normalization of the state of the avatar model so that it can be directly and efficiently compared to standardized registered individuals in a 3D database.
[0008] In general, in one aspect, the invention features a method of estimating a 3D shape of a target head from at least one source 2D image of the head. The method involves searching a library of candidate 3D avatar models to locate a best-fit 3D avatar, for each 3D avatar model among the library of 3D avatar models computing a measure of fit between a 2D projection of that 3D avatar model and the at least one source 2D image, the measure of fit being based on at least one of (i) unlabeled feature points in the source 2D imagery, and (ii) additional feature points generated by imposing symmetry constraints, wherein the best-fit 3D avatar is the 3D avatar model among the library of 3D avatar models that yields a best measure of fit and wherein the estimate of the 3D shape of the target head is derived from the best-fit 3D avatar.
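A minimal sketch of the library search described in this aspect, assuming each candidate avatar has been reduced to a small set of 3D feature points and the source image supplies the corresponding 2D detections. The summed squared reprojection error, the orthographic projection, and the function names are illustrative assumptions, not the patent's exact measure of fit.

```python
import numpy as np

def project(points_3d, R, t):
    """Orthographic projection of 3D avatar feature points after the rigid motion
    x -> Rx + t (drops the z coordinate); a perspective camera could be substituted."""
    moved = points_3d @ R.T + t
    return moved[:, :2]

def measure_of_fit(avatar_pts, R, t, image_pts):
    """Summed squared distance between projected avatar feature points and the
    feature points detected in the source 2D image (smaller is better)."""
    return float(np.sum((project(avatar_pts, R, t) - image_pts) ** 2))

def best_fit_avatar(avatar_library, candidate_poses, image_pts):
    """Search every avatar in the library over a set of candidate rigid motions and
    return (error, name, R, t) for the projection that best matches the image features."""
    best = None
    for name, pts in avatar_library.items():
        for R, t in candidate_poses:
            err = measure_of_fit(pts, R, t, image_pts)
            if best is None or err < best[0]:
                best = (err, name, R, t)
    return best
```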
[0009] Other embodiments include one or more of the following features. A target image illumination is estimated by generating a set of notional lightings of the best-fit 3D avatar and searching among the notional lightings of the best-fit avatar to locate a best notional lighting that has a 2D projection that yields a best measure of fit to the target image. The notional lightings include a set of photometric basis functions and at least one of small and large variations from the basis functions. The best-fit 3D avatar is projected and compared to a gallery of facial images, and identified with a member of the gallery if the fit exceeds a certain value. The search among avatars also includes searching at least one of small and large deformations of members of the library of avatars. The estimation of 3D shape of a target head can be made from a single 2D image if the surface texture of the target head is known, or if symmetry constraints on the avatar and source image are imposed. The estimation of 3D shape of a target head can be made from two or more 2D images even if the surface texture of the target head is initially unknown.
[0010] In general, in another aspect, the invention features a method of generating a normalized 3D representation of a target head from at least one source 2D projection of the head. The method involves providing a library of candidate 3D avatar models, and searching among the candidate 3D avatar models and their deformations to locate a best-fit 3D avatar, the searching including, for each 3D avatar model among the library of 3D avatar models and each of its deformations, computing a measure of fit between a 2D projection of that deformed 3D avatar model and the at least one source 2D image, the deformations corresponding to permanent and non-permanent features of the target head, wherein the best-fit deformed 3D avatar is the deformed 3D avatar model that yields a best measure of fit; and generating a geometrically normalized 3D representation of the target head from the best-fit deformed 3D avatar by removing deformations corresponding to non-permanent features of the target head.
[0011] Other embodiments include one or more of the following features. The normalized 3D representation is projected into a plane corresponding to a normalized pose, such as a face-on view, to generate a geometrically normalized image. The normalized image is compared to members of a gallery of 2D facial images having a normal pose, and positively identified with a member of the gallery if a measure of fit between the normalized image and a gallery member exceeds a predetermined threshold. The best-fitting avatar can be lit with normalized (such as uniform and diffuse) lighting before being projected into a normal pose so as to generate a geometrically and photometrically normalized image.
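As a rough illustration of the thresholded gallery comparison just described, assuming the normalized image and the gallery members are already aligned grayscale arrays of equal size; the normalized-correlation score is one plausible fit measure, not necessarily the one a given ID system uses.

```python
import numpy as np

def fit_score(img_a, img_b):
    """Normalized correlation between two equally sized grayscale images."""
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-9)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-9)
    return float(np.mean(a * b))

def identify(normalized_image, gallery, threshold=0.8):
    """Return the gallery key whose image best matches the normalized image,
    but only if the fit exceeds the predetermined threshold; otherwise None."""
    scores = {key: fit_score(normalized_image, img) for key, img in gallery.items()}
    best_key = max(scores, key=scores.get)
    return best_key if scores[best_key] > threshold else None
```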
[0012] In general, in yet another aspect, the invention features a method of estimating the 3D shape of a target head from source 3D feature points. The method involves searching a library of avatars and their deformations to locate the deformed avatar having the best fit to the 3D feature points, and basing the estimate on the best-fit avatar.
[0013] Other embodiments include matching to avatar feature points and their reflections in an avatar plane of symmetry, using unlabeled source 3D feature points, and using source 3D normal feature points that specify a head surface normal direction as well as position. Comparing the best-fit deformed avatar with each gallery member, yields a positive identification of the 3D head with a member of a gallery of 3D reference representations of heads if a measure of fit exceeds a predetermined threshold.
[0014] In general, in still another aspect, the invention features a method of estimating a 3D shape of a target head from a comparison of a projection of a 3D avatar and dense imagery of at least one source 2D image of a head.
[0015] In general, in a further aspect, the invention features positively identifying at least one source image of a target head with a member of a database of candidate facial images. The method involves generating a 3D avatar corresponding to the source imagery and generating a 3D avatar corresponding to each member of the database of candidate facial images using the methods described above. The target head is positively identified with a member of the database of candidate facial images if a measure of fit between the source avatar corresponding to the source imagery and an avatar corresponding to a candidate facial image exceeds a predetermined threshold.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Fig. 1 is a flow diagram illustrating the principal steps involved in normalizing a source 2D facial image.
[0017] Fig. 2 illustrates photometric normalization of a source 2D facial image. [0018] Fig. 3 illustrates geometric normalization of a source 2D facial image.
[0019] Fig. 4 illustrates performing both photometric and geometric normalization of a source 2D facial image.
[0020] Fig. 5 illustrates removing lighting variations by spatial filtering and symmetrization of source facial imagery.
DETAILED DESCRIPTION
[0021] A traditional photographic ID system attempts to match one or more target images of the person to be identified with an image in an image library. Such systems perform the matching in 2D using image comparison methods that are well known in the art. If the target images are captured under controlled conditions, the system will normally identify a match, if one exists, with an image in its database because the system is comparing like with like, i.e., comparing two images that were captured under similar conditions. The conditions in question refer principally to the pose and shape of the subject and the photometric lighting. However, it is often not possible to capture target photographs under controlled conditions. For example, a target image might be captured by a security camera without the subject's knowledge, or it might be taken while the subject is fleeing the scene.
[0022] The described embodiment takes target 2D imagery captured under uncontrolled conditions in the projective plane and converts it into a 3D avatar geometry model representation. Using the terms employed herein, the system lifts the photometric and geometric information from 2D imagery or 3D measurements onto the 3D avatar geometry. It then uses the 3D avatar to generate geometrically and photometrically normalized representations that correspond to standard conditions under which the reference image database was captured. These standard conditions, also referred to as normal conditions, usually correspond to a head-on view of the face with a normal expression and neutral and uniform illumination. Once a target image is normalized, a traditional ID system can use it to perform a reliable identification. [0023] Since the described embodiment can normalize an image to match a traditional ID system's normal pose and lighting conditions exactly, the methods described herein also serve to increase the accuracy of a traditional ID system even when working with target images that were previously considered close enough to "normal" to be suitable for ID via such systems. For example, a traditional ID system might have a 70% chance of performing an accurate ID with a target image pose of 30° from head-on. However, if the target is preprocessed and normalized before being passed to the ID system, the chance of performing an accurate ID might increase to 90%.
[0024] The basic steps of the normalization process are illustrated in Fig. 1. The target image is captured (102) under unknown pose and lighting conditions. The following steps (104-110) are described in detail in U.S. Patent Application Serial Numbers 10/794,353 and 10/794,943, which are incorporated herein in their entirety.
[0025] The process starts with a process called jump detection, in which the system scans the target image to detect the presence of feature points whose existence in the image plane is substantially invariant across different faces under varying lighting conditions and under varying poses (104). Such features include one or more of the following: points, such as the extremity of the mouth; curves, such as an eyebrow; brightness order relationships; image gradients; edges; and subareas. For example, the existence in the image plane of the inside and outside of a nostril is substantially invariant under face, pose, and lighting variations. To determine the lifted geometry, the system only needs about 3-100 feature points. Each identified feature point corresponds to a labeled feature point in the avatar. Feature points are referred to as labeled when the correspondence is known, and unlabeled when the correspondence is unknown.
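As a concrete picture of the labeled/unlabeled distinction above: a labeled detection already names the avatar feature it corresponds to, while an unlabeled detection is just a 2D location whose correspondence must still be inferred. The feature names and coordinates below are hypothetical examples.

```python
# Labeled feature points: correspondence to avatar feature points is known.
labeled_points = {
    "left_mouth_corner": (212.0, 340.5),
    "right_mouth_corner": (298.4, 338.9),
    "left_nostril_outer": (238.1, 301.2),
}

# Unlabeled feature points: detected locations whose avatar correspondence is
# unknown and must be recovered during the fit.
unlabeled_points = [(401.3, 295.0), (180.2, 270.7)]
```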
[0026] Since the labeled feature points being detected are a sparse sampling of the image plane and relatively small in number, jump detection is very rapid, and can be performed in real time. This is especially useful when a moving image is being tracked. [0027] The system uses the detected feature points to determine the lifted geometry by searching a library of avatars to locate the avatar whose invariant features, when projected into 2D at all possible poses, yield the projection that most closely matches the invariant features identified in the target imagery (106). The 3D lifted avatar geometry is then refined via shape deformation to improve the feature correspondence (108). This 3D avatar representation may also be refined via unlabeled feature points, as well as dense imagery requiring diffusion or gradient matching along with the sparse landmark-based matching, and 3D labeled and unlabeled features.
[0028] In subsequent step 110, the deformed avatar is lit with the normal lighting parameters and projected into 2D from an angle that corresponds to the normal pose. The resulting "normalized" image is passed to the traditional ID system (112). Aspects of these steps that relate to the normalization process are described in detail below.
[0029] The described embodiment performs two kinds of normalization: geometric and photometric. Geometric normalizations include the normalization of pose, as referred to above. This corresponds to rigid body motions of the selected avatar. For example, a target image that was captured from 30° clockwise from head-on has its geometry and photometry lifted to the 3D avatar geometry, from which it is normalized to a head-on view by rotating the 3D avatar geometry by 30° anticlockwise before projecting it into the image plane.
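A sketch of the pose normalization example in the paragraph above: if the lifted geometry is estimated to sit 30° clockwise of head-on about the vertical (yaw) axis, the avatar vertices are rotated back by the inverse rotation before re-projection. The axis convention and the function names are assumptions for illustration.

```python
import numpy as np

def yaw_matrix(degrees):
    """Rotation about the vertical (y) axis by the given angle."""
    a = np.radians(degrees)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0,       1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def normalize_pose(vertices, estimated_yaw_deg):
    """Back out the estimated head pose: apply the inverse of the estimated rotation
    so the avatar geometry ends up in the head-on (normal) pose."""
    R_est = yaw_matrix(estimated_yaw_deg)
    return vertices @ np.linalg.inv(R_est).T

# e.g. a target captured 30 degrees clockwise of head-on is rotated 30 degrees back:
# vertices_normal = normalize_pose(avatar_vertices, estimated_yaw_deg=30.0)
```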
[0030] Geometric normalizations also include shape changes, such as facial expressions. For example, an elongated or open mouth corresponding to a smile or laugh can be normalized to a normal width, closed mouth. Such expressions are modeled by deforming the avatar so as to obtain an improved key feature match in the 2D target image (step 108). The system later "backs out" or "inverts" the deformations corresponding to the expressions so as to produce an image that has a "normal" expression. Another example of shape change corresponding to geometric normalization inverts the effects of aging. A target image of an older person can be normalized to the corresponding younger face.
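The expression "backing out" can be pictured as removing the part of the fitted deformation field that lies in an expression basis, keeping only the identity-specific shape change. The basis representation and the least-squares projection below are hypothetical placeholders for whatever expression model is actually used.

```python
import numpy as np

def remove_expression(deformation, expression_basis):
    """Project the per-vertex deformation field (N x 3) onto the expression basis
    (K x N x 3) and subtract that component, leaving the 'normal expression' shape."""
    u = deformation.reshape(-1)                                  # flatten to 3N vector
    B = expression_basis.reshape(len(expression_basis), -1).T   # 3N x K, columns = basis fields
    coeffs, *_ = np.linalg.lstsq(B, u, rcond=None)              # expression coefficients
    residual = u - B @ coeffs                                    # deformation minus expression part
    return residual.reshape(deformation.shape)
```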
[0031] Photometric normalization includes lighting normalizations and surface texture/color normalizations. Lighting normalization involves taking a target image captured under non-standard illumination and converting it to normal illumination. For example, a target image may be lit with a point source of red light. Photometric normalization converts the image into one that appears to be taken under neutral, uniform lighting. This is performed by illuminating the selected deformed avatar with the standard lighting before projecting it into 2D (110).
[0032] A second type of photometric normalization takes account of changes in the surface texture or color of the target image compared to the reference image. An avatar surface is described by a set of normals N(x), which are 3D vectors representing the orientations of the faces of the model, and a reference texture Tref(x), which is a data structure, such as a matrix having an RGB value for each polygon on the avatar. Photometric normalization can involve changing the values of Tref for some of the polygons that correspond to non-standard features in the target image. For example, a beard can change the color of a region of the face from white to black. In the idealized case, this would correspond to the RGB values changing from (255, 255, 255) for white to (0, 0, 0) for black. In this case, photometric normalization corresponds to restoring the face to a standard, usually with no facial hair.
[0033] As illustrated by step 108 in Fig. 1, the selected avatar is deformed prior to illumination and projection into 2D. Deformation denotes a variation in shape from the library avatar to a deformed avatar whose key features more closely correspond to the key features of the target image. Deformations may correspond to an overall head shape variation, or to a particular feature of a face, such as the size of the nose.
[0034] The normalization process distinguishes between small geometric or photometric changes performed on the library avatar and large changes. A small change is one in which the geometric change (be it a shape change or deformation) or photometric change (be it a lighting change or a surface texture/color change) is such that the mapping from the library avatar to the changed avatar is approximately linear. A geometric transformation moves the coordinates according to the general mapping x ∈ R³ ↦ φ(x) ∈ R³. For a small geometric transformation, the mapping approximates an additive linear change in coordinates, so that the original value x maps approximately under the linear relationship x ∈ R³ ↦ φ(x) ≈ x + u(x) ∈ R³. The lighting variation changes the avatar texture field values T(x) at each coordinate point x, and is generally of the multiplicative form Tref(x) ↦ L(x)·Tref(x) ∈ R³. For small lighting variation the change is also linearly approximated by Tref(x) ↦ L(x)·Tref(x) ≈ ε(x) + Tref(x) ∈ R³.
[0035] Examples of small geometric deformations include small variations in face shape that characterize a range of individuals of broadly similar features and the effects of aging. Examples of small photometric changes include small changes in lighting between the target image and the normal lighting, and small texture changes, such as variations in skin color, for example a suntan. Large deformations refer to changes in geometric or photometric data that are large enough so that the linear approximations used above for small deformations cannot be used.
[0036] Examples of large geometric deformations include large variation in face shapes, such as a large nose compared to a small nose, and pronounced facial expressions, such as a laugh or display of surprise. Examples of large photometric changes include major lighting changes such as extreme shadows, and change from indoor lighting to outdoor lighting.
[0037] The avatar model geometry, from here on referred to as a CAD model (or by the symbol CAD), is represented by a mesh of points in 3D that are the vertices of the set of triangular polygons approximating the surface of the avatar. Each surface point x ∈ CAD has a normal direction N(x) ∈ R³. Each vertex is given a color value, called a texture T(x) ∈ R³, x ∈ CAD, and each triangular face is colored according to an average of the color values assigned to its vertices. The color values are determined from a 2D texture map that may be derived using standard texture mapping procedures, which define a bijective correspondence (1-1 and onto) from the photograph used to create the reference avatar. The avatar is associated with a coordinate system that is fixed to it, and is indexed by three angular degrees of freedom (pitch, roll, and yaw), and three translational degrees of freedom of the rigid body center in three-space. To capture articulation of the avatar geometry, such as motion of the chin and eyes, certain subparts have their own local coordinates, which form part of the avatar description. For example, the chin can be described by cylindrical coordinates about an axis corresponding to the jaw. Texture values are represented by a color representation, such as RGB values. The avatar vertices are connected to form polygonal (usually triangular) facets.
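As a concrete illustration of the data structure just described, the following is a minimal sketch in Python; the dataclass layout, field names, and use of NumPy arrays are assumptions made for illustration and are not the representation prescribed by the specification.

# Minimal sketch of the avatar CAD-model data structure described above:
# vertices with normals, per-vertex RGB texture values, and triangular faces.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class AvatarModel:
    vertices: np.ndarray          # (N, 3) vertex positions x in R^3
    normals: np.ndarray           # (N, 3) unit normals N(x) at each vertex
    faces: np.ndarray             # (M, 3) vertex indices of triangular facets
    texture: np.ndarray           # (N, 3) RGB reference texture T_ref(x) per vertex
    pose: np.ndarray = field(default_factory=lambda: np.zeros(6))
    # pose = (pitch, roll, yaw, tx, ty, tz): three angular and three
    # translational degrees of freedom of the rigid-body center.

    def face_color(self, f: int) -> np.ndarray:
        # Each triangular face is colored by the average of its vertex colors.
        return self.texture[self.faces[f]].mean(axis=0)

# Usage: a single triangle with white vertices.
tri = AvatarModel(
    vertices=np.eye(3), normals=np.tile([0.0, 0.0, 1.0], (3, 1)),
    faces=np.array([[0, 1, 2]]), texture=np.full((3, 3), 255.0))
print(tri.face_color(0))   # -> [255. 255. 255.]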
[0038] Generating a normalized image from a single or multiple target photographs requires a bijection or correspondence between the planar coordinates of the target imagery and the 3D avatar geometry. As introduced above, once the correspondences are found, the photometric and geometric information in the measured imagery can be lifted onto the 3D avatar geometry. The 3D object is manipulated and normalized, and normalized output imagery is generated from the 3D object. Normalized output imagery may be provided via OpenGL or other conventional rendering engines, or other rendering devices. Geometric and photometric lifting and normalization are now described.
2D to 3D Photometric Lifting to 3D Avatar Geometries
Nonlinear Least-Square Photometric Lifting
[0039] For photometric lifting, it is assumed that the 3D model avatar geometry with surface vertices and normals is known, along with the avatar's shape and pose parameters and its reference texture Tref(x), x ∈ CAD. The lighting normalization involves the interaction of the known shape and normals on the surface of the CAD model. The photometric basis is defined relative to the midplane of the avatar geometry and the interaction of the normals indexed with the surface geometry and the luminance function representation. Generating a normalized image from a single or multiple target photographs requires a bijection or correspondence between the planar coordinates of the imagery I(p), p ∈ [0,1]² and the 3D avatar geometry, denoted p ∈ [0,1]² ↔ x(p) ∈ R³; for the correspondence between the multiple views I^v(p), v = 1, ..., V, the multiple correspondences become p ∈ [0,1]² ↔ x^v(p) ∈ R³. A set of photometric basis functions representing the entire lighting sphere for each I^v(p) is computed in order to represent the lighting of each avatar corresponding to the photograph, using principal components relative to the particular geometric avatars. The photometric variation is lifted onto the 3D avatar geometry by varying the photometric basis functions representing illumination variability to match optimally the photographic values between the known avatar and the photographs. By working in log-coordinates, the luminance function L(x), x ∈ CAD can be estimated in a closed-form least-squares solution for the photometric basis functions. The color of the illuminating light can also be normalized by matching the RGB values in the textured representation of the avatar to reflect lighting spectrum variations, such as natural versus artificial light, and other physical characteristics of the lighting source.
[0040] Once the lighting state has been fit to the avatar geometry, neutralized or normalized versions of the textured avatar can be generated by applying the inverse transformation specified by the geometric and lighting features to the best-fit models. The system then uses the normalized avatar to generate normalized photographic output in the projective plane corresponding to any desired geometric or lighting specification. As mentioned above, the desired normalized output usually corresponds to a head-on pose viewed under neutral, uniform lighting.
[0041] Photometric normalization is now described via the mathematical equations which describe the optimum solution. Given a reference avatar texture field, the textured lighting field T(x), x ∈ CAD is written as a perturbation of the original reference Tref(x), x ∈ CAD by the luminance L(x), x ∈ CAD and the color functions e^{t_R}, e^{t_G}, e^{t_B}. These luminance and color functions can in general be expanded in a basis which may be computed using principal components on the CAD model by varying all possible illuminations. It may sometimes be preferable to perform the calculation analytically based on any other complete orthonormal basis defined on surfaces, such as spherical harmonics, Laplace-Beltrami functions and other functions of the derivatives. In general, luminance variations cannot be additive, as the space of measured imagery is a positive function space. For representing large variation lighting, the photometric field T(x) is modeled as a multiplicative group acting on the reference textured object Tref according to

T^c(x) = e^{Σ_i l_i^c φ_i(x)} Tref^c(x), c = R, G, B, x ∈ CAD,   (1)

where the φ_i are orthogonal basis functions indexed over the face, and the coefficient vectors l_1 = (l_1^R, l_1^G, l_1^B), l_2 = (l_2^R, l_2^G, l_2^B), ... represent the unknown basis function coefficients, representing a different variation for each RGB channel within the multiplicative representation.
[0042] Here L(·) represents the luminance function indexed over the CAD model resulting from interaction of the incident light with the normal directions of the 3D avatar surface. Once the correspondence is defined between the observed photograph and the avatar representation, p ∈ [0,1]² ↔ x(p) ∈ R³, there exists a correspondence between the photograph and the RGB texture values on the avatar. In this section it is assumed that the avatar texture Tref(x) is known. In general, the overall color spectrum of the texture field may demonstrate variations as well. In this case, solving for the separate channel random field variations of each RGB expansion coefficient requires solution of the minimum mean-squared error (MMSE) equations

min_{l^c} Σ_{p∈[0,1]²} (I^c(p) − e^{Σ_i l_i^c φ_i(x(p))} Tref^c(x(p)))², c = R, G, B.   (2)
The system then uses non-linear least-squares algorithms, such as gradient algorithms or Newton search, to generate the minimum mean-squared error (MMSE) estimator of the lighting field parameters. It does this by solving the minimization over the luminance fields in the span of the bases L^c = e^{Σ_i l_i^c φ_i}, c = R, G, B. Other norms besides the 2-norm for positive functions may be used, including the Kullback-Leibler distance, L1 distance, or others. Correlation between the RGB components can be introduced via a covariance matrix between the lighting and color components.
[0043] For a lower-dimensional representation in which there is a single RGB tinting function (rather than one for each expansion coefficient), the model becomes simply

T(x) = (e^{t_R + Σ_i l_i φ_i(x)} Tref^R(x), e^{t_G + Σ_i l_i φ_i(x)} Tref^G(x), e^{t_B + Σ_i l_i φ_i(x)} Tref^B(x)).   (3)
The MMSE corresponds to

min_{t_R,t_G,t_B,l} Σ_{c=R,G,B} Σ_{p∈[0,1]²} (I^c(p) − e^{t_c + Σ_i l_i φ_i(x(p))} Tref^c(x(p)))².   (4)
Given the reference Tref(x), non-linear least-squares algorithms, such as gradient algorithms and Newton search, can be used for minimizing the least-squares equation.
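Under the stated assumptions, the fit can be sketched with a generic nonlinear least-squares routine standing in for the gradient or Newton search named above; the basis functions, reference texture, and observed values below are synthetic placeholders, so the snippet illustrates the structure of the minimization in (3)-(4) rather than the specification's implementation.

# Sketch: fit the single-tint multiplicative lighting model
#   I^c(p) ~ exp(t_c + sum_i l_i * phi_i(x(p))) * T_ref^c(x(p)),  c = R, G, B
# by nonlinear least squares (cf. equations (3)-(4)).  The basis phi and the
# "observed" image values are synthetic, purely for illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
P, d = 500, 4                               # pixel/vertex correspondences, basis size
phi = rng.normal(size=(P, d))               # phi[j, i] = phi_i(x(p_j)), assumed known
T_ref = rng.uniform(50, 200, size=(P, 3))   # reference texture at x(p_j)

t_true = np.array([0.2, -0.1, 0.05])        # per-channel tints
l_true = np.array([0.3, -0.2, 0.1, 0.05])   # shared lighting coefficients
I_obs = np.exp(t_true[None, :] + (phi @ l_true)[:, None]) * T_ref

def residuals(params):
    t, l = params[:3], params[3:]
    model = np.exp(t[None, :] + (phi @ l)[:, None]) * T_ref
    return (model - I_obs).ravel()

fit = least_squares(residuals, x0=np.zeros(3 + d))
# Should recover t_true and l_true up to numerical precision (noise-free data).
print(np.round(fit.x[:3], 3), np.round(fit.x[3:], 3))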
Fast Photometric Lifting to 3D Geometries via the Log Metric
[0044] Since the space of lighting variations is very extensive, multiplicative photometric normalization is computationally intensive. A log transformation creates a robust, computationally effective, linear least-squares formulation. Converting the multiplicative model to log-coordinates gives

log T^c(x) = Σ_i l_i^c φ_i(x) + log Tref^c(x), c = R, G, B,

so that the lighting coefficients enter linearly in the difference log I^c(p) − log Tref^c(x(p)). The linear least-squares equations (LSEs) for the lighting functions become, for j = 1, ..., d and c = R, G, B,

Σ_{p∈[0,1]²} (log I^c(p) − log Tref^c(x(p))) φ_j(x(p)) = Σ_{i=1}^d l_i^c Σ_{p∈[0,1]²} φ_i(x(p)) φ_j(x(p)).   (8)
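The linearity of the log formulation can be illustrated with a brief sketch: after taking logarithms, each color channel's lighting coefficients solve an ordinary linear least-squares system, which is the closed-form solution referred to above. The basis and texture values below are synthetic assumptions.

# Sketch: log-domain lighting estimation (cf. equation (8)).  In log coordinates
# the model is  log I^c(p) = sum_i l_i^c phi_i(x(p)) + log T_ref^c(x(p)),
# so each channel's coefficients l^c solve an ordinary linear least-squares problem.
import numpy as np

rng = np.random.default_rng(1)
P, d = 500, 4
phi = rng.normal(size=(P, d))                 # phi_i(x(p)), assumed known
T_ref = rng.uniform(50, 200, size=(P, 3))     # reference texture values
l_true = rng.normal(scale=0.2, size=(d, 3))   # one coefficient vector per channel
I_obs = np.exp(phi @ l_true) * T_ref          # synthetic observed image values

# Right-hand side: log I - log T_ref; solve the normal equations channelwise.
rhs = np.log(I_obs) - np.log(T_ref)           # shape (P, 3)
l_hat, *_ = np.linalg.lstsq(phi, rhs, rcond=None)
print(np.allclose(l_hat, l_true))             # -> True (noise-free example)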
Small Variation Photometric Lifting to 3D Geometries
[0045] As discussed above, small variations in the texture field (corresponding, for example, to small color changes of the reference avatar) are approximately linear, Tref(x) ↦ ε(x) + Tref(x), with the additive field modeled in the basis ε(x) = Σ_i (ε_i^R, ε_i^G, ε_i^B) φ_i(x). For small photometric variations, the MMSE satisfies

min_ε Σ_{c=R,G,B} Σ_{p∈[0,1]²} (I^c(p) − Tref^c(x(p)) − Σ_i ε_i^c φ_i(x(p)))².   (9)

The LLSEs for the images directly (rather than their logs) give, for c = R, G, B,

Σ_{p∈[0,1]²} (I^c(p) − Tref^c(x(p))) φ_j(x(p)) = Σ_{i=1}^d ε_i^c Σ_{p∈[0,1]²} φ_i(x(p)) φ_j(x(p)), j = 1, ..., d.   (10)
Adding the color representation via the tinting function, ε(x) = Σ_i (t_R + ε_i, t_G + ε_i, t_B + ε_i) φ_i(x), gives the color tints according to the corresponding linear least-squares equations.
2D to 3D Geometric Lifting Using Labeled Feature Points
Begin by assuming that only the sparse feature points are used for the geometric lifting, and that they are defined in correspondence between points on the avatar 3D geometry and the 2D projective imagery, concentrating on extracted features associated with points, curves, or subareas in the image plane. Given the starting imagery I(p), p ∈ [0,1]², the set of features x_j = (x_j, y_j, z_j), j = 1, ..., N is defined on the candidate avatar, in correspondence with a similar set of features in the projective imagery p_j = (p_{j1}, p_{j2}) ∈ [0,1]², j = 1, ..., N. The projective geometry mapping is defined as projecting along either positive or negative z, with rigid transformation of the form O, b : x ↦ Ox + b around the object center.
The search for the best-fitting avatar pose (corresponding to the optimal rotation and translation for the selected avatar) uses the invariant features as follows. Given the projective points in the image plane p_j, j = 1, 2, ..., N, and a rigid transformation of the form O, b : x ↦ Ox + b, define Q_j = id − p_j p_j^t / ‖p_j‖², where id is the 3×3 identity matrix. As described in U.S. Patent Application Serial No. 10/794,353, the cost function (a measure of the aggregate distance between the projected invariant points of the avatar and the corresponding points in the measured target image) is evaluated by exhaustively calculating the lifted depths z_j, j = 1, ..., N. Using MMSE estimation, choosing the minimum cost function gives the lifted z-depths corresponding to

min_{O,b} Σ_{j=1}^N ‖O x_j + b − z_j p_j‖² = min_{O,b} Σ_{j=1}^N (O x_j + b)^t Q_j (O x_j + b).   (22)
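A small sketch of the pose search implied by (22) follows: for any fixed rotation O the optimal translation b has a closed form, so candidate rotations can be scanned and the minimizer kept. The Euler-angle grid, the synthetic noise-free features, and the use of homogeneous projective points (u, v, 1) are illustrative assumptions rather than the procedure of the incorporated applications.

# Sketch of the pose search behind equation (22): for each candidate rotation O
# the optimal translation is b = -(sum_j Q_j)^-1 sum_j Q_j O x_j, where
# Q_j = I - p_j p_j^t / |p_j|^2.  The rotation grid and data are synthetic.
import numpy as np
from itertools import product

def rot(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(8, 3))                      # labeled avatar features x_j
O_true, b_true = rot(0.3, -0.2, 0.1), np.array([0.1, -0.05, 4.0])
Y = X @ O_true.T + b_true                                # posed features in camera frame
P = np.c_[Y[:, 0] / Y[:, 2], Y[:, 1] / Y[:, 2], np.ones(len(Y))]   # projective points (u, v, 1)
Q = np.stack([np.eye(3) - np.outer(p, p) / (p @ p) for p in P])

def cost_and_b(O):
    QOx = np.einsum('jab,jb->ja', Q, X @ O.T)            # Q_j (O x_j) for each j
    b = -np.linalg.solve(Q.sum(axis=0), QOx.sum(axis=0)) # closed-form optimal translation
    r = X @ O.T + b
    return float(np.einsum('ja,jab,jb->', r, Q, r)), b   # cost of eq. (22), and b

angles = np.linspace(-0.4, 0.4, 9)                       # grid includes the true angles
best = min(
    (cost_and_b(rot(y, p_, r_)) + (np.array([y, p_, r_]),)
     for y, p_, r_ in product(angles, repeat=3)),
    key=lambda t: t[0])
print(round(best[0], 6), np.round(best[1], 3), best[2])  # ~0 cost, b ~ b_true, true angles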
[0053] Choosing a best-fitting predefined avatar involves the database of avatars, CAD^a, a = 1, 2, ..., denoting the total set of avatar models, each with labeled features x_j^a, j = 1, ..., N. Selecting the optimum CAD model minimizes the overall cost function over all models, choosing the optimally fit CAD model:

CAD = argmin_{CAD^a} min_{O,b} Σ_{j=1}^N (O x_j^a + b)^t Q_j (O x_j^a + b).
[0054] The selected avatar is then deformed to improve the feature correspondence. The deformation φ : x ↦ φ(x) is generated as a large-deformation flow φ = φ_1, φ_t(x) = x + ∫_0^t v_s(φ_s(x)) ds, x ∈ CAD, chosen to minimize the smoothness cost ∫_0^1 ‖v_t‖_V² dt on the velocity fields together with the feature-matching cost, where ‖v_t‖_V is the Sobolev norm, with v satisfying smoothness constraints associated with ‖v_t‖_V. The norm can be associated with a differential operator L representing the smoothness enforced on the vector fields, such as the Laplacian and other forms of derivatives, so that ‖v_t‖_V² = ‖L v_t‖²; alternatively, smoothness is enforced by requiring the Sobolev space to be a reproducing kernel Hilbert space with a smoothing kernel. All of these are acceptable methods. Adding the rigid motions gives a similar minimization problem:

min_{O,b,v} ∫_0^1 ‖v_t‖_V² dt + Σ_{j=1}^N (O φ_1(x_j) + b)^t Q_j (O φ_1(x_j) + b).
[0055] Such large deformations can represent expressions and jaw motion as well as large-deformation shape change, following U.S. Patent Application Serial No. 10/794,353. In another embodiment, the avatar may be deformed with small deformations only, representing the large deformation according to the linear approximation x ↦ x + u(x), x ∈ CAD:

min_{O,b,u} Σ_{j=1}^N (O(x_j + u(x_j)) + b)^t Q_j (O(x_j + u(x_j)) + b).
[0056] Expressions and jaw motions can be added directly by writing the vector fields u in a basis representing the expressions, as described in U.S. Patent Application Serial No. 10/794,353. In order to track such changes, the motions may be parametrically defined via an expression basis E_1, E_2, ... so that u(x) = Σ_i e_i E_i(x). These are defined as functions that describe how a smile, an eyebrow lift, and other expressions cause the invariant features to move on the face. The coefficients e_1, e_2, ..., describing the magnitude of each expression, become the unknowns to be estimated. For example, jaw motion corresponds to a flow of points in the jaw following a rotation around the fixed jaw axis, O(γ) : x ↦ O(γ)x, where O(γ) rotates the jaw points around the jaw axis γ.
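Since the displacement u(x) is linear in the coefficients e_i, they can be estimated by ordinary least squares once feature displacements are observed; the two synthetic displacement modes below merely stand in for an expression basis and are not the basis of the referenced application.

# Sketch: estimating expression magnitudes e_i in u(x) = sum_i e_i * E_i(x)
# from observed feature displacements, via linear least squares.
import numpy as np

rng = np.random.default_rng(3)
N = 12                                   # number of feature points on the face
E1 = rng.normal(size=(N, 3))             # synthetic "smile" displacement mode
E2 = rng.normal(size=(N, 3))             # synthetic "eyebrow lift" displacement mode
e_true = np.array([0.7, -0.3])

u_obs = e_true[0] * E1 + e_true[1] * E2          # observed displacements u(x_j)
A = np.c_[E1.ravel(), E2.ravel()]                # stack the modes as columns
e_hat, *_ = np.linalg.lstsq(A, u_obs.ravel(), rcond=None)
print(np.round(e_hat, 3))                        # -> [ 0.7 -0.3]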
2D to 3D Geometric Lifting Using Symmetry
[0057] For symmetric objects such as the face, the system uses a reflective symmetry constraint in both rigid motion and deformation estimation to gain extra power. Again the CAD model coordinates are centered at the origin such that its plane of symmetry is aligned with the yz-plane. Therefore, the reflection matrix is simply R = diag(−1, 1, 1), and R : x ↦ Rx is the reflection of x about the plane of symmetry on the CAD model. Given the features x_i = (x_i, y_i, z_i), i = 1, ..., N, the system defines σ : {1, ..., N} → {1, ..., N} to be the permutation such that x_i and x_{σ(i)} are symmetric pairs for all i = 1, ..., N. In order to enforce symmetry the system adds an identical set of constraints on the reflection of the original set of model points. In the case of rigid motion estimation, the symmetry requires that an observed feature in the projective plane matches both the corresponding point on the model (under the rigid motion), (O, b) : x ↦ O x_i + b, as well as the reflection of the symmetric pair on the model, O R x_{σ(i)} + b. Similarly, the deformation φ applied to a point x_i should be the same as that produced by the reflection of the deformation of the symmetric pair, R φ(x_{σ(i)}). This amounts to augmenting the optimization to include two constraints for each feature point instead of one. The rigid motion estimation reduces to the same structure as in U.S. Patent Application Serial Nos. 10/794,353 and 10/794,943, with 2N instead of N constraints, and takes a similar form as the two-view problem described therein.
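The effect of the symmetry constraint on the optimization can be sketched as a simple augmentation of the correspondence lists, each observation being paired with both the model point and the reflection of its symmetric partner; the example arrays and the assumption of a perfectly symmetric model are illustrative only.

# Sketch: augmenting the feature correspondences with the reflective symmetry
# constraint.  Each observed point p_j is required to match O x_j + b and also
# O R x_sigma(j) + b, giving 2N constraints instead of N.
import numpy as np

R = np.diag([-1.0, 1.0, 1.0])          # reflection about the yz-plane of symmetry

def symmetrize(X, P, sigma):
    """X: (N,3) model features, P: (N,3) projective points, sigma: permutation
    pairing each feature with its mirror partner.  Returns augmented lists."""
    X_aug = np.vstack([X, X[sigma] @ R])   # append reflections of the symmetric pairs
    P_aug = np.vstack([P, P])              # each observation is used twice
    return X_aug, P_aug

# Example: four features where 0<->1 and 2<->3 are mirror pairs.
X = np.array([[-0.3, 0.1, 0.9], [0.3, 0.1, 0.9], [-0.2, -0.2, 1.0], [0.2, -0.2, 1.0]])
P = np.c_[X[:, 0] / X[:, 2], X[:, 1] / X[:, 2], np.ones(4)]
sigma = np.array([1, 0, 3, 2])
X_aug, P_aug = symmetrize(X, P, sigma)
print(X_aug.shape, P_aug.shape)        # -> (8, 3) (8, 3)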
2D to 3D Geometric Lifting via Dense Imagery (Without Correspondence)
[0065] In another embodiment, as described in U.S. Patent Application Ser. No. 10/794,353, the geometric transformations are constructed directly from the dense set of continuous pixels representing the object, in which case the N observed feature points may not be delineated in the projective imagery or in the avatar template models. In such cases, the geometrically normalized avatar can be generated from the dense imagery directly. Assume the 3D avatar is at orientation and translation (O, b) under the Euclidean transformation x ↦ Ox + b, with associated texture field T(O,b). Define the avatar at orientation and position (O, b) the template T(O,b). Then model the given image I(p), p ∈ [0,1]² as a noisy representation of the projection of the avatar template at the unknown position (O, b). The problem is to estimate the rotation and translation O, b which minimize the expression

min_{O,b} Σ_{p∈[0,1]²} ‖I(p) − T(O,b)(x(p))‖²,   (48)

where x(p) indexes through the 3D avatar template. In the situation where targets are tracked in a series of images, and in some instances when a single image only is available, knowledge of the position of the center of the target will often be available. This knowledge is incorporated as described above, by adding the prior information via the position information

min_{O,b} Σ_{p∈[0,1]²} ‖I(p) − T(O,b)(x(p))‖² + (b − μ)^t Σ⁻¹ (b − μ).   (49)
[0066] This minimization procedure is accomplished via diffusion matching as described in U.S. Patent Application Ser. No. 10/794,353. Further including annotated features gives rise to jump-diffusion dynamics. Shape changes and expressions corresponding to large deformations, with φ : x ↦ φ(x) satisfying φ = φ_1, φ_t(x) = x + ∫_0^t v_s(φ_s(x)) ds, x ∈ CAD, are generated by adding the deformation to the dense-matching cost:

min_{v,O,b} ∫_0^1 ‖v_t‖_V² dt + Σ_{p∈[0,1]²} ‖I(p) − T(O,b)(φ(x(p)))‖².   (50)
As above in the small-deformation setting, for small deformations φ : x ↦ φ(x) ≈ x + u(x). To represent expressions directly, the transformation can be written in the basis E_1, E_2, ... as above, with the coefficients e_1, e_2, ... describing the magnitude of each expression's contribution among the variables to be estimated.
[0067] The optimal rotation and translation may be computed using the techniques described above, by first performing the optimization for the rigid motion alone, and then performing the optimization for shape transformation. Alternatively, the optimum expressions and rigid motions may be computed simultaneously by searching over their corresponding parameter spaces simultaneously.
[0068] For dense matching, the symmetry constraint is applied in a similar fashion, by applying the reflection and the permutation to each element of the avatar surface, so that each point and its symmetric pair are constrained to match the observed imagery.
Photometric, Texture and Geometry Lifting
[0069] When the geometry, photometry, and texture are all unknown, the lifting must be performed simultaneously. In this case, the images I^v, v = 1, 2, ..., V are available, and the unknowns are the CAD models with their associated bijections p ∈ [0,1]² ↔ x^v(p) ∈ R³, v = 1, ..., V defined by rigid motions O^v, b^v, v = 1, 2, ..., along with the unknown reference texture Tref and the unknown lighting fields L^v determining the color representations for each instance under the multiplicative model T^v = L^v Tref. When using such multiple views, the first step is to create a common coordinate system that accommodates the entire model geometry. The common coordinates are in 3D, based directly on the avatar vertices. To perform the photometric normalization and the texture field estimation for the multiple photographs there are multiple bijective correspondences p ∈ [0,1]² ↔ x^v(p) ∈ R³, v = 1, ..., V between the CAD models and the planar images I^v, v = 1, ..., V. The next step is to estimate the CAD model geometry, either from labeled points in 2D or 3D, via unlabeled points, or via dense matching. This follows the above sections for choosing and shaping the geometry of the CAD model to be consistent with the geometric information in the observed imagery, and determining the bijections between the observed imagery and the fixed CAD model. For one instance, if given the projective points in the image plane p_j, j = 1, 2, ..., N, define Q_j = id − p_j p_j^t / ‖p_j‖², where id is the 3×3 identity matrix, and evaluate the cost function (a measure of the aggregate distance between the projected invariant points of the avatar and the corresponding points in the measured target image) using MMSE estimation; then a best-fitting predefined avatar can be chosen from the database of avatars, CAD^a, a = 1, 2, ..., each with labeled features x_j^a, j = 1, ..., N. Selecting the optimum CAD model minimizes the overall cost function:

CAD = argmin_{CAD^a} min_{O,b} Σ_{j=1}^N (O x_j^a + b)^t Q_j (O x_j^a + b).
[0070] Alternatively, the CAD model geometry could be selected by symmetry, unlabeled points, dense imagery, or any of the above methods for geometric lifting. Given the CAD model, the 3D avatar reference texture and lighting fields in T^v = L^v Tref are obtained from the observed images by lifting the observed imagery color values to the corresponding vertices on the 3D avatar via the correspondences x^v(p) ∈ R³, v = 1, ..., V defined by the geometric information. The problem of estimating the lighting fields and reference texture field becomes the MMSE of each according to

min_{Tref, L^v} Σ_{v=1}^V Σ_{p∈[0,1]²} ‖I^v(p) − L^v(x^v(p)) Tref(x^v(p))‖²,   (52)

with the summation over the V separate available views, each corresponding to a different target image. Alternatively, the color tinting model or the log-normalization equations as defined above are used.
Normalization of Photometry and Geometry
Photometric Normalization of 3D Avatar Texture
[0071] The basic steps of photometric normalization are illustrated in Fig. 2. Image acquisition system 202 captures a 2D image 204 of the target head. As described above, the system generates (206) best-fitting avatar 208 by searching through a library of reference avatars, and by deforming the reference avatars to accommodate permanent or intrinsic features as well as temporary or non-intrinsic features of the target head. Best-fitting generated avatar 208 is photometrically normalized (210) by applying "normal" lighting, which usually corresponds to uniform, white lighting.
[0072] For the fixed avatar geometry CAD model, the lighting normalization process exploits the basic model that the texture field of the avatar CAD model has the multiplicative relationship T(x(p)) = L(x(p)) Tref(x(p)). For generating the photometrically normalized avatar CAD model with texture imagery T(x), x ∈ CAD, the inverse of the MMSE lighting field L in the multiplicative group is applied to the texture field:

L : T(x) ↦ T^norm(x) = L⁻¹(x) T(x), x ∈ CAD.   (53)
For the vector version of the lighting field this corresponds to componentwise division of each component of the lighting field (with color) into each component of the vector texture field.
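A minimal sketch of this componentwise division follows; the lighting field and lifted texture below are synthetic stand-ins for the MMSE estimates described above.

# Sketch of equation (53): photometric normalization by componentwise division
# of the estimated lighting field out of the lifted texture field.
import numpy as np

rng = np.random.default_rng(4)
T_ref = rng.uniform(50, 200, size=(1000, 3))   # true reference texture (per vertex, RGB)
L = rng.uniform(0.5, 1.5, size=(1000, 3))      # fitted MMSE lighting field L(x)
T_lifted = L * T_ref                           # lifted texture T(x) = L(x) T_ref(x)

T_norm = T_lifted / L                          # T_norm(x) = L^-1(x) T(x)
print(np.allclose(T_norm, T_ref))              # -> True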
Photometric Normalization of 2D Imagery
[0073] Referring again to Fig. 2, best-fitting avatar 208 illuminated with normal lighting is projected into 2D to generate photometrically normalized 2D imagery 212.
[0074] For the fixed avatar geometry CAD model, when generating normalized 2D projective imagery, the lighting normalization process exploits the basic model that the image I is in bijective correspondence with the avatar under the multiplicative relationship I(p) ↔ T(x(p)) = L(x(p)) Tref(x(p)); for multiple images, I^v(p) ↔ T^v(x(p)) = L^v(x(p)) Tref(x(p)). Thus normalized imagery can be generated by dividing out the lighting field. For the lighting model in which each color component has its own lighting function,

T(x) = (e^{Σ_i l_i^R φ_i(x)} Tref^R(x), e^{Σ_i l_i^G φ_i(x)} Tref^G(x), e^{Σ_i l_i^B φ_i(x)} Tref^B(x)),   (54)

the normalized imagery is generated according to the direct relationship

I^norm(p) = (e^{−Σ_i l_i^R φ_i(x(p))} I^R(p), e^{−Σ_i l_i^G φ_i(x(p))} I^G(p), e^{−Σ_i l_i^B φ_i(x(p))} I^B(p)).   (55)
In a second embodiment, in which there is a common lighting field with separate color components,

T(x) = (e^{t_R + Σ_i l_i φ_i(x)} Tref^R(x), e^{t_G + Σ_i l_i φ_i(x)} Tref^G(x), e^{t_B + Σ_i l_i φ_i(x)} Tref^B(x)),   (56)

the normalization takes the form

I^norm(p) = (1 / L(x(p))) (e^{−t_R} I^R(p), e^{−t_G} I^G(p), e^{−t_B} I^B(p)).   (57)
In a third embodiment, we view the change as small and additive, which implies that the general model becomes T(x) = ε(x) + Tref(x). The normalization then takes the form

I^norm(p) = (I^R(p), I^G(p), I^B(p)) − (ε^R(x(p)), ε^G(x(p)), ε^B(x(p))).   (58)

In such an embodiment the small variation may share a single common basis across the three color components, as in the tinting model above.
Nonlinear Spatial Filtering of Lighting Variations and Symmetrization
[0075] In general, the variations in the lighting across the face of a subject are gradual, resulting in large-scale variations. By contrast, the features of the target face cause small-scale, rapid changes in image brightness. In another embodiment, nonlinear filtering and symmetrization of the smoothly varying part of the texture field are applied. For this, the symmetry plane of the model is used for calculating the symmetric pairs of points in the texture fields. These values are averaged, thereby creating a single texture field. This averaging may be applied preferentially to only the smoothly varying components of the texture field (which exhibit lighting artifacts).
[0076] Fig. 5 illustrates a method of removing lighting variations. Local luminance values L (506) are estimated (504) from the captured source image I (502). Each measured value of the image is divided (508) by the local luminance, providing a quantity that is less dependent on lighting variations and more dependent on the features of the source object. Small spatial scale variations, deemed to stem from source features, are selected by high pass filter 510 and are left unchanged. Large spatial scale variations, deemed to represent lighting variations, are selected by low pass filter 512, and are symmetrized (514) to remove lighting artifacts. The symmetrized smoothly varying component and the rapidly varying component are added together (516) to produce an estimate of the target texture field 518.
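A hedged sketch of this processing chain for a single-channel image is given below; the Gaussian filters, their widths, and the left-right flip used to approximate symmetrization about the face midline are illustrative choices, not the filters of the specification.

# Sketch of the Fig. 5 processing chain: divide by an estimated local luminance,
# split into smooth (lighting-like) and rapid (feature-like) components, mirror-
# average only the smooth part, and recombine.  Filter choices are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_lighting_variation(image, luminance_sigma=15.0, split_sigma=5.0):
    # 504-508: estimate local luminance and divide it out of the image.
    luminance = gaussian_filter(image, luminance_sigma) + 1e-6
    ratio = image / luminance

    # 510-512: the low-pass output is the large-scale (lighting) variation;
    # the remainder is the small-scale, feature-driven variation.
    smooth = gaussian_filter(ratio, split_sigma)
    detail = ratio - smooth

    # 514: symmetrize the smooth component about the face midline
    # (approximated here by a left-right flip of the image).
    smooth_sym = 0.5 * (smooth + smooth[:, ::-1])

    # 516: recombine to form the estimated target texture field (518).
    return smooth_sym + detail

img = np.random.default_rng(5).uniform(0.2, 1.0, size=(64, 64))
print(remove_lighting_variation(img).shape)        # -> (64, 64)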
[0077] For the small variations in lighting, the local lighting field estimates can be subtracted from the captured source image values, rather than being divided into them.
Geometrically Normalized 3D Geometry
[0078] The basic steps of geometric normalization are illustrated in Fig. 3. Image acquisition system 202 captures 2D image 302 of the target head. As described above, the system generates (206) best fitting avatar 304 by searching through a library of reference avatars, and by deforming the reference avatars to accommodate permanent or intrinsic features as well as temporary or non-intrinsic features of the target head. Best-fitting avatar is geometrically normalized (306) by backing out deformations corresponding to non-intrinsic and non-permanent features of the target head. Geometrically normalized 2D imagery 308 is generated by projecting the geometrically normalized avatar into an image plane corresponding to a normal pose, such as a face-on view.
[0079] Given the fixed and known avatar geometry, as well as the texture field T(x) generated by lifting sparse corresponding feature points, unlabeled feature points, surface normals, or dense imagery, the system constructs normalized versions of the geometry by applying the inverse transformation.
[0080] From the rigid motion estimate O, b, the inverse transformation is applied to every point on the 3D avatar, (O, b)⁻¹ : x ∈ CAD ↦ O'(x − b), as well as to every normal by rotating the normals, N(x) ↦ O'N(x). This new collection of vertex points and normals forms the new geometrically normalized avatar model

CAD^norm = {(y, N(y)) : y = O'(x − b), N(y) = O'N(x), x ∈ CAD}.   (59)
The rigid motion also carries the texture field T(x), x ∈ CAD of the original 3D avatar model according to

T^norm(x) = T(Ox + b), x ∈ CAD^norm.   (60)
The rigid motion normalized avatar is now in neutral position, and can be used for 3D matching as well as to generate imagery in normalized pose position.
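Equations (59)-(60) can be sketched directly on vertex and normal arrays; the rotation, translation, and per-vertex texture carry-over below are synthetic and illustrative.

# Sketch of equations (59)-(60): geometric normalization by applying the inverse
# rigid motion to every vertex and normal.  Per-vertex texture values carry over
# unchanged, since T_norm(y) = T(O y + b) and y = O'(x - b) maps back to x.
import numpy as np

def normalize_rigid(vertices, normals, texture, O, b):
    verts_norm = (vertices - b) @ O        # y = O'(x - b), O' applied via right-multiplication
    normals_norm = normals @ O             # N(y) = O' N(x)
    return verts_norm, normals_norm, texture.copy()

# Example with a synthetic pose: normalization recovers the neutral geometry.
rng = np.random.default_rng(6)
V, Nrm, T = rng.normal(size=(5, 3)), rng.normal(size=(5, 3)), rng.uniform(size=(5, 3))
theta = 0.4
O = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
b = np.array([0.2, -0.1, 3.0])
Vn, Nn, Tn = normalize_rigid(V @ O.T + b, Nrm @ O.T, T, O, b)
print(np.allclose(Vn, V), np.allclose(Nn, Nrm))    # -> True True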
From the shape change φ, the inverse transformation is applied to every point on the 3D avatar, φ⁻¹ : x ∈ CAD ↦ φ⁻¹(x), and every normal is transformed via Dφ, the Jacobian of the mapping, at each point. The shape change also carries all of the surface normals as well as the associated texture field of the avatar:

T^norm(x) = T(φ(x)), x ∈ CAD^norm.   (61)
The shape-normalized avatar is now in neutral position, and can be used for 3D matching as well as to generate imagery in normalized pose position.
For small deformations φ(x) ≈ x + u(x), the approximate inverse transformation is applied to every point on the 3D avatar, φ⁻¹ : x ∈ CAD ↦ x − u(x). The normals are likewise transformed via the Jacobian Du of the linearized part of the mapping, and the texture is transformed as above, T^norm(x) = T(x + u(x)), x ∈ CAD^norm.
[0081] The photometrically normalized imagery is now generated from the geometrically normalized avatar CAD model with transformed normals and texture field as described in the photometric normalization section above. For normalizing the texture field photometrically, the inverse of the MMSE lighting field L in the multiplicative group is applied to the texture field. Combining with the geometric normalization gives

T^norm(x) = L⁻¹(Ox + b) T(Ox + b), x ∈ CAD^norm.   (62)

Adding the shape change gives the photometrically normalized texture field

T^norm(x) = L⁻¹(φ(x)) T(φ(x)), x ∈ CAD^norm.   (63)
Geometry Unknown, Photometric Normalization
[0082] In many settings the geometric normalization must be performed simultaneously with the photometric normalization. This is illustrated in Fig. 4. Image acquisition system 202 captures target image 402 and generates (206) best- fitting avatar 404 using the methods described above. Best-fitting avatar is geometrically normalized by backing out deformations corresponding to non-intrinsic and non-permanent features of the target head (406). The geometrically normalized avatar is lit with normal lighting (406), and projected into an image plane corresponding to a normal pose, such as a face-on view. The resulting image 408 is geometrically normalized with respect to shape (expressions and temporary surface alterations) and pose, as well as photometrically normalized with respect to lighting.
[0083] In this situation, the first step is to run the feature-based procedure for generating the selected avatar CAD model that optimally represents the measured photographic imagery. This is accomplished by defining the set of (i) labeled features, (ii) the unlabeled features, (iii) 3D labeled features, (iv) 3D unlabeled features, or (v) 3D surface normals. The avatar CAD model geometry is then constructed from any combination of these, using rigid motions, symmetry, expressions, and small or large deformation geometry transformation. [0084] If given multiple sets of 2D or 3D measurements, the 3D avatar geometry can be constructed from the multiple sets of features.
The rigid motion also carries the texture field T(x), x ∈ CAD of the original 3D avatar model according to T^norm(x) = T(Ox + b), x ∈ CAD^norm, or alternatively T^norm(x) = T(φ(x)), x ∈ CAD^norm, where the normalized CAD model is

CAD^norm = {(y, N(y)) : y = O'(x − b), N(y) = O'N(x), x ∈ CAD}.   (64)

The texture field of the avatar can be normalized by the lighting field as above according to

T^norm(x) = L⁻¹(Ox + b) T(Ox + b), x ∈ CAD^norm.   (65)

Adding the shape change gives the photometrically normalized texture field

T^norm(x) = L⁻¹(φ(x)) T(φ(x)), x ∈ CAD^norm.   (66)
The small variation representation can be used as well.
[0085] Once the geometry is known from the associated photographs, the 3D avatar geometry has the correspondence p ∈ [0,1]² ↔ x(p) ∈ R³ defined between it and the photometric information via the bijection defined by the rigid motions and shape transformation. For generating the normalized imagery in the projective plane from the original imagery, the imagery can be directly normalized in the image plane according to

I^norm(p) = (e^{−Σ_i l_i^R φ_i(x(p))} I^R(p), e^{−Σ_i l_i^G φ_i(x(p))} I^G(p), e^{−Σ_i l_i^B φ_i(x(p))} I^B(p)).   (67)

Similarly, the direct color model can be used as well:

I^norm(p) = (1 / L(x(p))) (e^{−t_R} I^R(p), e^{−t_G} I^G(p), e^{−t_B} I^B(p)).   (68)
ID Lifting
[0086] Identification systems attempt to identify a newly captured image with one of the images in a database of images of ID candidates, called the registered imagery. Typically the newly captured image, also called the probe, is captured with a pose and under lighting conditions that do not correspond to the standard pose and lighting conditions that characterize the images in the image database.
ID Lifting Using Labeled Feature Points in the Projective Plane
[0087] Given registered imagery and probes, ID or matching can be performed by lifting the photometry and geometry into the 3D avatar coordinates as depicted in Fig. 4. Given bijections between the registered image I_reg and the 3D avatar model geometry, and between the probe image I_probe and its 3D avatar model geometry, the 3D coordinate systems can be exploited directly. For such a system, the registered imagery is first converted to 3D CAD models, call them CAD^a, a = 1, ..., A, with textured model correspondences I_reg(p) ↔ T^a(x(p)), x ∈ CAD^a. These CAD models can be generated using any combination of 2D labeled projective points, unlabeled projective points, labeled 3D points, unlabeled 3D points, unlabeled surface normals, as well as dense imagery in the projective plane. In the case of dense imagery measurements, the texture fields T_{CAD^a} generated using the bijections described in the previous sections are associated with the CAD models.
[0088] Performing ID amounts to lifting the measurements of the probes to the 3D avatar CAD models and computing the distance metrics between the probe measurements and the registered database of CAD models.
For the geometric features, the ID is the registered avatar model CAD^a that minimizes the combined cost of the deformation smoothness term ∫_0^1 ‖v_t‖_K² dt, the terms matching the deformed avatar features φ(f_j^a) to the lifted probe features, and the corresponding terms formed from the reflected features Rφ(·) that enforce the symmetry constraint. Removing symmetry involves removing the last three terms in the equations.
ID Lifting Using Textured Features
[0094] Given registered imagery and probes, ID can be performed by lifting the photometry and geometry into the 3D avatar coordinates. Assume that bijections between the registered imagery and the 3D avatar model geometry, and between the probe imagery and its 3D avatar model geometry, are known. For such a system, the registered imagery is first converted to 3D CAD models CAD^a, a = 1, ..., A, with textured model correspondences I_{CAD^a}(p) ↔ T^a(x(p)), x ∈ CAD^a. The 3D CAD models and correspondences between the textured imagery can be generated using any of the above geometric features in the image plane, including 2D labeled projective points, unlabeled projective points, labeled 3D points, unlabeled 3D points, unlabeled surface normals, as well as dense imagery in the projective plane. In the case of dense imagery measurements, associated with the CAD models are the texture fields T_{CAD^a} generated using the bijections described in the previous sections. Performing ID via the texture fields amounts to lifting the measurements of the probes to the 3D avatar CAD models and computing the distance metrics between the probe measurements and the registered database of CAD models. One or more probe images I^v_probe(p), p ∈ [0,1]², v = 1, ..., V in the image plane are given. Also given are the geometries for each of the CAD models CAD^a, a = 1, ..., A, together with associated texture fields T_{CAD^a}, a = 1, ..., A. Determining the ID from the given images corresponds to choosing the CAD model with the texture field that minimizes the distance to the probe:
ID = argmin_a min_{L^v} Σ_{v=1}^V Σ_{p∈[0,1]²} ‖I^v_probe(p) − L^v(x^v(p)) T_{CAD^a}(x^v(p))‖²,   (82)
with the summation over the V separate available views, each corresponding to a different version of the probe image. Performing ID using the single channel model with multiplicative color model takes the form
ID = argmin_a min_{t,l} Σ_{v=1}^V Σ_{c=R,G,B} Σ_{p∈[0,1]²} (I^{v,c}_probe(p) − e^{t_c + Σ_i l_i φ_i(x^v(p))} T^c_{CAD^a}(x^v(p)))².   (83)
A fast version of the ID may be accomplished using the log-minimization:

ID = argmin_a min_{t,l} Σ_{v=1}^V Σ_{c=R,G,B} Σ_{p∈[0,1]²} (log I^{v,c}_probe(p) − t_c − Σ_i l_i φ_i(x^v(p)) − log T^c_{CAD^a}(x^v(p)))².   (84)
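A sketch in the spirit of this log-minimization is shown below: for each registered model the lighting coefficients are solved in closed form in log coordinates and the residual serves as the distance to the probe; the basis, gallery textures, and probe are synthetic assumptions.

# Sketch: ID by log-domain texture comparison.  For each registered CAD model,
# solve the log-linear lighting fit to the probe and use the residual as the
# distance; the identity is the model with the smallest residual.
import numpy as np

rng = np.random.default_rng(7)
P, d, A = 400, 4, 5
phi = rng.normal(size=(P, d))                          # basis phi_i(x(p)), assumed known
gallery = rng.uniform(50, 200, size=(A, P, 3))         # registered textures T_CAD^a

true_id = 2
l_true = rng.normal(scale=0.2, size=(d, 3))
probe = np.exp(phi @ l_true) * gallery[true_id]        # probe = relit registered texture

def log_distance(T_model):
    rhs = np.log(probe) - np.log(T_model)
    l_hat, *_ = np.linalg.lstsq(phi, rhs, rcond=None)  # closed-form lighting fit
    return np.sum((rhs - phi @ l_hat) ** 2)            # residual as the distance

scores = [log_distance(gallery[a]) for a in range(A)]
print(int(np.argmin(scores)) == true_id)               # -> True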
ID Lifting Using Geometric and Textured Features
[0095] ID can be performed by matching both the geometry and the texture features.

Claims

What is claimed is:
1. A method of estimating a 3D shape of a target head from at least one source 2D image of the head, the method comprising: providing a library of candidate 3D avatar models; and searching among the candidate 3D avatar models to locate a best-fit 3D avatar, said searching involving for each 3D avatar model among the library of 3D avatar models computing a measure of fit between a 2D projection of that 3D avatar model and the at least one source 2D image, the measure of fit being based on at least one of (i) a correspondence between feature points in a 3D avatar and feature points in the at least one source 2D image, wherein at least one of the feature points in the at least one source 2D image is unlabeled, and (ii) a correspondence between feature points in a 3D avatar and their reflections in an avatar plane of symmetry, and feature points in the at least one source 2D image, wherein the best-fit 3D avatar is the 3D avatar model among the library of 3D avatar models that yields a best measure of fit and wherein the estimate of the 3D shape of the target head is derived from the best-fit 3D avatar.
2. The method of claim 1, further comprising:
generating a set of notional lightings of the best-fit 3D avatar; searching among the notional lightings of the best-fit avatar to locate a best notional lighting, said searching involving for each notional lighting of the best-fit avatar computing a measure of fit between a 2D projection of the best-fit avatar under that lighting and the at least one source 2D image, wherein the best notional lighting is the lighting that yields a best measure of fit, and wherein an estimate of the lighting of the target head is derived from the best notional lighting.
3. The method of claim 2, wherein the set of notional lightings comprises a set of photometric basis functions and at least one of small and large variations from the photometric basis functions.
4. The method of claim 1, further comprising: generating a 2D projection of the best-fit avatar; comparing the 2D projection with each member of a gallery of 2D facial images; and positively identifying the target head with a member of the gallery if a measure of fit between the 2D projection and that member exceeds a pre-determined threshold.
5. The method of claim 1, further comprising: after locating the best-fit 3D avatar, searching among deformations of the best- fit 3D avatar to locate a best-fit deformed 3D avatar, said searching involving computing the measure of fit between each deformed best-fit avatar and the at least one 2D projection, wherein the best-fit deformed 3D avatar is the deformed 3D avatar model that yields a best measure of fit and wherein the 3D shape of the target head is derived from the best-fit deformed 3D avatar.
6. The method of claim 5, wherein the deformations comprise at least one of small deformations and large deformations.
7. The method of claim 5, further comprising:
generating a set of notional lightings of the deformed best-fit avatar; and searching among the notional lightings of the best-fit deformed avatar to locate a best notional lighting, said searching involving for each notional lighting of the best-fit deformed avatar computing a measure of fit between a 2D projection of the best-fit deformed avatar under that lighting and the at least one source 2D image, wherein the best notional lighting is the lighting that yields a best measure of fit, and wherein an estimate of the lighting of the target head is derived from the best notional lighting.
8. The method of claim 5, further comprising:
generating a 2D projection of the best-fit deformed avatar; comparing the 2D projection with each member of a gallery of 2D facial images; and positively identifying the target head with a member of the gallery if a measure of fit between the 2D projection and that member exceeds a pre-determined threshold.
9. A method of estimating a 3D shape of a target head from at least one source 2D image of the head, the method comprising:
providing a library of candidate 3D avatar models; and searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit 3D avatar, said searching involving, for each 3D avatar model among the library of 3D avatar models and each of its deformations, computing a measure of fit between a 2D projection of that deformed 3D avatar model and the at least one source 2D image, the measure of fit being based on at least one of (i) a correspondence between feature points in a deformed 3D avatar and feature points in the at least one source 2D image, wherein at least one of the feature points in the at least one source 2D image is unlabeled, and (ii) a correspondence between feature points in a deformed 3D avatar and their reflections in an avatar plane of symmetry, and feature points in the at least one source 2D image, wherein the best-fit deformed 3D avatar is the deformed 3D avatar model that yields a best measure of fit and wherein the estimate of the 3D shape of the target head is derived from the best-fit deformed 3D avatar.
10. The method of claim 9, wherein the deformations comprise at least one of small deformations and large deformations.
11. The method of claim 9, wherein the at least one source 2D projection comprises a single 2D projection and a 3D surface texture of the target head is known.
12. The method of claim 9, wherein the at least one source 2D projection comprises a single 2D projection, a 3D surface texture of the target head is initially unknown, and the measure of fit is based on the degree of correspondence between feature points in the best-fit deformed 3D avatar and their reflections in the avatar plane of symmetry, and feature points in the at least one source 2D image.
13. The method of claim 9, wherein the at least one source 2D projection comprises at least two projections, and a 3D surface texture of the target head is initially unknown.
14. A method of generating a geometrically normalized 3D representation of a target head from at least one source 2D projection of the head, the method comprising:
providing a library of candidate 3D avatar models; and searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit 3D avatar, said searching involving, for each 3D avatar model among the library of 3D avatar models and each of its deformations, computing a measure of fit between a 2D projection of that deformed 3D avatar model and the at least one source 2D image, the deformations corresponding to permanent and non-permanent features of the target head, wherein the best-fit deformed 3D avatar is the deformed 3D avatar model that yields a best measure of fit; and generating a geometrically normalized 3D representation of the target head from the best-fit deformed 3D avatar by removing deformations corresponding to non-permanent features of the target head.
15. The method of claim 14, wherein the avatar deformations comprise at least one of small deformations and large deformations.
16. The method of claim 14, further comprising generating a geometrically normalized image of the target head by projecting the normalized 3D representation into a plane corresponding to a normalized pose.
17. The method of claim 16, wherein the normalized pose corresponds to a face-on view.
18. The method of claim 16, further comprising:
comparing the normalized image of the target head with each member of a gallery of 2D facial images having the normal pose; and positively identifying the target 3D head with a member of the gallery if a measure of fit between the normalized image of the target head and that gallery member exceeds a pre-determined threshold.
19. The method of claim 14, further comprising generating a photometrically and geometrically normalized 3D representation of the target head by illuminating the normalized 3D representation with a normal lighting.
20. The method of claim 19, further comprising generating a geometrically and photometrically normalized image of the target head by projecting the geometrically and photometrically normalized 3D representation into a plane corresponding to a normalized pose.
21. The method of claim 20, wherein the normalized pose is a face-on view.
22. The method of claim 20, wherein the normal lighting corresponds to uniform, diffuse lighting.
23. A method of estimating a 3D shape of a target head from source 3D feature points of the head, the method comprising:
providing a library of candidate 3D avatar models; searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit deformed avatar, the best-fit deformed avatar having a best measure of fit to the source 3D feature points, the measure of fit being based on a correspondence between feature points in a deformed 3D avatar and the source 3D feature points, wherein the estimate of the 3D shape of the target head is derived from the best-fit deformed avatar.
24. The method of claim 23, wherein the measure of fit is based on a correspondence between feature points in a deformed 3D avatar and their reflections in an avatar plane of symmetry, and the source 3D feature points.
25. The method of claim 23, wherein at least one of the source 3D points is unlabeled.
26. The method of claim 23, wherein at least one of the source 3D feature points is a normal feature point, wherein the normal feature points specify a head surface normal direction as well as a position.
27. The method of claim 23, further comprising:
comparing the best-fit deformed avatar with each member of a gallery of 3D reference representations of heads; and positively identifying the target 3D head with a member of the gallery of 3D reference representations if a measure of fit between the best-fit deformed avatar and that member exceeds a pre-determined threshold.
28. A method of estimating a 3D shape of a target head from at least one source 2D image of the head, the method comprising:
providing a library of candidate 3D avatar models; and searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit deformed avatar, the best-fit deformed avatar having a 2D projection with a best measure of fit to the at least one source 2D image, the measure of fit being based on a correspondence between dense imagery of a projected 3D avatar and dense imagery of the at least one source 2D image, wherein at least a portion of the dense imagery of the projected avatar is generated using a mirror symmetry of the candidate avatars, wherein the estimate of the 3D shape of the target head is derived from the best- fit deformed avatar.
29. A method of positively identifying at least one source image of a target head with a member of a database of candidate facial images, the method comprising:
providing a library of 3D avatar models; searching among the 3D avatar models and among deformations of the candidate 3D avatar models to locate a source best-fit deformed avatar, the source best-fit deformed avatar having a 2D projection with a best first measure of fit to the at least one source image; for each member of the database of candidate facial images, searching among the library of 3D avatar models and their deformations to locate a candidate best-fit deformed avatar having a 2D projection with a best second measure of fit to the member of the database of candidate facial images; positively identifying the target head with a member of the database of candidate facial images if a third measure of fit between the source best-fit deformed avatar and the member candidate best-fit deformed avatar exceeds a predetermined threshold.
30. The method of claim 29, wherein the first measure of fit is based at least in part on a degree of correspondence between feature points in the source best-fit deformed avatar and their reflections in the avatar plane of symmetry, and feature points in the at least one source 2D image.
31. The method of claim 29, wherein the second measure of fit is based at least in part on a degree of correspondence between feature points in the candidate best-fit deformed avatar and their reflections in the avatar plane of symmetry, and feature points in the member of the database of candidate facial images.
PCT/US2006/039737 2005-10-11 2006-10-11 Generation of normalized 2d imagery and id systems via 2d to 3d lifting of multifeatured objects WO2007044815A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US72525105P 2005-10-11 2005-10-11
US60/725,251 2005-10-11
US11/482,242 US20070080967A1 (en) 2005-10-11 2006-06-29 Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
US11/482,242 2006-06-29

Publications (2)

Publication Number Publication Date
WO2007044815A2 true WO2007044815A2 (en) 2007-04-19
WO2007044815A3 WO2007044815A3 (en) 2009-04-16

Family

ID=37910687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/039737 WO2007044815A2 (en) 2005-10-11 2006-10-11 Generation of normalized 2d imagery and id systems via 2d to 3d lifting of multifeatured objects

Country Status (2)

Country Link
US (2) US20070080967A1 (en)
WO (1) WO2007044815A2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131063B2 (en) 2008-07-16 2012-03-06 Seiko Epson Corporation Model-based object image processing
US8204301B2 (en) 2009-02-25 2012-06-19 Seiko Epson Corporation Iterative data reweighting for balanced model learning
US8208717B2 (en) 2009-02-25 2012-06-26 Seiko Epson Corporation Combining subcomponent models for object image modeling
US8260039B2 (en) 2009-02-25 2012-09-04 Seiko Epson Corporation Object model fitting using manifold constraints
US8260038B2 (en) 2009-02-25 2012-09-04 Seiko Epson Corporation Subdivision weighting for robust object model fitting
US10574883B2 (en) 2017-05-31 2020-02-25 The Procter & Gamble Company System and method for guiding a user to take a selfie
US10614623B2 (en) 2017-03-21 2020-04-07 Canfield Scientific, Incorporated Methods and apparatuses for age appearance simulation
US10621771B2 (en) 2017-03-21 2020-04-14 The Procter & Gamble Company Methods for age appearance simulation
US10818007B2 (en) 2017-05-31 2020-10-27 The Procter & Gamble Company Systems and methods for determining apparent skin age
US11055762B2 (en) 2016-03-21 2021-07-06 The Procter & Gamble Company Systems and methods for providing customized product recommendations

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2643865A1 (en) * 2006-02-28 2007-09-07 National Research Council Of Canada Method and system for locating landmarks on 3d models
JP4585471B2 (en) * 2006-03-07 2010-11-24 株式会社東芝 Feature point detection apparatus and method
CA2668941C (en) * 2006-11-17 2015-12-29 Thomson Licensing System and method for model fitting and registration of objects for 2d-to-3d conversion
US20080212835A1 (en) * 2007-03-01 2008-09-04 Amon Tavor Object Tracking by 3-Dimensional Modeling
JP4337064B2 (en) * 2007-04-04 2009-09-30 ソニー株式会社 Information processing apparatus, information processing method, and program
GB0707921D0 (en) * 2007-04-24 2007-05-30 Renishaw Plc Apparatus and method for surface measurement
CN101441781B (en) * 2007-11-23 2011-02-02 鸿富锦精密工业(深圳)有限公司 Curved surface overturning method
US9223469B2 (en) * 2008-08-22 2015-12-29 Intellectual Ventures Fund 83 Llc Configuring a virtual world user-interface
EP2364427B1 (en) * 2008-10-29 2020-05-13 Renishaw PLC Method for coordinate measuring system
US20110025689A1 (en) * 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation
ES2353099B1 (en) * 2009-07-30 2012-01-02 Fundacion Para Progreso Soft Computing METHOD AND FORENSIC IDENTIFICATION SYSTEM BY SOFT COMPUTING BASED CRANEOFACIAL SUPERPOSITION.
JP5423379B2 (en) * 2009-08-31 2014-02-19 ソニー株式会社 Image processing apparatus, image processing method, and program
JP2011090466A (en) * 2009-10-21 2011-05-06 Sony Corp Information processing apparatus, method, and program
US9244533B2 (en) * 2009-12-17 2016-01-26 Microsoft Technology Licensing, Llc Camera navigation for presentations
KR20110070056A (en) * 2009-12-18 2011-06-24 한국전자통신연구원 Method and apparatus for easy and intuitive generation of user-customized 3d avatar with high-quality
KR20110071213A (en) * 2009-12-21 2011-06-29 한국전자통신연구원 Apparatus and method for 3d face avatar reconstruction using stereo vision and face detection unit
US20110237980A1 (en) * 2010-03-29 2011-09-29 Cranial Technologies, Inc. Assessment and classification system
US8570343B2 (en) * 2010-04-20 2013-10-29 Dassault Systemes Automatic generation of 3D models from packaged goods product images
USRE49044E1 (en) 2010-06-01 2022-04-19 Apple Inc. Automatic avatar creation
US9132352B1 (en) 2010-06-24 2015-09-15 Gregory S. Rabin Interactive system and method for rendering an object
US9053562B1 (en) * 2010-06-24 2015-06-09 Gregory S. Rabin Two dimensional to three dimensional moving image converter
US9454823B2 (en) 2010-07-28 2016-09-27 Varian Medical Systems, Inc. Knowledge-based automatic image segmentation
US20130130797A1 (en) * 2010-08-24 2013-05-23 Janos Stone Systems and methods for transforming and/or generating a tangible physical structure based on user input information
US8494237B2 (en) * 2010-11-08 2013-07-23 Cranial Technologies, Inc Method and apparatus for processing digital image representations of a head shape
US8442288B2 (en) * 2010-11-08 2013-05-14 Cranial Technologies, Inc. Method and apparatus for processing three-dimensional digital mesh image representative data of three-dimensional subjects
US8711210B2 (en) * 2010-12-14 2014-04-29 Raytheon Company Facial recognition using a sphericity metric
WO2012082077A2 (en) * 2010-12-17 2012-06-21 Agency For Science, Technology And Research Pose-independent 3d face reconstruction from a sample 2d face image
US9165404B2 (en) * 2011-07-14 2015-10-20 Samsung Electronics Co., Ltd. Method, apparatus, and system for processing virtual world
US10013787B2 (en) 2011-12-12 2018-07-03 Faceshift Ag Method for facial animation
US9671566B2 (en) 2012-06-11 2017-06-06 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US9852512B2 (en) 2013-03-13 2017-12-26 Electronic Scripting Products, Inc. Reduced homography based on structural redundancy of conditioned motion
US8970709B2 (en) * 2013-03-13 2015-03-03 Electronic Scripting Products, Inc. Reduced homography for recovery of pose parameters of an optical apparatus producing image data with structural uncertainty
WO2015006784A2 (en) 2013-07-12 2015-01-15 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US10295338B2 (en) 2013-07-12 2019-05-21 Magic Leap, Inc. Method and system for generating map data from an image
CN103729510B (en) * 2013-12-25 2016-09-14 合肥工业大学 Exact mirror-symmetry computation method for complex 3D models based on integral transforms
US10203762B2 (en) 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9699123B2 (en) 2014-04-01 2017-07-04 Ditto Technologies, Inc. Methods, systems, and non-transitory machine-readable medium for incorporating a series of images resident on a user device into an existing web browser session
KR102204919B1 (en) * 2014-06-14 2021-01-18 매직 립, 인코포레이티드 Methods and systems for creating virtual and augmented reality
US10852838B2 (en) 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9589178B2 (en) * 2014-09-12 2017-03-07 Htc Corporation Image processing with facial features
CN104463969B (en) * 2014-12-09 2017-09-26 广西界围信息科技有限公司 Method for building a model from geographic photos captured by aerial oblique photography
RU2582852C1 (en) * 2015-01-21 2016-04-27 Общество с ограниченной ответственностью "Вокорд СофтЛаб" (ООО "Вокорд СофтЛаб") Automatic construction of a 3D face model from a series of 2D images or video
CN106033621B (en) 2015-03-17 2018-08-24 阿里巴巴集团控股有限公司 Method and device for three-dimensional modeling
KR20170000748A (en) 2015-06-24 2017-01-03 삼성전자주식회사 Method and apparatus for face recognition
JP6754619B2 (en) * 2015-06-24 2020-09-16 Samsung Electronics Co., Ltd. Face recognition method and device
US11577159B2 (en) 2016-05-26 2023-02-14 Electronic Scripting Products Inc. Realistic virtual/augmented/mixed reality viewing and interactions
US9875398B1 (en) 2016-06-30 2018-01-23 The United States Of America As Represented By The Secretary Of The Army System and method for face recognition with two-dimensional sensing modality
US10452896B1 (en) * 2016-09-06 2019-10-22 Apple Inc. Technique for creating avatar from image data
US10483004B2 (en) * 2016-09-29 2019-11-19 Disney Enterprises, Inc. Model-based teeth reconstruction
US10586379B2 (en) * 2017-03-08 2020-03-10 Ebay Inc. Integration of 3D models
US20180268614A1 (en) * 2017-03-16 2018-09-20 General Electric Company Systems and methods for aligning PMI object on a model
US10977509B2 (en) * 2017-03-27 2021-04-13 Samsung Electronics Co., Ltd. Image processing method and apparatus for object detection
KR101908851B1 (en) * 2017-04-14 2018-10-17 한국 한의학 연구원 Apparatus and method for correcting facial posture
CN109145684B (en) * 2017-06-19 2022-02-18 西南科技大学 Head state monitoring method based on region best matching feature points
CN107978010B (en) * 2017-11-27 2021-03-05 浙江工商大学 Staged precise shape matching method
CN108366250B (en) * 2018-02-06 2020-03-17 深圳市鹰硕技术有限公司 Image display system, method and digital glasses
US10796468B2 (en) 2018-02-26 2020-10-06 Didimo, Inc. Automatic rig creation process
US11508107B2 (en) 2018-02-26 2022-11-22 Didimo, Inc. Additional developments to the automatic rig creation process
US11741650B2 (en) 2018-03-06 2023-08-29 Didimo, Inc. Advanced electronic messaging utilizing animatable 3D models
US11062494B2 (en) * 2018-03-06 2021-07-13 Didimo, Inc. Electronic messaging utilizing animatable 3D models
US11727656B2 (en) 2018-06-12 2023-08-15 Ebay Inc. Reconstruction of 3D model with immersive experience
US11049310B2 (en) * 2019-01-18 2021-06-29 Snap Inc. Photorealistic real-time portrait animation
CN110032927A (en) * 2019-02-27 2019-07-19 视缘(上海)智能科技有限公司 Face identification method
CN110111246B (en) 2019-05-15 2022-02-25 北京市商汤科技开发有限公司 Virtual avatar generation method, device, and storage medium
US11645800B2 (en) 2019-08-29 2023-05-09 Didimo, Inc. Advanced systems and methods for automatically generating an animatable object from various types of user input
US11182945B2 (en) 2019-08-29 2021-11-23 Didimo, Inc. Automatically generating an animatable object from various types of user input
CN110728668B (en) * 2019-10-09 2022-06-28 中国科学院光电技术研究所 Spatial-domain high-pass filter for preserving small-target shape
US11417011B2 (en) * 2020-02-11 2022-08-16 Nvidia Corporation 3D human body pose estimation using a model trained from unlabeled multi-view data

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5742291A (en) * 1995-05-09 1998-04-21 Synthonics Incorporated Method and apparatus for creation of three-dimensional wire frames
US5844573A (en) * 1995-06-07 1998-12-01 Massachusetts Institute Of Technology Image compression by pointwise prototype correspondence using shape and texture information
US6226418B1 (en) * 1997-11-07 2001-05-01 Washington University Rapid convolution based large deformation image matching via landmark and volume imagery
US5990901A (en) * 1997-06-27 1999-11-23 Microsoft Corporation Model based image editing and correction
AU9663098A (en) * 1997-09-23 1999-04-12 Enroute, Inc. Generating three-dimensional models of objects defined by two-dimensional image data
US6249600B1 (en) * 1997-11-07 2001-06-19 The Trustees Of Columbia University In The City Of New York System and method for generation of a three-dimensional solid model
CA2312315A1 (en) * 1997-12-01 1999-06-10 Arsev H. Eraslan Three-dimensional face identification system
US6362833B2 (en) * 1998-04-08 2002-03-26 Intel Corporation Method and apparatus for progressively constructing a series of morphs between two-dimensional or three-dimensional models
IT1315446B1 (en) * 1998-10-02 2003-02-11 Cselt Centro Studi Lab Telecom Procedure for creating three-dimensional facial models starting from face images
JP4025442B2 (en) * 1998-12-01 2007-12-19 富士通株式会社 3D model conversion apparatus and method
JP3575679B2 (en) * 2000-03-31 2004-10-13 日本電気株式会社 Face matching method, recording medium storing the matching method, and face matching device
US6975750B2 (en) * 2000-12-01 2005-12-13 Microsoft Corp. System and method for face recognition using synthesized training images
GB2383915B (en) * 2001-11-23 2005-09-28 Canon Kk Method and apparatus for generating models of individuals
US7221809B2 (en) * 2001-12-17 2007-05-22 Genex Technologies, Inc. Face recognition system and method
US20030169906A1 (en) * 2002-02-26 2003-09-11 Gokturk Salih Burak Method and apparatus for recognizing objects
KR100528330B1 (en) * 2003-02-19 2005-11-16 삼성전자주식회사 Method for coating surface of inorganic powder and coated inorganic powder manufactured using the same
JP2006520054A (en) * 2003-03-06 2006-08-31 アニメトリクス,インク. Image matching from invariant viewpoints and generation of 3D models from 2D images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6556196B1 (en) * 1999-03-19 2003-04-29 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. Method and apparatus for the processing of images
US6940545B1 (en) * 2000-02-28 2005-09-06 Eastman Kodak Company Face detecting camera and method
US20020012454A1 (en) * 2000-03-09 2002-01-31 Zicheng Liu Rapid computer modeling of faces for animation
EP1139269A2 (en) * 2000-03-30 2001-10-04 Nec Corporation Method for matching a two-dimensional image to one of a plurality of three-dimensional candidate models contained in a database

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao et al.: '3D Model Enhanced Face Recognition', In Proceedings of the International Conference on Image Processing, 2000, pages 50-53 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131063B2 (en) 2008-07-16 2012-03-06 Seiko Epson Corporation Model-based object image processing
US8204301B2 (en) 2009-02-25 2012-06-19 Seiko Epson Corporation Iterative data reweighting for balanced model learning
US8208717B2 (en) 2009-02-25 2012-06-26 Seiko Epson Corporation Combining subcomponent models for object image modeling
US8260039B2 (en) 2009-02-25 2012-09-04 Seiko Epson Corporation Object model fitting using manifold constraints
US8260038B2 (en) 2009-02-25 2012-09-04 Seiko Epson Corporation Subdivision weighting for robust object model fitting
US11055762B2 (en) 2016-03-21 2021-07-06 The Procter & Gamble Company Systems and methods for providing customized product recommendations
US10614623B2 (en) 2017-03-21 2020-04-07 Canfield Scientific, Incorporated Methods and apparatuses for age appearance simulation
US10621771B2 (en) 2017-03-21 2020-04-14 The Procter & Gamble Company Methods for age appearance simulation
US10574883B2 (en) 2017-05-31 2020-02-25 The Procter & Gamble Company System and method for guiding a user to take a selfie
US10818007B2 (en) 2017-05-31 2020-10-27 The Procter & Gamble Company Systems and methods for determining apparent skin age

Also Published As

Publication number Publication date
US20070080967A1 (en) 2007-04-12
US20100149177A1 (en) 2010-06-17
WO2007044815A3 (en) 2009-04-16

Similar Documents

Publication Publication Date Title
WO2007044815A2 (en) Generation of normalized 2d imagery and id systems via 2d to 3d lifting of multifeatured objects
US7221809B2 (en) Face recognition system and method
Blanz et al. Face identification across different poses and illuminations with a 3d morphable model
US8194072B2 (en) Method for synthetically relighting images of objects
US6975750B2 (en) System and method for face recognition using synthesized training images
Blanz et al. Fitting a morphable model to 3D scans of faces
US7643685B2 (en) Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
JP4718491B2 (en) Method for determining the direction of the main light source in an image
Wang et al. Face re-lighting from a single image under harsh lighting conditions
EP1496466B1 (en) Face shape recognition from stereo images
JP4379459B2 (en) Object collation method, object collation apparatus, and recording medium recording the program
JP4552431B2 (en) Image collation apparatus, image collation method, and image collation program
Fransens et al. Parametric stereo for multi-pose face recognition and 3D-face modeling
Kahraman et al. Robust face alignment for illumination and pose invariant face recognition
JP3577908B2 (en) Face image recognition system
Johnson et al. Inferring illumination direction estimated from disparate sources in paintings: an investigation into Jan Vermeer's Girl with a pearl earring
Lee et al. A practical face relighting method for directional lighting normalization
Colbry et al. Canonical face depth map: A robust 3d representation for face verification
Nishino et al. Using eye reflections for face recognition under varying illumination
Ishiyama et al. An appearance model constructed on 3-D surface for robust face recognition against pose and illumination variations
Ibikunle et al. Face recognition using line edge mapping approach
Pizarro et al. Light-invariant fitting of active appearance models
Smith et al. Single image estimation of facial albedo maps
Romeiro et al. Model-based stereo with occlusions
Toroman Utilization of 3-D Models for Enhancing Facial Recognition over Various Angles

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 06816720

Country of ref document: EP

Kind code of ref document: A2