WO2004081853A1 - Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery - Google Patents

Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery Download PDF

Info

Publication number
WO2004081853A1
WO2004081853A1 PCT/US2004/006604
Authority
WO
WIPO (PCT)
Prior art keywords
source
representation
projection
feature items
Prior art date
Application number
PCT/US2004/006604
Other languages
French (fr)
Inventor
Michael Miller
Original Assignee
Animetrics, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Animetrics, Inc. filed Critical Animetrics, Inc.
Priority to EP04717974A priority Critical patent/EP1599828A1/en
Priority to JP2006509130A priority patent/JP2006520054A/en
Publication of WO2004081853A1 publication Critical patent/WO2004081853A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • G06V20/647Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/772Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the present invention relates to object modeling and matching systems, and more particularly to the generation of a three-dimensional model of a target object from two- and three-dimensional input.
  • In many situations, it is useful to construct a three-dimensional (3D) model of an object when only a partial description of the object is available. In a typical situation, one or more two-dimensional (2D) images of the 3D object may be available, perhaps photographs taken from different viewpoints.
  • a common method of creating a 3D model of a multi-featured object is to start with a base 3D model which describes a generic or typical example of the type of object being modeled, and then to add texture to the model using one or more 2D images of the object.
  • a 3D "avatar” i.e., an electronic 3D graphical or pictorial representation
  • a 3D "avatar” i.e., an electronic 3D graphical or pictorial representation
  • mapping onto the model a texture from one or more 2D images of the face. See U.S. Patent No. 6,532,011 B1 to Francini et al., and U.S. Patent No. 6,434,278 B1 to Hashimoto.
  • the main problem with this approach is that the 3D geometry is not highly defined or tuned for the actual target object which is being generated.
  • a common variant of the above approach is to use a set of 3D base models and select the one that most resembles the target object before performing the texture mapping step.
  • a single parameterized base model is used, and the parameters of the model are adjusted to best approximate the target. See U.S. Patent No. 6,556,196 B1 to Blanz et al. These methods serve to refine the geometry to make it fit the target, at least to some extent. However, for any target object with a reasonable range of intrinsic variability, the geometry of the model will still not be well tuned to the target. This lack of geometric fit will detract from the verisimilitude of the 3D model to the target object.
  • the present invention provides an automated method and system for generating an optimal 3D model of a target multifeatured object when only partial source data describing the object is available.
  • the partial source data often consists of one or more 2D projections of the target object or an obscuration of a single projection, but may also include 3D data, such as from a 3D camera or scanner.
  • the invention uses a set of reference 3D representations that span, to the extent practicable, the variations of the class of objects to which the target object belongs.
  • the invention may automatically identify feature items common to the source data and to the reference representations, and establish correspondences between them.
  • the system may identify points at the extremities of the eyes and mouth, or the nose profile, and establish correspondences between such features in the source data and in the reference representations. Manual identification and matching of feature items can also be incorporated if desired.
  • all possible positions (i.e., orientations and translations) for each 3D reference representation are searched to identify the position and reference representation combination whose projection most closely matches the source data. The closeness of match is determined by a measure such as the minimum mean-squared error (MMSE) between the feature items in the projection of the 3D representation and the corresponding feature items in the source projection.
  • a comparison is performed in 3D between the estimated deprojected positions of the feature items from the 2D source projection and the corresponding feature items of the 3D representation.
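The viewpoint search described above can be sketched as a brute-force minimization: project the reference representation's feature points into 2D for each candidate pose and keep the pose whose projection best matches the source features in the mean-squared sense. The sketch below is illustrative only — it searches yaw alone under an orthographic projection, whereas the patent searches all rigid motions under projective geometry, and all names here are hypothetical:

```python
import math

def project(points3d, yaw):
    """Rotate about the vertical (y) axis by `yaw`, then drop z (orthographic projection)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x + s * z, y) for (x, y, z) in points3d]

def mse(proj, target2d):
    """Mean-squared error between projected and measured 2D feature points."""
    return sum((px - tx) ** 2 + (py - ty) ** 2
               for (px, py), (tx, ty) in zip(proj, target2d)) / len(proj)

def best_pose(features3d, source2d, n_steps=360):
    """Exhaustive search over yaw angles; returns (error, best yaw)."""
    return min((mse(project(features3d, 2 * math.pi * k / n_steps), source2d),
                2 * math.pi * k / n_steps)
               for k in range(n_steps))

# toy avatar feature points and a source 'photograph' captured at 30 degrees yaw
feats = [(0.0, 0.0, 1.0), (1.0, 0.5, 0.0), (-1.0, 0.5, 0.0)]
source = project(feats, math.radians(30))
err, yaw = best_pose(feats, source)
```

With a one-degree search grid the true 30-degree viewpoint is recovered exactly, since it lies on the grid.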
  • the closest-fitting 3D reference representation may then be deformed to optimize the correspondence with the source projection.
  • Each point in the mesh which defines the geometry of the 3D representation is free to move during the deformation.
  • the search for the best-fitting position (i.e., orientation and translation)
  • the geometry of the 3D model is tailored to the target object in two ways.
  • the invention requires no information about the viewpoint from which the 2D source projection was captured, because a search is performed over all possible viewpoints, and the viewpoint is taken to be that which corresponds to the closest fit between the projected 3D representation and the 2D source data.
  • the invention comprises a method of comparing at least one source 2D projection of a source multifeatured object to a reference library of 3D reference objects.
  • a plurality of reference 3D representations of generically similar multifeatured objects is provided, and a viewpoint-invariant search of the reference 3D representations is performed to locate the reference 3D representation having a 2D projection most resembling the source projection(s).
  • resemblance is determined by a degree of alignment between feature items in the 3D representation and corresponding feature items in the source 2D projection(s).
  • Each reference 3D representation may be searched over a range of possible 2D projections of the 3D representation without actually generating any projections.
  • the search over a range of possible 2D projections may comprise computing a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections.
  • the rigid motions may comprise pitch, roll, yaw, and translation in three dimensions.
  • Automatic camera calibration may be performed by estimation of camera parameters, such as aspect ratio and field of view, from image landmarks.
  • the optimum rigid motion may be determined by estimating a conditional mean pose or geometric registration as it relates to feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D representation such that the feature items are projectionally consistent with feature items in source 2D projection(s).
  • MMSE estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D representation are generated.
  • the rigid motion may be constrained by known 3D position information associated with the source 2D projection(s).
  • the feature items may include curves as well as points which are extracted from the source projection using dynamic programming. Further, areas as well as surfaces and/or subvolumes may be used as features generated via isocontouring (such as via the Marching Cubes algorithm) or automated segmentation algorithms.
  • the feature items used in the matching process may be found automatically by using correspondences between the 2D source projection(s) and projected imagery of at least one reference 3D object.
  • the invention may further comprise the step of creating a 3D representation of the source 2D projection(s) by deforming the located (i.e., best-fitting) reference 3D representation so as to resemble the source multifeatured object.
  • the deformation is a large deformation diffeomorphism, which serves to preserve the geometry and topology of the reference 3D representation.
  • the deformation step may deform the located 3D representation so that feature items in the source 2D projection(s) align with corresponding features in the located reference 3D representation.
  • the deformation step may occur with or without rigid motions and may include affine motions.
  • the deformation step may be constrained by at least one of known 3D position information associated with the source 2D projection(s), and 3D data ofthe source object.
  • the deformation may be performed using a closed form expression.
  • the invention comprises a system for comparing at least one source 2D projection of a source multifeatured object to a reference library of 3D reference objects.
  • the system comprises a database comprising a plurality of reference 3D representations of generically similar multifeatured objects and an analyzer for performing a viewpoint-invariant search of the reference 3D representations to locate the reference 3D representation having a 2D projection most resembling the source projection(s).
  • the analyzer determines resemblance by a degree of alignment between feature items in the 3D representation and corresponding feature items in the source 2D projection(s).
  • the analyzer may search each reference 3D representation over a range of possible 2D projections of the 3D representation without actually generating any projections.
  • the analyzer searches over a range of possible 2D projections by computing a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections.
  • the rigid motions may comprise pitch, roll, yaw, and translation in three dimensions.
  • the analyzer may be configured to perform automatic camera calibration by estimating camera parameters, such as aspect ratio and field of view, from image landmarks.
  • the analyzer is configured to determine the optimum rigid motion by estimating a conditional mean of feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D representation such that the feature items are projectionally consistent with feature items in the source 2D projection(s).
  • the analyzer is further configured to generate MMSE estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D representation.
  • the rigid motion may be constrained by known 3D position information associated with the source 2D projection(s).
  • the analyzer is configured to extract feature items from the source projection using dynamic programming. In further embodiments, the analyzer may be configured to find feature items used in the matching process automatically by using correspondences between source imagery and projected imagery of at least one reference 3D object.
  • the invention may further comprise a deformation module for creating a 3D representation of the at least one source 2D projection by deforming the located (i.e., best-fitting) reference 3D representation so as to resemble the source multifeatured object.
  • the deformation module deforms the located reference 3D representation using a large deformation diffeomorphism, which serves to preserve the geometry and topology of the reference 3D representation.
  • the deformation module may deform the located 3D representation so that feature items in the source 2D projection(s) align with corresponding features in the located reference 3D representation.
  • the deformation module may or may not use rigid motions and may use affine motions.
  • the deformation module may be constrained by at least one of known 3D position information associated with the source 2D projection(s), and 3D data of the source object.
  • the deformation module may operate in accordance with a closed form expression.
  • the invention comprises a method of comparing a source 3D object to at least one reference 3D object.
  • the method involves creating 2D representations of the source object and the reference object(s) and using projective geometry to characterize a correspondence between the source 3D object and a reference 3D object.
  • the correspondence may be characterized by a particular viewpoint for the 2D representation of the 3D source object.
  • the invention comprises a system for comparing a source 3D object to at least one reference 3D object.
  • the system comprises a projection module for creating 2D representations of the source object and the reference object(s) and an analyzer which uses projective geometry to characterize a correspondence between the source 3D object and a reference 3D object.
  • the above described methods and systems are used for the case when the 3D object is a face and the reference 3D representations are avatars.
  • the invention comprises a method for creating a 3D representation from at least one source 2D projection of a source multifeatured object.
  • at least one reference 3D representation of a generically similar object is provided, one of the provided representation(s) is located, and a 3D representation of the source 2D projection(s) is created by deforming the located reference representation in accordance with the source 2D projection(s) so as to resemble the source multifeatured object.
  • the source 2D projection(s) is used to locate the reference representation.
  • the set of reference representations includes more than one member, and the reference most resembling the source 2D projection(s) is located by performing a viewpoint-invariant search of the set of reference representations, without necessarily actually generating any projections.
  • the search may include computing a rigid motion of the reference representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the source projections.
  • a 3D representation of the source projection(s) is created by deforming the located reference representation so as to resemble the source multifeatured object.
  • the deformation may be a large deformation diffeomorphism.
  • the deformation deforms the located reference so that feature items in the source projection(s) align with corresponding feature items in the located 3D reference representation.
  • the deformation is performed in real time.
  • the invention comprises a system for creating a 3D representation from at least one source 2D projection of a source multifeatured object.
  • the system includes a database of at least one reference 3D representation of a generically similar object, and an analyzer for locating one of the provided representation(s).
  • the system further includes a deformation module for creating a 3D representation of the source 2D projection(s) by deforming the located reference representation in accordance with the source 2D projection(s) so as to resemble the source multifeatured object.
  • the analyzer uses the source 2D projection(s) to locate the reference representation.
  • the set of reference representations includes more than one member, and the analyzer locates the reference most resembling the source 2D projection(s) by performing a viewpoint-invariant search of the set of reference representations, without necessarily actually generating any projections.
  • the search may include computing a rigid motion of the reference representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the source projections.
  • the deformation module creates a 3D representation of the source projection(s) by deforming the located reference representation so as to resemble the source multifeatured object.
  • the deformation may be a large deformation diffeomorphism.
  • the deformation module deforms the located reference so that feature items in the source projection(s) align with corresponding feature items in the located 3D reference representation.
  • the deformation module operates in real time.
  • Figure 1 schematically illustrates the various components of the invention, starting with the target object and the reference objects, and yielding an optimal 3D model after performing a search and deformation.
  • Figures 2A, 2B, and 2C schematically illustrate the components of a 3D avatar.
  • Figure 3 schematically illustrates the matching of feature items in the 2D imagery.
  • Figure 4 is a block diagram showing a representative hardware environment for the present invention.
  • Figure 5 is a block diagram showing components of the analyzer illustrated in Figure 4.
  • Figure 6 is a block diagram showing the key functions performed by the analyzer.
  • Figure 1 illustrates the basic operation of the invention in the case where the 3D target multifeatured object is a face and the set of reference 3D representations are avatars.
  • the matching process starts with a set of reference 3D avatars which represent, to the extent practicable, the range of different types of heads to be matched.
  • the avatars may include faces of men and women, faces with varying quantities and types of hair, faces of different ages, and faces representing different races.
  • the reference set includes numerous (e.g., several hundred or more) avatars, though the invention works with as few as one reference object and with as many as storage space and computation time allow. In some situations, only a single reference avatar will be used.
  • the source data of the target face is illustrated as a single 2D photograph of the target 3D face taken from an unknown viewpoint.
  • selected features to be used for matching are identified in the source photograph.
  • the features may be points, such as the extremities of the mouth and eyes, or curves representing a profile, an eyebrow, or other distinctive curve, or subareas such as an eyebrow, nose, or cheek.
  • the corresponding features are identified in the reference avatars.
  • the selection of features in the target photograph, and the identification of corresponding features in the reference avatars may be done automatically according to the invention.
  • each 3D avatar is notionally subjected to all possible rigid motions, and the features are projected into 2D.
  • the positions of the projected avatar features are compared to the feature positions in the target photograph.
  • the avatar for which a particular rigid motion provides the closest fit between projected features and those ofthe source photograph is selected as the best reference avatar.
  • Figure 1 illustrates the best-fitting reference avatar to be the middle one.
  • the best reference avatar is deformed to match the target photograph more closely.
  • the features of the photograph are reverse projected to the coordinates of the best reference avatar in the orientation and position corresponding to the best match.
  • the mesh points of the avatar are then deformed in 3D to minimize the distances between the reverse-projected features of the photograph and the corresponding avatar features.
  • the avatar resulting from this deformation will be a closer approximation to the target 3D face.
  • the rigid motion search and deformation steps may be repeated iteratively, e.g., until the quality of fit no longer improves appreciably.
  • the resulting 3D model is the optimal match to the target face.
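A highly simplified sketch of the deformation step just described, assuming point features whose 3D target positions have already been estimated by reverse projection. The patent's actual update is a large deformation diffeomorphism; this plain relaxation step (all names hypothetical) only conveys the idea of moving matched mesh points toward their targets:

```python
def deform_toward(vertices, targets, step=0.5):
    """
    One relaxation step: move each matched avatar vertex toward its
    reverse-projected target point; unmatched vertices (target None) stay put.
    """
    out = []
    for v, t in zip(vertices, targets):
        if t is None:
            out.append(v)  # no feature correspondence for this mesh point
        else:
            out.append(tuple(vi + step * (ti - vi) for vi, ti in zip(v, t)))
    return out
```

Iterating such a step, interleaved with re-estimating the rigid motion, mirrors the alternation the text describes.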
  • the invention can be used effectively even when the source imagery includes only a part ofthe target face, or when the target face is partially obscured, such as, for example, by sunglasses or facial hair.
  • the approach of the invention is suitable for any multifeatured object, such as faces, animals, plants, or buildings. For ease of explanation, however, the ensuing description will focus on faces as an exemplary (and non-limiting) application.
  • Figures 2A, 2B, and 2C show the components of a representative avatar.
  • the geometry of the avatar is represented by a mesh of points in 3D which are the vertices of a set of triangular polygons approximating the surface of the avatar.
  • Figure 2A illustrates a head-on view 202 and a side view 204 of the triangular polygon representation.
  • each vertex is given a color value, and each triangular face may be colored according to an average of the color values assigned to its vertices.
  • the color values are determined from a 2D texture map 206, illustrated in Figure 2B, which may be derived from a photograph.
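As a concrete (and entirely hypothetical) illustration of this representation — a vertex mesh, triangles indexing those vertices, and per-vertex colors averaged over each face:

```python
# A minimal avatar geometry: 3D vertices, triangles indexing vertices, per-vertex RGB.
avatar = {
    "vertices": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)],
    "triangles": [(0, 1, 2)],
    "colors": [(200, 160, 140), (210, 170, 150), (190, 150, 130)],
}

def face_color(avatar, tri_index):
    """Color a triangular face by averaging its three vertices' colors, as the text describes."""
    i, j, k = avatar["triangles"][tri_index]
    cols = [avatar["colors"][v] for v in (i, j, k)]
    return tuple(sum(channel) / 3 for channel in zip(*cols))
```

In practice the per-vertex colors would be sampled from the 2D texture map via texture coordinates rather than stored directly.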
  • In Figure 2C, the final avatar with texture is illustrated in a head-on view 208 and side view 210.
  • the avatar is associated with a coordinate system which is fixed to it, and is indexed by three angular degrees of freedom (pitch, roll, and yaw) and three translational degrees of freedom of the rigid body center in three-space.
  • individual features of the avatar such as the chin, teeth, and eyes may have their own local coordinates (e.g., chin axis) which form part of the avatar description.
  • the present invention may be equally applied to avatars for which a different data representation is used.
  • texture values may be represented as RGB values, or using other color representations, such as HSL.
  • the data representing the avatar vertices and the relationships among the vertices may vary.
  • the mesh points may be connected to form non-triangular polygons representing the avatar surface.
  • the invention may include a conventional rendering engine for generating 2D imagery from a 3D avatar.
  • the rendering engine may be implemented in OpenGL, or in any other 3D rendering system, and allows for the rapid projection of a 3D avatar into a 2D image plane representing a camera view of the 3D avatar.
  • the rendering engine may also include the specification of the avatar lighting, allowing for the generation of 2D projections corresponding to varying illumination of the avatar. Lighting corresponding to a varying number of light sources of varying colors, intensities, and positions may be generated.
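At its core, rendering a vertex reduces to projecting it along the camera axis and shading it from the light sources. A minimal pinhole-plus-Lambertian sketch (the actual engine would be OpenGL or similar with full rasterization; the function and parameter names here are illustrative):

```python
def project_and_shade(vertex, normal, light_dir, focal=1.0, intensity=1.0):
    """
    Pinhole projection along the z axis, (x, y, z) -> (f*x/z, f*y/z),
    plus Lambertian shading from a single directional light.
    Returns ((u, v), brightness).
    """
    x, y, z = vertex
    u, v = focal * x / z, focal * y / z
    # Lambertian term: cosine of angle between surface normal and light direction,
    # clamped at zero for back-facing surfaces.
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return (u, v), max(0.0, intensity * dot)
```

Varying `light_dir` and `intensity` per light source, and summing contributions, gives the varying-illumination projections the text mentions.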
  • the feature items in the 2D source projection which are used for matching are selected by hand or via automated methods. These items may be points or curves.
  • suitable points may be inflection points at the lips, points on the eyebrows, points at the extremities of the eyes, or extremities of nostrils, and suitable curves may include an eyebrow or lip.
  • if the source projection includes a side view, the feature points corresponding to the profile are used and may include the tip of the nose or chin.
  • Suitable feature item curves may include distinct parts of the profile, such as nose, forehead, or chin.
  • a user interface is provided which allows the user to identify feature points individually or to mark groups of points delineated by a spline curve, or to select a set of points forming a line.
  • the automated detection of feature items on the 2D source projection is performed by searching for specific features of a face, such as eyeballs, nostrils, and lips.
  • the approach may use Bayesian classifiers and decision trees in which hierarchical detection probes are built from training data generated from actual avatars.
  • the detection probes are desirably stored at multiple pixel scales so that specific parameters, such as the orientation of a feature, are only computed on finer scales if the larger-scale probes yield a positive detection.
  • the feature detection probes may be generated from image databases representing large numbers of individuals who have had their features demarcated and segregated so that the detection probes become specifically tuned to these features.
  • the automated feature detection approach may use pattern classification, Bayes nets, neural networks, or other known techniques for determining the location of features in facial images.
  • the automated detection of curve features in the source projection may use dynamic programming approaches to generate curves from a series of points so as to reduce the amount of computation required to identify an optimal curve and maximize a sequentially additive cost function.
  • a cost function represents a sequence of features such as the contrast of the profile against background, or the darkness of an eyebrow, or the crease between lips.
  • a path of N points can be thought of as consisting of a starting node x_0 and a set of vectors v_0, v_1, ..., v_{N-1} connecting successive points.
  • dynamic programming may be used to generate maximum (or minimum) cost paths. This reduces the complexity of the algorithm from K^N to NK^2, where N is the length of a path and K is the total number of nodes, as dynamic programming takes advantage of the fact that the cost is sequentially additive, allowing a host of sub-optimal paths to be ignored. Dynamic programming techniques and systems are well-characterized in the art and can be applied as discussed herein without undue experimentation.
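The reduction from K^N to NK^2 comes from keeping, at each stage, only the best score for reaching each node; suboptimal prefixes can be discarded because the cost is sequentially additive. A generic sketch, where the node and transition costs are hypothetical stand-ins for the image-derived costs the text mentions (edge contrast, eyebrow darkness, and so on):

```python
def best_path(costs, trans):
    """
    costs[i][k]: node cost at stage i for node k.
    trans[j][k]: transition cost from node j to node k.
    Returns (max total cost, path of node indices) in O(N*K^2) time.
    """
    K = len(costs[0])
    score = list(costs[0])  # best score for reaching each node at the current stage
    back = []               # back-pointers for path reconstruction
    for stage in costs[1:]:
        new, ptr = [], []
        for k in range(K):
            # best predecessor for node k at this stage
            j = max(range(K), key=lambda j: score[j] + trans[j][k])
            new.append(score[j] + trans[j][k] + stage[k])
            ptr.append(j)
        score, back = new, back + [ptr]
    k = max(range(K), key=lambda k: score[k])
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    return max(score), path[::-1]
```

Replacing `max` with `min` gives minimum-cost paths; either way a host of sub-optimal paths is never enumerated.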
  • the 3D rotation and translation from the avatar coordinates to the source projection is determined. This corresponds to finding the viewpoint from which the source projection was captured. In preferred embodiments, this is achieved by calculating the position of the avatar in 3D space that best matches the set of selected feature items in the 2D source projection.
  • these feature items will be points, curves, or subareas and the source projection will be a photograph on which the position of these items can be measured, either manually or automatically.
  • the position calculation may be based on the computation of the conditional mean estimate of the reverse projection positions in 3D of the 2D feature items, followed by the computation of MMSE estimates for the rotation and translation parameters in 3D, given the estimates of the 3D positions of the feature items. Since position in 3D space is a vector parameter, the MMSE estimate for translation position is closed form; when substituted back into the squared error function, it gives an explicit function in terms of only the rotations. Since the rotations are not vector parameters, they may be calculated using non-linear gradient descent through the tangent space of the group or via local representation using the angular velocities of the skew-symmetric matrices.
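The closed-form character of the translation estimate is easy to see: for a fixed rotation R, minimizing the sum of ||R x_i + b − y_i||² over b gives b = ȳ − R x̄, the difference of centroids after rotation. A plain-Python sketch under the assumption that point correspondences are already known (names are illustrative):

```python
def optimal_translation(model_pts, target_pts, R):
    """Closed-form least-squares translation b = mean(y) - R*mean(x) for a fixed rotation R."""
    def mean(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))

    def apply(R, p):
        # 3x3 matrix times 3-vector
        return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

    mx, my = mean(model_pts), mean(target_pts)
    Rmx = apply(R, mx)
    return tuple(my[i] - Rmx[i] for i in range(3))
```

Substituting this b back into the squared-error function leaves a cost depending on the rotation alone, which is then minimized by the gradient methods the text describes.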
  • the distance metrics used to measure the quality of fit between the reverse projections of feature items from the source imagery and corresponding items in the 3D avatar may be, for example, Poisson or other distance metrics which may or may not satisfy the triangle inequality.
  • the feature item matching may be performed directly, without the intermediate step of calculating the conditional mean estimate of the deprojected 2D features.
  • the cost function used for positioning the 3D avatar can be minimized using algorithms such as closed form quadratic optimization, iterative Newton descent or gradient methods.
  • the 3D positioning technique is first considered without deformation of the reference avatar.
  • a 3D reference avatar is referred to as a CAD (computer-aided design) model, or by the symbol CAD.
  • the projective geometry mapping is defined along either positive or negative z; i.e., projection occurs along the z-axis.
  • the basis vectors Z_1, Z_2, Z_3 at the tangent to the 3×3 rotation element O are defined as:
  • the norm has within it a 3×3 matrix which represents this variability in the norm.
  • the rotation/translation data may be indexed in many different ways. For example, to index according to the rotation around the center of the object, rather than fixed external world coordinates, the coordinates are simply reparameterized by defining x ↦ x − x_c. All of the techniques described herein remain the same.
  • the preferred 3D algorithm for rigid motion efficiently changes states of geometric pose for comparison to the measured imagery.
  • the preferred 3D algorithm for diffeomorphic transformation of geometry matches the geometry to target 2D image features. It should be understood, however, that other methods of performing the comparison of the 3D representation to the source imagery may be used, including those that do not make use of specific image features.
  • the 3D avatar may be deformed in order to improve its correspondence with the source imagery.
  • the allowed deformations are generally limited to diffeomorphisms of the original avatar. This serves to preserve the avatar topology, guaranteeing that the result of the deformation will be a face.
  • the deformations may also enforce topological constraints, such as the symmetry of the geometry. This constraint is especially useful in situations where parts of the source object are obscured, and the full geometry is inferred from partial source information.
  • Figures 3A and 3B illustrate the effect of avatar deformation on the matching of the avatar to the source imagery.
  • feature points are shown as black crosses on the source image 302.
  • An example is the feature point at the left extremity of the left eyebrow 304.
  • the projections of the corresponding feature points belonging to the best-matching reference avatar with optimal rigid motion prior to deformation are shown as white crosses. It can be seen that the projected point corresponding to the left extremity of the left eyebrow 306 is noticeably displaced from its counterpart 304.
  • In Figure 3B, the same source image 302 is shown with feature points again indicated by black crosses. This time, the best-fitting avatar feature points, shown as white crosses, are projected after deformation.
  • the correspondence between source feature points and avatar feature points is markedly improved, as shown, for example, by the improved proximity ofthe projected left eyebrow feature point 308 to its source counterpart 304.
  • the 3D avatar diffeomorphism calculation starts with the initial conditions for placement of the avatar, determined by the feature item detection and the computation of the best-fitting rigid motion, and with the original geometry of the avatar. It then proceeds by allowing all of the points on the avatar to move independently according to a predefined formula, so as to minimize the distance between the deformed avatar points in 3D and the conditional mean estimates of the 2D landmark points reverse-projected into 3D coordinates.
  • the 3D landmark rigid-motion algorithm is applied again to the source projections and feature items to find the best estimate of the camera positions given the newly transformed avatar with its new vertex positions. Subsequently, a new diffeomorphism is generated, and this process is continued until it converges.
  • alternatively, iteration may be dispensed with, the rigid-motion calculation being performed only once and a single diffeomorphic transformation applied.
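The alternating scheme in the preceding items — re-estimate the rigid motion, then re-solve the deformation, and repeat until convergence — can be sketched in miniature. This is a toy illustration only: the translation-only alignment and the point-wise relaxation below are hypothetical stand-ins for the full rigid-motion and diffeomorphism solvers described in the text, chosen so that the control flow of the iteration is visible.

```python
# Toy sketch of the alternating pose/deformation iteration.  The real
# algorithm solves a full rotation + translation and a diffeomorphic
# deformation; here both steps are simple least-squares translations.

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def align_rigid(avatar, targets):
    """Stand-in 'rigid motion' step: translate the avatar onto the targets."""
    ca, ct = centroid(avatar), centroid(targets)
    shift = tuple(ct[i] - ca[i] for i in range(3))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in avatar]

def deform(avatar, targets, step=0.5):
    """Stand-in 'deformation' step: move every vertex toward its target."""
    return [tuple(a[i] + step * (t[i] - a[i]) for i in range(3))
            for a, t in zip(avatar, targets)]

def residual(avatar, targets):
    return sum(sum((a[i] - t[i]) ** 2 for i in range(3))
               for a, t in zip(avatar, targets))

def match(avatar, targets, tol=1e-9, max_iter=100):
    for _ in range(max_iter):
        avatar = align_rigid(avatar, targets)   # re-estimate pose
        avatar = deform(avatar, targets)        # re-solve deformation
        if residual(avatar, targets) < tol:     # continue until convergence
            break
    return avatar
```

Omitting the loop (one `align_rigid` followed by one `deform`) corresponds to the single-pass variant mentioned above.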
  • when the camera orientations (i.e., the viewpoints of the measured source projections) are known, these can be used as fixed inputs to the calculation, with no rigid transformation required.
  • the avatar may be deformed with small deformations only and no rigid motions.
  • the measured feature items are all points from the single camera view that generated the projected source image in which the feature points were measured.
  • the goal is to construct the deformation of the CAD model defining the mapping x → x + u(x), x ∈ CAD:
  • diffeomorphic deformations with no rigid motions of the avatar are applied.
  • the deformation of the CAD model defining the mapping x → φ(x), x ∈ CAD is constructed:
  • mappings are repeatedly generated from the new vector field by running an iterative algorithm until convergence:
  • the deformation may be performed in real time for the case when the rigid motions (i.e., the rotation/translation) which bring the avatar into correspondence with the one or more source 2D projections are not known.
  • a similar approach to the one above is used, with the addition of an estimation of the rigid motions using the techniques described herein.
  • the avatar is deformed in real time using diffeomorphic deformations.
  • the solution to the real-time deformation algorithm generates a deformation which may be used as an initial condition for the solution of the diffeomorphic deformation.
  • Real-time diffeomorphic deformation is accomplished by incorporating the real-time deformation solution as an initial condition and then performing a small number (on the order of 1 to 10) of iterations of the diffeomorphic deformation calculation.
  • the deformation may include affine motions.
  • For the affine motion A : x → Ax,
  • A is the 3×3 generalized linear matrix, so that,
  • per Equation 30, the least-squares estimator A : x → Ax is computed:
  • the general motion u is given by: U(p)u(x(p))
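The least-squares estimation of a generalized linear matrix A from point correspondences, as in the affine-motion items above, can be sketched as follows. This is a hedged illustration: it uses the standard normal-equations solution A = (Σᵢ yᵢxᵢᵀ)(Σᵢ xᵢxᵢᵀ)⁻¹ rather than the patent's Equation 30, whose exact form is not reproduced here, and it is written without external libraries.

```python
def solve3(M, v):
    """Solve the 3x3 system M a = v by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented copy
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][k] - f * M[col][k] for k in range(4)]
    return [M[i][3] / M[i][i] for i in range(3)]

def estimate_affine(xs, ys):
    """Return the 3x3 matrix A minimizing sum_i |A x_i - y_i|^2.

    Each row a_r of A satisfies the normal equations
    (sum_i x_i x_i^T) a_r = sum_i y_i[r] x_i.
    """
    G = [[sum(x[i] * x[j] for x in xs) for j in range(3)] for i in range(3)]
    A = []
    for row in range(3):
        rhs = [sum(y[row] * x[j] for x, y in zip(xs, ys)) for j in range(3)]
        A.append(solve3(G, rhs))
    return A
```

With exact correspondences (yᵢ = Axᵢ for some A and at least three independent points), the estimator recovers A exactly; with noisy correspondences it returns the least-squares fit.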
  • the measured target projective image has labeled points, curves, and subregions generated by diffeomorphic image matching.
  • the projective exemplar can be transformed bijectively via the diffeomorphisms onto the target candidate photograph, thereby automatically labeling the target photograph into its constituent submanifolds. Given these submanifolds, the avatars can then be matched or transformed into the labeled photographs. Accordingly, in the image plane a deformation x → φ(x) is constructed satisfying
  • the given diffeomorphism is applied to labeled points, curves, and areas in the template I₀, thereby labeling those points, curves, and areas in the target photograph.
  • the diffeomorphisms are used to define bijective correspondence in the 3D background space, and the matching is performed in the volume rather than in the image plane.
  • the avatar is deformed with small deformations only and no rigid motions.
  • the measured feature items are all points from the single camera view that generated the projected source image in which the feature points were measured.
  • the optimum selected avatar model is the one closest to the candidate in the metric. Any of a variety of distance functions may be used in the selection of the avatar, including the large-deformation metric from the diffeomorphism mapping technique described above, the real-time metric described above, the Euclidean metric, and the similitude metric of Kendall. The technique is described herein using the real-time metric.
  • the CAD model is selected to minimize the metric based on one or several sets of features from photographs, here described for one photograph:
  • the CAD model is selected which minimizes the metric distance between the measurements and the family of candidate CAD models.
  • the K matrix defining the quadratic-form metric measuring the distance is computed, with entries K(xᵢ, xⱼ), i, j = 1, …, N:
  • CAD^exact = arg min_CAD min_{A,b} Σᵢ₌₁ᴺ (Axᵢ + b − xᵢ′)′ [K⁻¹] (Axᵢ + b − xᵢ′)  (Equation 40)
  • CAD^inexact = arg min_CAD min_{A,b} Σᵢ₌₁ᴺ (Axᵢ + b − xᵢ′)′ [(K + σ²I)⁻¹] (Axᵢ + b − xᵢ′).
  • the minimum norm is determined by the error between the CAD model feature points and the photographic feature points.
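The model-selection step described above — score each candidate CAD model by a quadratic-form distance between its feature points and the measured photographic feature points, and keep the closest — can be sketched minimally. The identity weight used here is an assumption standing in for the K (or K + σ²I) matrix of the equations above; any of the metrics named earlier could be substituted, and the candidate names are purely illustrative.

```python
def quadratic_distance(model_pts, photo_pts):
    # Identity-weighted quadratic form: sum_i (x_i - y_i)^T (x_i - y_i).
    # A full implementation would insert the inverse K matrix here.
    return sum(sum((m[i] - p[i]) ** 2 for i in range(len(m)))
               for m, p in zip(model_pts, photo_pts))

def select_model(candidates, photo_pts):
    """Return the name of the candidate model minimizing the metric."""
    return min(candidates.items(),
               key=lambda kv: quadratic_distance(kv[1], photo_pts))[0]
```

In practice the candidate feature points would first be brought into correspondence with the photograph by the rigid-motion (or affine) fit, so that the metric measures residual shape difference rather than pose difference.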
  • the present invention may also be used to match to articulated target objects.
  • the diffeomorphism and real-time mapping techniques carry the template 3D representations bijectively onto the target models, carrying all of the information in the template.
  • the template models are labeled with different regions corresponding to different components ofthe articulated object.
  • the articulated regions may include teeth, eyebrows, eyelids, eyes, and jaw.
  • Each of these subcomponents can be articulated during motion of the model according to an articulation model specifying allowed modes of articulation.
  • the mapping techniques carry these triangulated subcomponents onto the targets, thereby labeling them with their subcomponents automatically.
  • the resulting selected CAD model therefore has its constituent parts automatically labeled, thereby allowing each avatar to be articulated during motion sequences.
  • the techniques for determining the rotation/translation correspondences are unchanged.
  • because the matching terms involve direct measurements in the volume, there is no need for the intermediate step of determining the dependence on the unknown z-depth via the MMSE technique. Accordingly, the best-matching rigid motion corresponds to:
  • the real-time deformation corresponds to:
  • the techniques described herein also allow for the automated calibration of camera parameters, such as the aspect ratio and field of view.
  • the calibration of the camera is determined under the assumption that there is no transformation (affine or other) of the avatar.
  • the techniques described herein may be used to compare a source 3D object to a single reference object.
  • 2D representations of the source object and the reference object are created, and the correspondence between them is characterized using mathematical optimization and projective geometry.
  • the correspondence is characterized by specifying the viewpoint from which the 2D source projection was captured.
  • the system includes a video source 402 (e.g., a video camera or a scanning device) which supplies a still input image to be analyzed.
  • the output of the video source 402 is digitized as a frame into an array of pixels by a digitizer 404.
  • the digitized images are transmitted along the system bus 406 over which all system components communicate, and may be stored in a mass storage device (such as a hard disc or optical storage unit) 408 as well as in main system memory 410 (specifically, within a partition defining a series of identically sized input image buffers) 412.
  • the operation of the illustrated system is directed by a central-processing unit ("CPU") 414.
  • the system preferably contains a graphics or image-processing board 416; this is a standard component well-known to those skilled in the art.
  • the user interacts with the system using a keyboard 418 and a position-sensing device (e.g., a mouse) 420.
  • the output of either device can be used to designate information or select particular points or areas of a screen display 422 to direct functions performed by the system.
  • the main memory 410 contains a group of modules that control the operation of the CPU 414.
  • An operating system 424 directs the execution of low-level, basic system functions such as memory allocation, file management and operation of mass storage devices 408.
  • the analyzer 426, implemented as a series of stored instructions, directs execution of the primary functions performed by the invention, as discussed below; instructions defining a user interface 428 allow straightforward interaction over the screen display 422.
  • the user interface 428 generates words or graphical images on the display 422 to prompt action by the user, and accepts commands from the keyboard 418 and/or position-sensing device 420.
  • the memory 410 includes a partition 430 for storing a database of 3D reference avatars, as described above.
  • each image buffer 412 defines a "raster," i.e., a regular 2D pattern of discrete pixel positions that collectively represent an image and may be used to drive (e.g., by means of the image-processing board 416 or an image server) the screen display 422 to display that image.
  • the content of each memory location in a frame buffer directly governs the appearance of a corresponding pixel on the display 422. It must be understood that although the modules of main memory 410 have been described separately, this is for clarity of presentation only; so long as the system performs all the necessary functions, it is immaterial how they are distributed within the system and the programming architecture thereof. Likewise, though conceptually organized as grids, pixelmaps need not actually be stored digitally in this fashion.
  • the raster pattern is usually encoded as an ordered array of pixels.
  • execution of the key tasks associated with the present invention is directed by the analyzer 426, which governs the operation of the CPU 414 and controls its interaction with main memory 410 in performing the steps necessary to match and deform reference 3D representations to match a target multifeatured object.
  • Figure 5 illustrates the components of a preferred implementation of the analyzer 426.
  • the projection module 502 takes a 3D model and makes a 2D projection of it onto any chosen plane. In general, an efficient projection module 502 will be required in order to create numerous projections over the space of rotations and translations for each of the candidate reference avatars.
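The core of such a projection module can be sketched briefly. This is a hedged, minimal illustration assuming a pinhole camera of focal length f: it rotates each vertex about a single (z) axis, translates, and performs the perspective divide, whereas the module described above supports arbitrary rotations and chosen image planes.

```python
import math

def project(points, yaw=0.0, t=(0.0, 0.0, 5.0), f=1.0):
    """Project 3D points to the image plane after a yaw rotation and translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    out = []
    for x, y, z in points:
        xr, yr, zr = c * x - s * y, s * x + c * y, z    # rotate about z-axis
        xr, yr, zr = xr + t[0], yr + t[1], zr + t[2]    # translate
        out.append((f * xr / zr, f * yr / zr))          # perspective divide
    return out
```

Sweeping `yaw` (and, in the full system, pitch, roll, and translation) over a grid is what generates the "numerous projections" the text refers to.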
  • the deformation module 504 performs one or more types of deformation on an avatar in order to make it more closely resemble the source object.
  • the deformation is performed in 3D space, with every point defining the avatar mesh being free to move in order to optimize the fit to the conditional mean estimates of the reverse-projected feature items from the source imagery. In general, deformation is applied only to the best-fitting reference object, if more than one reference object is supplied.
  • the rendering module 506 allows for the rapid projection of a 3D avatar into 2D, with the option of specifying the avatar lighting.
  • the 2D projection corresponds to the chosen lighting of the 3D avatar.
  • the feature detection module 508 searches for specific feature items in the 2D source projection.
  • the features may include eyes, nostrils, and lips, and may incorporate probes that operate at several different pixel scales.
  • Figure 6 illustrates the functions of the invention performed in main memory.
  • the system examines the source imagery and automatically detects features of a face, such as eyeballs, nostrils, and lips that can be used for matching purposes, as described above.
  • the detected feature items are reverse-projected into the coordinate frame of the candidate avatar, as described above and using equation 7.
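The reverse projection of a 2D feature into the avatar's 3D frame amounts to placing the feature on its camera ray at an estimated depth. This sketch assumes a pinhole camera of focal length f and takes the depth z as given; in the system described above, that depth comes from the conditional mean (MMSE) estimate rather than being supplied directly.

```python
def reverse_project(u, v, z, f=1.0):
    """Place the 2D image feature (u, v) at depth z along its camera ray.

    With a pinhole camera of focal length f, (u, v) is the projection of
    every 3D point (u*z/f, v*z/f, z); fixing z selects one point on the ray.
    """
    return (u * z / f, v * z / f, z)
```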
  • the optimum rotation/translation of the candidate avatar is estimated using the techniques described above and using equations 8, 9 and 10.
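In the plane, the analogous best rigid motion between corresponding point sets has a closed form, which makes the structure of the estimate easy to see. This is a deliberately simplified 2D stand-in, not the 3D estimator of the equations referenced above: the 3D case adds pitch and roll, but the pattern (match centroids, then pick the rotation from cross- and dot-product sums) is the same.

```python
import math

def best_rigid_2d(xs, ys):
    """Rotation angle and translation minimizing sum_i |R x_i + t - y_i|^2."""
    n = len(xs)
    cx = (sum(p[0] for p in xs) / n, sum(p[1] for p in xs) / n)
    cy = (sum(p[0] for p in ys) / n, sum(p[1] for p in ys) / n)
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(xs, ys):
        ax, ay = ax - cx[0], ay - cx[1]   # center the source points
        bx, by = bx - cy[0], by - cy[1]   # center the target points
        num += ax * by - ay * bx          # cross-product (sine) terms
        den += ax * bx + ay * by          # dot-product (cosine) terms
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    t = (cy[0] - (c * cx[0] - s * cx[1]), cy[1] - (s * cx[0] + c * cx[1]))
    return theta, t
```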
  • any prior information that may be available about the position of the source object with respect to the available 2D projections is added into the computation, as described herein using equations 11-13.
  • this data is used to constrain the rigid motion search as shown in step 610 and as described above with reference to equations 41-43.
  • the best-fitting avatar is selected in step 612, as described above, with reference to equations 38-40.
  • the best-fitting avatar located in step 612 is deformed in step 614.
  • 3D measurements of the source object 610 are used to constrain the deformation 614.
  • portions ofthe source imagery 616 itself may be used to influence the deformation 614.
  • the invention provides for several different kinds of deformation which may be optionally applied to the best-fitting reference avatar in order to improve its correspondence with the target object.
  • the deformations may include real-time deformation without rigid motions in which a closed form expression is found for the deformation, as described above using equations 18, 19.
  • a diffeomorphic deformation of the avatar with no rigid motions may be applied (equations 22-24).
  • a real-time deformation with unknown rigid motion of the avatar may be deployed (equations 28, 29).
  • a real-time diffeomorphic deformation may be applied to the avatar by iterating the real-time deformation.
  • the avatar may be deformed using affine motions (equations 30, 31).
  • the deformation of the avatar may be guided by matching a projection to large numbers of feature items in the source data, including the identification of submanifolds within the avatar, as described above with reference to equation 37.
  • the deformations described above may be applied to each articulated component separately.
  • the invention enables camera parameters, such as aspect ratio and field of view, to be estimated, as shown in step 618 and described above with reference to equations 44-49.
  • the invention is not limited to the matching of faces, but may be used for matching any multifeatured object using a database of reference 3D representations that correspond to the generic type of the target object to be matched.

Abstract

A method and system for characterizing features in a source multifeatured three-dimensional object, and for locating a best-matching three-dimensional object from a reference database of such objects by performing a viewpoint invariant search among the reference objects. The invention further includes the creation of a three-dimensional representation of the source object by deforming a reference object.

Description

VIEWPOINT-INVARIANT IMAGE MATCHING AND GENERATION OF THREE-DIMENSIONAL MODELS FROM TWO-DIMENSIONAL IMAGERY
RELATED APPLICATIONS
This application claims priority to and the benefits of U.S. Provisional Applications Serial Nos. 60/452,429, 60/452,430 and 60/452,431 filed on March 6, 2003 (the entire disclosures of which are hereby incorporated by reference).
FIELD OF THE INVENTION
The present invention relates to object modeling and matching systems, and more particularly to the generation of a three-dimensional model of a target object from two- and three-dimensional input.
BACKGROUND OF THE INVENTION
In many situations, it is useful to construct a three-dimensional (3D) model of an object when only a partial description of the object is available. In a typical situation, one or more two-dimensional (2D) images of the 3D object may be available, perhaps photographs taken from different viewpoints. A common method of creating a 3D model of a multi-featured object is to start with a base 3D model which describes a generic or typical example of the type of object being modeled, and then to add texture to the model using one or more 2D images of the object. For example, if the multi-featured object is a human face, a 3D "avatar" (i.e., an electronic 3D graphical or pictorial representation) would be generated by using a pre-existing, standard 3D model of a human face, and mapping onto the model a texture from one or more 2D images of the face. See U.S. Patent No. 6,532,011 B1 to Francini et al., and U.S. Patent No. 6,434,278 B1 to Hashimoto. The main problem with this approach is that the 3D geometry is not highly defined or tuned for the actual target object which is being generated.
A common variant of the above approach is to use a set of 3D base models and select the one that most resembles the target object before performing the texture-mapping step. Alternatively, a single parameterized base model is used, and the parameters of the model are adjusted to best approximate the target. See U.S. Patent No. 6,556,196 B1 to Blanz et al. These methods serve to refine the geometry to make it fit the target, at least to some extent. However, for any target object with a reasonable range of intrinsic variability, the geometry of the model will still not be well tuned to the target. This lack of geometric fit will detract from the verisimilitude of the 3D model to the target object.
Conventional techniques typically also require that the 2D images being used for texturing the model be acquired from known viewpoints relative to the 3D object being modeled. This usually limits the use of such approaches to situations where the model is being generated in a controlled environment in which the target object can be photographed. Alternatively, resort may be had to human intervention to align 2D images to the 3D model to be generated. See U.S. Patent Publication No. 2002/0012454 to Liu et al. This manual step places a severe limit on the speed with which a 3D model can be generated from 2D imagery. Accordingly, a need exists for an automated approach that systematically makes use of available 2D source data for a 3D object to synthesize an optimal 3D model of the object.
SUMMARY OF THE INVENTION
The present invention provides an automated method and system for generating an optimal 3D model of a target multifeatured object when only partial source data describing the object is available. The partial source data often consists of one or more 2D projections of the target object or an obscuration of a single projection, but may also include 3D data, such as from a 3D camera or scanner. The invention uses a set of reference 3D representations that span, to the extent practicable, the variations of the class of objects to which the target object belongs. The invention may automatically identify feature items common to the source data and to the reference representations, and establish correspondences between them. For example, if the target object is a face, the system may identify points at the extremities of the eyes and mouth, or the nose profile, and establish correspondences between such features in the source data and in the reference representations. Manual identification and matching of feature items can also be incorporated if desired. Next, all possible positions (i.e., orientations and translations) for each 3D reference representation are searched to identify the position and reference representation combination whose projection most closely matches the source data. The closeness of match is determined by a measure such as the minimum mean-squared error (MMSE) between the feature items in the projection of the 3D representation and the corresponding feature items in the source projection. A comparison is performed in 3D between the estimated deprojected positions of the feature items from the 2D source projection and the corresponding feature items of the 3D representation. The closest-fitting 3D reference representation may then be deformed to optimize the correspondence with the source projection. Each point in the mesh which defines the geometry of the 3D representation is free to move during the deformation.
The search for the best-fitting position (i.e., orientation and translation) is repeated using the deformed 3D representation, and the deformation and search may be repeated iteratively until convergence occurs or terminated at any time.
Thus the geometry of the 3D model is tailored to the target object in two ways. First, when more than one reference representation is available, the selection of the best-fitting reference representation from a set of references enables the optimal coarse-grain choice to be made. Second, deformation enables fine-scale tuning in which errors introduced by inaccurate choice of viewpoint are progressively reduced by iteration. The invention requires no information about the viewpoint from which the 2D source projection was captured, because a search is performed over all possible viewpoints, and the viewpoint is taken to be that which corresponds to the closest fit between the projected 3D representation and the 2D source data. In a first aspect, the invention comprises a method of comparing at least one source 2D projection of a source multifeatured object to a reference library of 3D reference objects. In accordance with the method, a plurality of reference 3D representations of generically similar multifeatured objects is provided, and a viewpoint-invariant search of the reference 3D representations is performed to locate the reference 3D representation having a 2D projection most resembling the source projection(s). In some embodiments, resemblance is determined by a degree of alignment between feature items in the 3D representation and corresponding feature items in the source 2D projection(s). Each reference 3D representation may be searched over a range of possible 2D projections of the 3D representation without actually generating any projections. The search over a range of possible 2D projections may comprise computing a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections. The rigid motions may comprise pitch, roll, yaw, and translation in three dimensions.
Automatic camera calibration may be performed by estimation of camera parameters, such as aspect ratio and field of view, from image landmarks.
In some embodiments, the optimum rigid motion may be determined by estimating a conditional mean pose or geometric registration as it relates to feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D representation, such that the feature items are projectionally consistent with feature items in the source 2D projection(s). MMSE estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D representation are generated. The rigid motion may be constrained by known 3D position information associated with the source 2D projection(s).
In some embodiments, the feature items may include curves as well as points, which are extracted from the source projection using dynamic programming. Further, areas as well as surfaces and/or subvolumes may be used as features, generated via isocontouring (such as via the Marching Cubes algorithm) or automated segmentation algorithms. The feature items used in the matching process may be found automatically by using correspondences between the 2D source projection(s) and projected imagery of at least one reference 3D object.
The invention may further comprise the step of creating a 3D representation of the source 2D projection(s) by deforming the located (i.e., best-fitting) reference 3D representation so as to resemble the source multifeatured object. In one embodiment, the deformation is a large deformation diffeomorphism, which serves to preserve the geometry and topology of the reference 3D representation. The deformation step may deform the located 3D representation so that feature items in the source 2D projection(s) align with corresponding features in the located reference 3D representation. The deformation step may occur with or without rigid motions and may include affine motions. Further, the deformation step may be constrained by at least one of known 3D position information associated with the source 2D projection(s) and 3D data of the source object. The deformation may be performed using a closed-form expression.
In a second aspect, the invention comprises a system for comparing at least one source 2D projection of a source multifeatured object to a reference library of 3D reference objects. The system comprises a database comprising a plurality of reference 3D representations of generically similar multifeatured objects and an analyzer for performing a viewpoint-invariant search of the reference 3D representations to locate the reference 3D representation having a 2D projection most resembling the source projection(s). In some embodiments, the analyzer determines resemblance by a degree of alignment between feature items in the 3D representation and corresponding feature items in the source 2D projection(s). The analyzer may search each reference 3D representation over a range of possible 2D projections of the 3D representation without actually generating any projections. In some embodiments, the analyzer searches over a range of possible 2D projections by computing a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections. The rigid motions may comprise pitch, roll, yaw, and translation in three dimensions. The analyzer may be configured to perform automatic camera calibration by estimating camera parameters, such as aspect ratio and field of view, from image landmarks. In some embodiments, the analyzer is configured to determine the optimum rigid motion by estimating a conditional mean of feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D representation, such that the feature items are projectionally consistent with feature items in the source 2D projection(s). The analyzer is further configured to generate MMSE estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D representation.
The rigid motion may be constrained by known 3D position information associated with the source 2D projection(s).
In some embodiments, the analyzer is configured to extract feature items from the source projection using dynamic programming. In further embodiments, the analyzer may be configured to find feature items used in the matching process automatically by using correspondences between source imagery and projected imagery of at least one reference 3D object.
The invention may further comprise a deformation module for creating a 3D representation of the at least one source 2D projection by deforming the located (i.e., best-fitting) reference 3D representation so as to resemble the source multifeatured object. In one embodiment, the deformation module deforms the located reference 3D representation using a large deformation diffeomorphism, which serves to preserve the geometry and topology of the reference 3D representation. The deformation module may deform the located 3D representation so that feature items in the source 2D projection(s) align with corresponding features in the located reference 3D representation. The deformation module may or may not use rigid motions and may use affine motions. Further, the deformation module may be constrained by at least one of known 3D position information associated with the source 2D projection(s) and 3D data of the source object. The deformation module may operate in accordance with a closed-form expression.
In a third aspect, the invention comprises a method of comparing a source 3D object to at least one reference 3D object. The method involves creating 2D representations of the source object and the reference object(s) and using projective geometry to characterize a correspondence between the source 3D object and a reference 3D object. For example, the correspondence may be characterized by a particular viewpoint for the 2D representation of the 3D source object.
In a fourth aspect, the invention comprises a system for comparing a source 3D object to at least one reference 3D object. The system comprises a projection module for creating 2D representations of the source object and the reference object(s) and an analyzer which uses projective geometry to characterize a correspondence between the source 3D object and a reference 3D object.
In a fifth aspect, the above-described methods and systems are used for the case in which the 3D object is a face and the reference 3D representations are avatars.
In a sixth aspect, the invention comprises a method for creating a 3D representation from at least one source 2D projection of a source multifeatured object. In accordance with the method, at least one reference 3D representation of a generically similar object is provided, one of the provided representation(s) is located, and a 3D representation of the source 2D projection(s) is created by deforming the located reference representation in accordance with the source 2D projection(s) so as to resemble the source multifeatured object. In some embodiments, the source 2D projection(s) is used to locate the reference representation. In further embodiments, the set of reference representations includes more than one member, and the reference most resembling the source 2D projection(s) is located by performing a viewpoint-invariant search of the set of reference representations, without necessarily actually generating any projections. The search may include computing a rigid motion of the reference representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the source projections.
In a preferred embodiment, a 3D representation of the source projection(s) is created by deforming the located reference representation so as to resemble the source multifeatured object. The deformation may be a large deformation diffeomorphism. In some embodiments, the deformation deforms the located reference so that feature items in the source projection(s) align with corresponding feature items in the located 3D reference representation. In some embodiments, the deformation is performed in real time. In a seventh aspect, the invention comprises a system for creating a 3D representation from at least one source 2D projection of a source multifeatured object. The system includes a database of at least one reference 3D representation of a generically similar object, and an analyzer for locating one of the provided representation(s). The system further includes a deformation module for creating a 3D representation of the source 2D projection(s) by deforming the located reference representation in accordance with the source 2D projection(s) so as to resemble the source multifeatured object. In some embodiments, the analyzer uses the source 2D projection(s) to locate the reference representation. In further embodiments, the set of reference representations includes more than one member, and the analyzer locates the reference most resembling the source 2D projection(s) by performing a viewpoint-invariant search of the set of reference representations, without necessarily actually generating any projections. The search may include computing a rigid motion of the reference representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the source projections. In a preferred embodiment, the deformation module creates a 3D representation of the source projection(s) by deforming the located reference representation so as to resemble the source multifeatured object. The deformation may be a large deformation diffeomorphism.
In some embodiments, the deformation module deforms the located reference so that feature items in the source projection(s) align with corresponding feature items in the located 3D reference representation. In some embodiments, the deformation module operates in real time.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:
Figure 1 schematically illustrates the various components of the invention, starting with the target object and the reference objects, and yielding an optimal 3D model after a search and deformation are performed.
Figures 2A, 2B, and 2C schematically illustrate the components of a 3D avatar.

Figure 3 schematically illustrates the matching of feature items in the 2D imagery.
Figure 4 is a block diagram showing a representative hardware environment for the present invention.
Figure 5 is a block diagram showing components of the analyzer illustrated in Figure 4.
Figure 6 is a block diagram showing the key functions performed by the analyzer.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Figure 1 illustrates the basic operation of the invention in the case where the 3D target multifeatured object is a face and the set of reference 3D representations are avatars. The matching process starts with a set of reference 3D avatars which represent, to the extent practicable, the range of different types of heads to be matched. For example, the avatars may include faces of men and women, faces with varying quantities and types of hair, faces of different ages, and faces representing different races. Typically the reference set includes numerous (e.g., several hundred or more) avatars, though the invention works with as few as one reference object, and with as many as storage space and computation time allow. In some situations, only a single reference avatar will be used. This case may arise, for example, when the best-fitting avatar has been selected manually or by some other means, or when only one reference avatar is available. In Figure 1, the source data of the target face is illustrated as a single 2D photograph of the target 3D face taken from an unknown viewpoint. First, selected features to be used for matching are identified in the source photograph. The features may be points, such as the extremities of the mouth and eyes; curves, such as a profile, an eyebrow, or another distinctive curve; or subareas, such as an eyebrow, nose, or cheek. The corresponding features are identified in the reference avatars. The selection of features in the target photograph and the identification of corresponding features in the reference avatars may be done automatically according to the invention. Next, a viewpoint-invariant search is conducted in which each 3D avatar is notionally subjected to all possible rigid motions and its features are projected into 2D. The positions of the projected avatar features are compared to the feature positions in the target photograph.
The avatar for which a particular rigid motion provides the closest fit between projected features and those of the source photograph is selected as the best reference avatar. In Figure 1, the middle avatar is illustrated as the best-fitting reference.
Next, the best reference avatar is deformed to match the target photograph more closely. First, the features of the photograph are reverse projected to the coordinates of the best reference avatar in the orientation and position corresponding to the best match. The mesh points of the avatar are then deformed in 3D to minimize the distances between the reverse-projected features of the photograph and the corresponding avatar features. The avatar resulting from this deformation will be a closer approximation to the target 3D face. The rigid motion search and deformation steps may be repeated iteratively, e.g., until the quality of fit no longer improves appreciably. The resulting 3D model is the optimal match to the target face.
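The iterative refinement just described (a rigid-motion search followed by a deformation, repeated until the fit no longer improves appreciably) can be sketched as a generic alternating loop. The function names and the toy one-dimensional stand-ins below are illustrative assumptions, not the patent's actual algorithms.

```python
# Sketch of the alternating refinement loop: rigid-motion search, then
# deformation, repeated until the fit stops improving. The fit/deform
# callables here are trivial stand-ins, not the patent's algorithms.

def refine(avatar, target_features, fit_rigid, deform, tol=1e-6, max_iter=20):
    """Alternate rigid-motion fitting and deformation until convergence."""
    best_cost = float("inf")
    for _ in range(max_iter):
        pose, cost = fit_rigid(avatar, target_features)   # viewpoint search
        avatar = deform(avatar, target_features, pose)    # 3D deformation step
        if best_cost - cost < tol:                        # no appreciable gain
            break
        best_cost = cost
    return avatar, best_cost

# Toy 1D stand-ins: the "avatar" is a number, the pose is ignored, and each
# deformation halves the residual to the target.
target = 10.0
fit = lambda a, t: (None, abs(t - a))
step = lambda a, t, p: a + 0.5 * (t - a)
model, cost = refine(0.0, target, fit, step)
```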
The invention can be used effectively even when the source imagery includes only a part of the target face, or when the target face is partially obscured, such as, for example, by sunglasses or facial hair. The approach of the invention is suitable for any multifeatured object, such as faces, animals, plants, or buildings. For ease of explanation, however, the ensuing description will focus on faces as an exemplary (and non-limiting) application.
Figures 2A, 2B, and 2C show the components of a representative avatar. In one embodiment of the invention, the geometry of the avatar is represented by a mesh of points in 3D which are the vertices of a set of triangular polygons approximating the surface of the avatar. Figure 2A illustrates a head-on view 202 and a side view 204 of the triangular polygon representation. In one representation, each vertex is given a color value, and each triangular face may be colored according to an average of the color values assigned to its vertices. The color values are determined from a 2D texture map 206, illustrated in Figure 2B, which may be derived from a photograph. In Figure 2C, the final avatar with texture is illustrated in a head-on view 208 and side view 210. The avatar is associated with a coordinate system which is fixed to it, and is indexed by three angular degrees of freedom (pitch, roll, and yaw) and three translational degrees of freedom of the rigid body center in three-space. In addition, individual features of the avatar, such as the chin, teeth, and eyes, may have their own local coordinates (e.g., a chin axis) which form part of the avatar description. The present invention may be equally applied to avatars for which a different data representation is used. For example, texture values may be represented as RGB values, or using other color representations, such as HSL. The data representing the avatar vertices and the relationships among the vertices may vary. For example, the mesh points may be connected to form non-triangular polygons representing the avatar surface.
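A minimal sketch of such an avatar data structure, with hypothetical field names: vertices in 3D, triangles as vertex-index triples, per-vertex RGB colors, and a triangle color computed as the average of its vertex colors, as described above.

```python
# Minimal stand-in for the avatar geometry described above. Field names are
# illustrative only; a production mesh would also carry texture coordinates,
# local feature frames, and the rigid-body pose parameters.

class AvatarMesh:
    def __init__(self, vertices, triangles, vertex_colors):
        self.vertices = vertices            # list of (x, y, z) points
        self.triangles = triangles          # list of (i, j, k) vertex indices
        self.vertex_colors = vertex_colors  # list of (r, g, b) per vertex

    def triangle_color(self, t):
        """Average the RGB values of a triangle's three vertices."""
        i, j, k = self.triangles[t]
        cs = (self.vertex_colors[i], self.vertex_colors[j], self.vertex_colors[k])
        return tuple(sum(c[axis] for c in cs) / 3.0 for axis in range(3))

mesh = AvatarMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    triangles=[(0, 1, 2)],
    vertex_colors=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
)
```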
The invention may include a conventional rendering engine for generating 2D imagery from a 3D avatar. The rendering engine may be implemented in OpenGL, or in any other 3D rendering system, and allows for the rapid projection of a 3D avatar into a 2D image plane representing a camera view of the 3D avatar. The rendering engine may also include the specification of the avatar lighting, allowing for the generation of 2D projections corresponding to varying illumination of the avatar. Lighting corresponding to a varying number of light sources of varying colors, intensities, and positions may be generated.
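The core of such a rendering step, the projection of avatar vertices onto a 2D image plane along the z-axis, can be sketched as follows. The convention p = (alpha*x/z, alpha*y/z) follows the projective mapping used later in this document; alpha is an assumed free parameter, and lighting and rasterization are omitted.

```python
# Sketch of projecting avatar vertices into a 2D image plane along the z-axis,
# following the document's convention p = (alpha*x/z, alpha*y/z). The scale
# parameter alpha is an assumption; points at z == 0 are skipped.

def project(vertices, alpha=1.0):
    """Project 3D points onto the image plane along the z-axis."""
    return [(alpha * x / z, alpha * y / z) for (x, y, z) in vertices if z != 0]

pts2d = project([(1.0, 2.0, 2.0), (0.0, -1.0, 1.0)], alpha=2.0)
```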
The feature items in the 2D source projection which are used for matching are selected by hand or via automated methods. These items may be points or curves. When the source projection includes a front view, suitable points may be inflection points at the lips, points on the eyebrows, points at the extremities of the eyes, or extremities of the nostrils, and suitable curves may include an eyebrow or lip. When the source projection includes a side view, feature points corresponding to the profile are used and may include the tip of the nose or chin. Suitable feature item curves may include distinct parts of the profile, such as the nose, forehead, or chin. When the feature items are determined manually, a user interface is provided which allows the user to identify feature points individually, to mark groups of points delineated by a spline curve, or to select a set of points forming a line. The automated detection of feature items on the 2D source projection is performed by searching for specific features of a face, such as eyeballs, nostrils, and lips. As understood by those of ordinary skill in the art, the approach may use Bayesian classifiers and decision trees in which hierarchical detection probes are built from training data generated from actual avatars. The detection probes are desirably stored at multiple pixel scales so that specific parameters, such as the orientation of a feature, are only computed on finer scales if the larger-scale probes yield a positive detection. The feature detection probes may be generated from image databases representing large numbers of individuals who have had their features demarcated and segregated so that the detection probes become specifically tuned to these features. The automated feature detection approach may use pattern classification, Bayes nets, neural networks, or other known techniques for determining the location of features in facial images.
The automated detection of curve features in the source projection may use dynamic programming to generate curves from a series of points, reducing the amount of computation required to identify an optimal curve that maximizes a sequentially additive cost function. Such a cost function represents a sequence of features, such as the contrast of the profile against the background, the darkness of an eyebrow, or the crease between the lips. A path of N points can be thought of as consisting of a starting node x_0 and a set of vectors v_0, v_1, \ldots, v_{N-1} connecting neighboring nodes. The nodes comprising this path are defined as x_i = \sum_{j=0}^{i-1} v_j + x_0.
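The dynamic-programming curve search described here can be sketched as follows, assuming a sequentially additive step cost cost[i][j] between nodes (an illustrative stand-in for the feature-based cost such as edge contrast):

```python
# Sketch of the dynamic-programming path search: the best path of a given
# length over K nodes is found in about length*K^2 operations rather than
# K^length, because the additive cost lets sub-optimal prefixes be discarded.
# cost[i][j] is the assumed gain of stepping from node i to node j.

def max_cost_path(cost, length):
    K = len(cost)
    best = [0.0] * K          # best total cost of a path ending at each node
    back = []                 # backpointers, one table per step
    for _ in range(length - 1):
        prev = best[:]
        step = [0] * K
        for j in range(K):
            scores = [prev[i] + cost[i][j] for i in range(K)]
            step[j] = max(range(K), key=scores.__getitem__)
            best[j] = scores[step[j]]
        back.append(step)
    end = max(range(K), key=best.__getitem__)
    path = [end]
    for step in reversed(back):            # trace the backpointers
        path.append(step[path[-1]])
    return list(reversed(path)), best[end]

cost = [[0, 5, 1],
        [1, 0, 4],
        [2, 2, 0]]
path, total = max_cost_path(cost, 3)
```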
Rather than searching over all paths of length N, dynamic programming may be used to generate maximum (or minimum) cost paths. This reduces the complexity of the algorithm from K^N to NK^2, where N is the length of a path and K is the total number of nodes, because dynamic programming takes advantage of the fact that the cost is sequentially additive, allowing a host of sub-optimal paths to be ignored. Dynamic programming techniques and systems are well-characterized in the art and can be applied as discussed herein without undue experimentation.

Next, the 3D rotation and translation from the avatar coordinates to the source projection is determined. This corresponds to finding the viewpoint from which the source projection was captured. In preferred embodiments, this is achieved by calculating the position of the avatar in 3D space that best matches the set of selected feature items in the 2D source projection. Generally, these feature items will be points, curves, or subareas, and the source projection will be a photograph on which the positions of these items can be measured, either manually or automatically. The position calculation may be based on the computation of the conditional mean estimate of the reverse projection positions in 3D of the 2D feature items, followed by the computation of MMSE estimates for the rotation and translation parameters in 3D, given the estimates of the 3D positions of the feature items. Since position in 3D space is a vector parameter, the MMSE estimate for the translation is closed form; when substituted back into the squared error function, it gives an explicit function in terms of the rotations only. Since the rotations are not vector parameters, they may be calculated using non-linear gradient descent through the tangent space of the group or via local representation using the angular velocities of the skew-symmetric matrices.
In addition to or in lieu of the least-squares or weighted least-squares techniques described herein, the distance metrics used to measure the quality of fit between the reverse projections of feature items from the source imagery and corresponding items in the 3D avatar may be, for example, Poisson or other distance metrics, which may or may not satisfy the triangle inequality.
If feature items measured in 3D are available, such as actual 3D source data from 3D cameras or scanners, the feature item matching may be performed directly, without the intermediate step of calculating the conditional mean estimate of the deprojected 2D features. The cost function used for positioning the 3D avatar can be minimized using algorithms such as closed-form quadratic optimization, iterative Newton descent, or gradient methods.
The 3D positioning technique is first considered without deformation of the reference avatar. In the following, a 3D reference avatar is referred to as a CAD (computer-aided design) model, or by the symbol CAD. The set of features x_j = (x_j, y_j, z_j), j = 1, \ldots, N is defined on the CAD model. The projective geometry mapping is defined as either positive or negative z, i.e., projection occurs along the z-axis. In all the projective geometry, p_j = (\frac{\alpha x_j}{z_j}, \frac{\alpha y_j}{z_j}) (for negative z-axis projection) or p_j = (\frac{\alpha x_j}{n + z_j}, \frac{\alpha y_j}{n + z_j}) (for positive z-axis projection) is the projected position of the point x_j, where \alpha is the projection angle and n is the cotangent of the projective angle. Let the rigid transformation be of the form A = (O, b): x \mapsto Ox + b, centered around x_c = 0. For positive (i.e., z > 0) mapping and n = 0,

p_j = \left( \frac{\alpha x_j}{z_j}, \frac{\alpha y_j}{z_j} \right), \quad j = 1, \ldots, N. \quad (Equation 1)

The following data structures are defined throughout:

\tilde{P}_j = (p_{j1}, p_{j2}, 1)', \quad Q_j = I - \frac{\tilde{P}_j \tilde{P}_j'}{\|\tilde{P}_j\|^2}, \quad X_j = \begin{pmatrix} x_j' & 0 & 0 \\ 0 & x_j' & 0 \\ 0 & 0 & x_j' \end{pmatrix}, \quad (Equation 2)

with (\cdot)' the matrix transpose and I the 3 \times 3 identity matrix; X_j is the 3 \times 9 matrix satisfying O x_j = X_j o, where o denotes the 9-vector of entries of O. For negative (i.e., z < 0) mapping, p_j = (\frac{\alpha x_j}{z_j}, \frac{\alpha y_j}{z_j}), and the change \tilde{P}_j = (p_{j1}, -p_{j2}, 1)' is made.
The basis vectors Z_1, Z_2, Z_3 at the tangent to the 3 \times 3 rotation element O are defined as:

Z_1 = l_1 o^{old} = [o_{21}, o_{22}, o_{23}, -o_{11}, -o_{12}, -o_{13}, 0, 0, 0]' \quad (Equation 3)
Z_2 = l_2 o^{old} = [o_{31}, o_{32}, o_{33}, 0, 0, 0, -o_{11}, -o_{12}, -o_{13}]' \quad (Equation 4)
Z_3 = l_3 o^{old} = [0, 0, 0, o_{31}, o_{32}, o_{33}, -o_{21}, -o_{22}, -o_{23}]' \quad (Equation 5)

where the l_j are the fixed 9 \times 9 matrices effecting these permutations and sign changes of the entries of o^{old} (Equation 6).
The reverse projection of feature points from the 2D projection may now be performed. Given the feature points p_j = (p_{j1}, p_{j2}), j = 1, 2, \ldots in the image plane, the minimum-norm estimates for z_j given O, b are

\bar{z}_j = \frac{\langle O x_j + b, \tilde{P}_j \rangle}{\|\tilde{P}_j\|^2},

and the MMSE \hat{O}, \hat{b} satisfies

\min_{z, O, b} \sum_{i=1}^{N} \| O x_i + b - z_i \tilde{P}_i \|^2 = \min_{O, b} \sum_{i=1}^{N} (O x_i + b)' Q_i (O x_i + b). \quad (Equation 7)
During the process of matching a source image to reference avatars, there may be uncertainty in the determined points x_i, implying that cost matching is performed with a covariance variability structure built into the formula. In this case, the norm contains a 3 \times 3 matrix which represents this variability. The optimum rotation and translation may next be estimated from feature points. Given the projective points p_j, j = 1, 2, \ldots, the rigid transformation has the form O, b: x \mapsto Ox + b (centered around x_c = 0). Then for positive (z > 0) mapping and n = 0, p_j = (\frac{\alpha x_j}{z_j}, \frac{\alpha y_j}{z_j}), and

\min_{z, O, b} \sum_{i=1}^{N} \| O x_i + b - z_i \tilde{P}_i \|^2 = \min_{O, b} \sum_{i=1}^{N} (O x_i + b)' Q_i (O x_i + b). \quad (Equation 8)

The optimum translation/rotation solutions are preferably generated as follows. Compute the 3 \times 9 matrices M_i = X_i - (\sum_{n=1}^{N} Q_n)^{-1} \sum_{n=1}^{N} Q_n X_n and evaluate the cost function exhaustively, choosing the minimizing O and computing the translation \hat{b} = -(\sum_{i=1}^{N} Q_i)^{-1} \sum_{i=1}^{N} Q_i \hat{O} x_i at the minimum O attained, for example, via brute-force search over the orthogonal group (which may, for example, be parameterized by pitch, roll, and yaw) or by running the gradient search algorithm to convergence as follows.

Brute force: \hat{O} = \arg\min_{O} o' \left( \sum_{i=1}^{N} M_i' Q_i M_i \right) o; \quad (Equation 9)

Gradient: O^{new} = e^{\sum_{j=1}^{3} \alpha_j^{new} Z_j} O^{old}, \quad \alpha_j^{new} = \left\langle 2 \left( \sum_{i=1}^{N} M_i' Q_i M_i \right) o^{old}, Z_j \right\rangle, \quad j = 1, 2, 3. \quad (Equation 10)
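A numerical sketch of this pose estimate, under the reconstructed definitions P~_i = (p_i1, p_i2, 1)' and Q_i = I - P~_i P~_i'/|P~_i|^2: the rotation is brute-forced (here over yaw only, an assumed simplification of the full orthogonal-group search), with the closed-form translation evaluated at each candidate rotation.

```python
# Sketch of the brute-force pose search: at each candidate rotation O, the
# closed-form translation b = -(sum Q_i)^{-1} sum Q_i O x_i is substituted
# into the cost of Equation 8. Searching yaw only is a simplification.
import numpy as np

def pose_cost(O, xs, Qs):
    Qsum_inv = np.linalg.inv(sum(Qs))
    b = -Qsum_inv @ sum(Q @ (O @ x) for Q, x in zip(Qs, xs))
    cost = sum(float((O @ x + b) @ Q @ (O @ x + b)) for Q, x in zip(Qs, xs))
    return cost, b

def fit_pose(xs, ps, n_angles=360):
    Ps = [np.array([p[0], p[1], 1.0]) for p in ps]            # P~_i
    Qs = [np.eye(3) - np.outer(P, P) / (P @ P) for P in Ps]   # Q_i
    best = None
    for k in range(n_angles):                  # brute force over yaw
        t = 2 * np.pi * k / n_angles
        O = np.array([[np.cos(t), -np.sin(t), 0],
                      [np.sin(t),  np.cos(t), 0],
                      [0.0, 0.0, 1.0]])
        c, b = pose_cost(O, xs, Qs)
        if best is None or c < best[0]:
            best = (c, O, b)
    return best

# Synthetic check: rotate by 30 degrees, translate, project with p = (x/z, y/z).
t = np.pi / 6
O_true = np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1.0]])
b_true = np.array([0.1, -0.2, 5.0])
xs = [np.array(v, float) for v in [(1, 0, 1), (0, 1, 2), (-1, 1, 1), (0.5, -0.5, 3)]]
ps = []
for x in xs:
    y = O_true @ x + b_true
    ps.append((y[0] / y[2], y[1] / y[2]))
cost, O_est, b_est = fit_pose(xs, ps)
```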
In a typical application, some information about the position of the object in 3D space is known. For example, in a system which takes a succession of photographs of a moving source object, such as a tracking system, the position from a previous image may be available. The invention may incorporate this information into the matching process as follows. Given a sequence of points p_i, i = 1, \ldots, N and a rigid transformation of the form A = (O, b): x \mapsto Ox + b (centered around x_c = 0), the MMSE of rotation and translation \hat{O}, \hat{b} satisfies:

\min_{z, O, b} \sum_{i=1}^{N} \| O x_i + b - z_i \tilde{P}_i \|^2 + (b - \mu)' \Sigma^{-1} (b - \mu) = \min_{O, b} \sum_{i=1}^{N} (O x_i + b)' Q_i (O x_i + b) + (b - \mu)' \Sigma^{-1} (b - \mu). \quad (Equation 11)

The 3 \times 9 matrices M_i and the 3 \times 1 column vector N are computed:

Q_\Sigma = \sum_{n=1}^{N} Q_n + \Sigma^{-1}, \quad M_i = X_i - Q_\Sigma^{-1} \sum_{n=1}^{N} Q_n X_n, \quad N = Q_\Sigma^{-1} \Sigma^{-1} \mu. \quad (Equation 12)

The translation is then determined as \hat{b} = -Q_\Sigma^{-1} (\sum_{n=1}^{N} Q_n X_n) \hat{o} + Q_\Sigma^{-1} \Sigma^{-1} \mu at the minimum \hat{O}, which is obtained by exhaustive search or by the gradient algorithm run until convergence:

Brute force: \hat{O} = \arg\min_{O} \sum_{i=1}^{N} (M_i o + N)' Q_i (M_i o + N); \quad (Equation 13)

Gradient: O^{new} = e^{\sum_{j=1}^{3} \alpha_j^{new} Z_j} O^{old}, \quad \alpha_j^{new} = \left\langle 2 \sum_{i=1}^{N} M_i' Q_i (M_i o^{old} + N), Z_j \right\rangle, \quad j = 1, 2, 3, \quad (Equation 14)

with the projection onto the basis vectors Z_1, Z_2, Z_3 of Equations 3-5 defined at the tangent to O^{old} in the exponential representation, where the \alpha_j^{new} are the directional derivatives of the cost.
The rotation/translation data may be indexed in many different ways. For example, to index according to the rotation around the center of the object, rather than fixed external world coordinates, the coordinates are simply reparameterized by defining x \leftarrow x - x_c. All of the techniques described herein remain the same.
The preferred 3D algorithm for rigid motion efficiently changes states of geometric pose for comparison to the measured imagery. The preferred 3D algorithm for diffeomorphic transformation of geometry matches the geometry to target 2D image features. It should be understood, however, that other methods of performing the comparison of the 3D representation to the source imagery may be used, including those that do not make use of specific image features.
Once the rigid motion (i.e., rotation and translation) that results in the best fit between 2D source imagery and a selected 3D avatar is determined, the 3D avatar may be deformed in order to improve its correspondence with the source imagery. The allowed deformations are generally limited to diffeomorphisms of the original avatar. This serves to preserve the avatar topology, guaranteeing that the result of the deformation will be a face. The deformations may also enforce topological constraints, such as the symmetry of the geometry. This constraint is especially useful in situations where parts of the source object are obscured and the full geometry is inferred from partial source information. Figures 3A and 3B illustrate the effect of avatar deformation on the matching of the avatar to the source imagery. In Figure 3A, feature points are shown as black crosses on the source image 302. An example is the feature point at the left extremity of the left eyebrow 304. The projections of the corresponding feature points belonging to the best-matching reference avatar, with optimal rigid motion prior to deformation, are shown as white crosses. It can be seen that the projected point corresponding to the left extremity of the left eyebrow 306 is noticeably displaced from its counterpart 304. In Figure 3B, the same source image 302 is shown with feature points again indicated by black crosses. This time, the best-fitting avatar feature points, shown as white crosses, are projected after deformation. The correspondence between source feature points and avatar feature points is markedly improved, as shown, for example, by the improved proximity of the projected left eyebrow feature point 308 to its source counterpart 304.
The 3D avatar diffeomorphism calculation starts with the initial conditions for placement of the avatar determined by the feature item detection and the computation of the best-fitting rigid motion, together with the original geometry of the avatar. It then proceeds by allowing all of the points on the avatar to move independently according to a predefined formula so as to minimize the distance between the deformed avatar points in 3D and the conditional mean estimates of the 2D landmark points reverse projected to the 3D coordinates. Once this diffeomorphism is calculated, the 3D landmark rigid motion algorithm is applied again to the source projections and feature items to find the best estimate of the camera positions given the newly transformed avatar with its new vertex positions. Subsequently, a new diffeomorphism is generated, and this process is continued until it converges. Alternatively, iteration may be omitted, with the rigid motion calculation being performed only a single time and just one diffeomorphism transformation applied. In the case where the camera orientations (i.e., the viewpoints of the measured source projections) are known precisely, these can be used as fixed inputs to the calculation, with no rigid transformation required. When the measured sets of feature items are in 3D, such as from a cyber scan or 3D camera observations of the candidate head, the avatar is transformed onto the candidate sets of points directly, without any intermediate generation of candidate points in 3D space via the conditional mean algorithm for generating 3D points from 2D sets of points.
The diffeomorphic deformation of an avatar proceeds as follows. Given the set of feature items x_j = (x_j, y_j, z_j), j = 1, \ldots, N defined on the CAD model, with the projective geometry mapping parameterized by \alpha_1 and \alpha_2, where n is the cotangent of the projection angle and w, h are the aspect-ratio width and height, the mapping is (x, y, z) \mapsto p(x, y, z) = (\frac{\alpha_1 x}{z}, \frac{\alpha_2 y}{z}), with observations of the feature items through the projective geometry p_j = (p_{j1}, p_{j2}). The goal is to construct the deformation of the CAD model x \to x + u(x), x \in CAD, with unknown camera rigid motions corresponding to the measured projective image feature items. The projective points \tilde{P}_n^{(v)} and matrices Q_n^{(v)} for each orientation v = 1, \ldots, V (Equation 16) and the smoothing matrices K = (K_{ij}) (Equation 17) are constructed, where, for example,

K_{ij} = diag(e^{-a \|x_i - x_j\|}, e^{-a \|x_i - x_j\|}, e^{-a \|x_i - x_j\|})

corresponds to the square-root inverse Laplacian operator L = diag(-\nabla^2 + c).
In one embodiment, the avatar may be deformed with small deformations only and no rigid motions. For this embodiment, it is assumed that the measured feature items are all points from a single camera view which generated the projected source image in which the feature points were measured. The goal is to construct the deformation of the CAD model, i.e., the mapping x \mapsto x + u(x), x \in CAD:

\min_{u, z} \| Lu \|^2 + \sum_{n=1}^{N} \| (x_n + u(x_n)) - z_n \tilde{P}_n \|^2 = \min_{u} \| Lu \|^2 + \sum_{n=1}^{N} (x_n + u(x_n))' Q_n (x_n + u(x_n)). \quad (Equation 18)

First, the transformation of the model x \mapsto x + u(x) with u(x) = \sum_{n=1}^{N} K(x_n, x) \beta_n is computed, where the stacked coefficients \beta = (\beta_1, \ldots, \beta_N) satisfy the linear system

(K^{-1} + Q) K \beta = -Q x, \quad (Equation 19)

with Q = diag(Q_1, \ldots, Q_N) and x the stacked feature points. Next, rigid motions are added and the following equation is solved for the optimizer:

\min_{u, z} \| Lu \|^2 + \sum_{n=1}^{N} \sum_{v=1}^{V} (O^{(v)}(x_n + u(x_n)) + b^{(v)})' Q_n^{(v)} (O^{(v)}(x_n + u(x_n)) + b^{(v)}). \quad (Equation 20)

The transformation of the model using the small deformation x \mapsto x + u(x) is computed, where u(x) = \sum_{n=1}^{N} K(x_n, x) \beta_n and the coefficients satisfy the analogous linear system with Q_n replaced by \sum_{v=1}^{V} O^{(v)\prime} Q_n^{(v)} O^{(v)} and -Q_n x_n replaced by -\sum_{v=1}^{V} O^{(v)\prime} Q_n^{(v)} (O^{(v)} x_n + b^{(v)}). \quad (Equation 21)
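A simplified, hedged version of the small-deformation spline step: with a Gaussian kernel and direct 3D landmark targets y_n in place of the projective terms, minimizing |Lu|^2 + sum |x_n + u(x_n) - y_n|^2 reduces per coordinate to the ridge system (K + I) beta = y - x. This is a standard landmark-spline solve illustrating the structure of the computation, not the patent's exact Equation 19.

```python
# Simplified landmark-spline solve: u(x_n) = sum_m K[n, m] beta[m], with the
# coefficients obtained from the ridge system (K + I) beta = y - x. The
# Gaussian kernel and its scale are illustrative assumptions.
import numpy as np

def gaussian_kernel(xs, scale=1.0):
    d2 = ((xs[:, None, :] - xs[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

def fit_small_deformation(xs, ys, scale=1.0):
    """Return the kernel matrix K and coefficients beta for u = K beta."""
    K = gaussian_kernel(xs, scale)
    beta = np.linalg.solve(K + np.eye(len(xs)), ys - xs)
    return K, beta

xs = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
ys = xs + np.array([[0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1], [0.1, 0.1, 0]])
K, beta = fit_small_deformation(xs, ys)
deformed = xs + K @ beta      # deformed landmarks move toward the targets
```

The regularization keeps the deformation smooth, so the targets are approached but not interpolated exactly; driving the ridge term to zero would recover exact landmark matching.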
In another embodiment, diffeomorphic deformations with no rigid motions of the avatar are applied. In the case that the change in shape of the face is extensive, the large deformation \phi: x \mapsto \phi(x) satisfying \phi = \phi_1, \phi_t = \int_0^t v_s(\phi_s(x)) \, ds + x, x \in CAD is generated. The deformation of the CAD model constructing the mapping x \mapsto \phi(x), x \in CAD is constructed:

\min_{v_t, t \in [0,1], z} \int_0^1 \| L v_t \|^2 \, dt + \sum_{n=1}^{N} \| \phi(x_n) - z_n \tilde{P}_n \|^2 = \min_{v_t} \int_0^1 \| L v_t \|^2 \, dt + \sum_{n=1}^{N} \phi(x_n)' Q_n \phi(x_n). \quad (Equation 22)

Using the initialization v^{new} = 0, \phi^{new}(x) = x, x \in CAD, mappings are repeatedly generated by running an iterative algorithm until convergence, in which the vector field v^{new} is updated from the current flow and the landmark residuals (Equation 23) and the mapping is regenerated from the new vector field:

\phi^{new}(x) = \int_0^1 v_t^{new}(\phi_t(x)) \, dt + x, \quad (Equation 24)

where D\phi_t denotes the Jacobian of the flow \phi_t. The addition of rigid motions to the large deformation x \mapsto \phi(x), x \in CAD is accomplished as follows:

\min_{v_t, z} \int_0^1 \| L v_t \|^2 \, dt + \sum_{n=1}^{N} \sum_{v=1}^{V} (O^{(v)} \phi(x_n) + b^{(v)})' Q_n^{(v)} (O^{(v)} \phi(x_n) + b^{(v)}). \quad (Equation 25)

Using the initialization v^{new} = 0, \phi^{new}(x) = x, x \in CAD, a mapping is generated by running an iterative algorithm until convergence, in which the vector field is updated from the current flow and residuals (Equation 26) and the mapping is regenerated:

\phi^{new}(x) = \int_0^1 v_t^{new}(\phi_t(x)) \, dt + x, \quad (Equation 27)

where D\phi_t again denotes the Jacobian of the flow.
In a further embodiment, the deformation may be performed in real time for the case when the rigid motions (i.e., the rotation/translation) which bring the avatar into correspondence with the one or more source 2D projections are not known. A similar approach to the one above is used, with the addition of an estimation of the rigid motions using the techniques described herein. The initialization u^{new} = 0 is used. Rigid motions are calculated using the rotation/translation techniques above to register the CAD model x \mapsto x + u^{new}(x) to each photograph, generating rigid motions O^{(v)new}, b^{(v)new}, v = 1, 2, \ldots. With O^{(v)new}, b^{(v)new} fixed from the previous step, the deformation of the CAD model x \mapsto x + u^{new}(x) or the large deformation x \to \phi(x) is computed using the above techniques to solve the real-time small-deformation or large-deformation problem:

(small) \min_{u, z} \| Lu \|^2 + \sum_{n=1}^{N} \sum_{v=1}^{V} (O^{(v)}(x_n + u(x_n)) + b^{(v)})' Q_n^{(v)} (O^{(v)}(x_n + u(x_n)) + b^{(v)}); \quad (Equation 28)

(large) \min_{v_t, z} \int_0^1 \| L v_t \|^2 \, dt + \sum_{n=1}^{N} \sum_{v=1}^{V} (O^{(v)} \phi(x_n) + b^{(v)})' Q_n^{(v)} (O^{(v)} \phi(x_n) + b^{(v)}). \quad (Equation 29)
In another embodiment, the avatar is deformed in real time using diffeomorphic deformations. The solution to the real-time deformation algorithm generates a deformation which may be used as an initial condition for the solution of the diffeomorphic deformation. Real-time diffeomorphic deformation is accomplished by incorporating the real-time deformation solution as an initial condition and then performing a small number (in the region of 1 to 10) of iterations of the diffeomorphic deformation calculation.
The deformation may include affine motions. For the affine motion A: x \to Ax, where A is the 3 \times 3 generalized linear matrix,

\min_{A, z} \sum_{n=1}^{N} \sum_{v=1}^{V} \| O^{(v)} A x_n + b^{(v)} - z_n^{(v)} \tilde{P}_n^{(v)} \|^2 = \min_{A} \sum_{n=1}^{N} \sum_{v=1}^{V} (O^{(v)} A x_n + b^{(v)})' Q_n^{(v)} (O^{(v)} A x_n + b^{(v)}), \quad (Equation 30)

and the least-squares estimator \hat{A}: x \to \hat{A} x is computed in closed form. \quad (Equation 31)
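A simplified numerical sketch of the affine estimator's structure: with direct 3D correspondences y_n = A x_n (dropping the projection and rigid-motion terms of Equation 30, an assumed simplification), the least-squares estimate has the closed form A = (sum_n y_n x_n')(sum_n x_n x_n')^{-1}.

```python
# Closed-form least-squares affine fit for direct 3D correspondences
# y_n = A x_n. This illustrates the estimator's structure only; the full
# Equation 31 also carries the projective Q matrices and rigid motions.
import numpy as np

def fit_affine_linear(xs, ys):
    Sxy = sum(np.outer(y, x) for x, y in zip(xs, ys))   # sum y_n x_n'
    Sxx = sum(np.outer(x, x) for x in xs)               # sum x_n x_n'
    return Sxy @ np.linalg.inv(Sxx)

A_true = np.array([[1.1, 0.2, 0.0],
                   [0.0, 0.9, 0.1],
                   [0.3, 0.0, 1.0]])
xs = [np.array(v, float) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]]
ys = [A_true @ x for x in xs]
A_est = fit_affine_linear(xs, ys)
```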
In many cases, both feature items in the projective imagery and the imagery itself can be used to drive the deformation of the avatar. Augmenting the source data to incorporate source imagery may improve the quality of the fit between the deformed avatar and the target face. To implement this, one more term is added to the deformation techniques. Let I be the measured imagery, which in general includes multiple measured images I^{(v)}, v = 1, 2, \ldots, each an indexed sequence of pixels indexed by p \in [0,1]^2, with the projection mapping points x = (x, y, z) \in \mathbb{R}^3 \mapsto p(x) = (p_1(x), p_2(x)). For the discrete setting of pixels in the source image plane with a color (R,G,B) template, the observed projective I(p) is an (R,G,B) vector and the projective matrix becomes P_x, operating on points (x, y, z) \in \mathbb{R}^3 according to the projective matrix P_x: (x, y, z) \mapsto (p_1(x, y, z), p_2(x, y, z)), the point x(p) being the revealed point (the closest point to the projection on the ray, i.e., the point that is not occluded) on the 3D CAD model which projects to the point p in the image plane. Next, the projected template matrices resulting from finite differences on the (R,G,B) components at the projective coordinate p of the template value \Pi are required. The norm is interpreted componentwise:

\tilde{I}(p) = I(p) - \Pi(p), \quad (Equation 32)

\tilde{\nabla}\Pi(p) = \nabla\Pi(p) P_{x(p)}, \quad (Equation 33)

with matrix norm \| A - B \|^2 = | A^r - B^r |^2 + | A^g - B^g |^2 + | A^b - B^b |^2. Associated with each image is a translation/rotation assumed already known from the previous rigid-motion calculation techniques. The following assumes there is one 2D image, with O, b the identity, and lets any of the movements be represented as x \to x + u(x). Then u(x) = Ox - x is a rotational motion, u(x) = b is a constant velocity, u(x) = \sum_i e_i E_i(x) is a motion constrained to a basis of functions such as "chin rotation," "eyebrow lift," etc., and the general motion u is given by minimizing:

\sum_{p} \| \tilde{I}(p) - \tilde{\nabla}\Pi(p) \, u(x(p)) \|^2 \quad (Equation 34)

= \sum_{p} u(x(p))' \tilde{\nabla}\Pi(p)' \tilde{\nabla}\Pi(p) \, u(x(p)) - 2 \tilde{I}(p)' \tilde{\nabla}\Pi(p) \, u(x(p)) + \| \tilde{I}(p) \|^2. \quad (Equation 35)

This is linear in u, so closed-form expressions exist for each of the forms of u; for example, for the unconstrained general spline motion,

\hat{u}(x(p)) = \left( \tilde{\nabla}\Pi(p)' \tilde{\nabla}\Pi(p) \right)^{-1} \tilde{\nabla}\Pi(p)' \tilde{I}(p). \quad (Equation 36)
This approach can be incorporated into the other embodiments of the present invention for the various possible deformations described herein.
In the situation where large numbers of feature points, curves, and subvolumes are to be automatically generated from the source projection(s) and 3D data (if any), image matching is performed directly on the source imagery or on the fundamental 3D volumes into which the source object can be divided. For the case where the avatar is generated from 2D projective photographic images, the measured target projective image has labeled points, curves, and subregions generated by diffeomorphic image matching. Defining a template projective exemplar face with all of the labeled submanifolds from the avatar, the projective exemplar can be transformed bijectively via the diffeomorphisms onto the target candidate photograph, thereby automatically labeling the target photograph into its constituent submanifolds. Given these submanifolds, the avatars can then be matched or transformed into the labeled photographs. Accordingly, in the image plane a deformation \phi: x \to \phi(x) satisfying \phi = \int_0^1 v_t(\phi_t(x)) \, dt + x, x \in \mathbb{R}^2 is generated. The template and target images I_0, I_1 are transformed to satisfy

\min_{v_t, t \in [0,1]} \int_0^1 \| L v_t \|^2 \, dt + \| I_0 \circ \phi^{-1} - I_1 \|^2. \quad (Equation 37)

The given diffeomorphism is applied to labeled points, curves, and areas in the template I_0, thereby labeling those points, curves, and areas in the target photograph.
When the target source data are in 3D, the diffeomorphisms are used to define bijective correspondence in the 3D background space, and the matching is performed in the volume rather than in the image plane.
The following techniques may be used to select avatar models automatically. Given a collection of avatar models \{CAD^a, a = 1, 2, \ldots\} and a set of measured photographs of the face of one individual, the task is to select the avatar model which is most representative of the individual face being analyzed. Let the avatar models be specified via points, curves, surfaces, or subvolumes. Assume, for example, N initial and target feature points, x_n^0, x_n \in \mathbb{R}^d, n = 1, \ldots, N, with x_n = x_n^0 + u(x_n^0), one for each avatar a = 1, 2, \ldots.
In one embodiment, the avatar is deformed with small deformations only and no rigid motions. For this embodiment, it is assumed that the measured feature items are all points from a single camera view which generated the projected source image in which the feature points were measured. The matching x_n^a \mapsto x_n^a + u(x_n^a), n = 1, \ldots, N is constructed, and the CAD^a model of smallest metric distance is selected. The optimum selected avatar model is the one closest to the candidate in the metric. Any of a variety of distance functions may be used in the selection of the avatar, including the large-deformation metric from the diffeomorphism mapping technique described above, the real-time metric described above, the Euclidean metric, and the similitude metric of Kendall. The technique is described herein using the real-time metric. When there is no rigid motion, the CAD model is selected to minimize the metric based on one or several sets of features from photographs, here described for one photograph:

CAD^{\hat{a}} = \arg\min_{CAD^a, a = 1, 2, \ldots} \min_{u, z} \| Lu \|^2 + \sum_{n=1}^{N} \| (x_n^a + u(x_n^a)) - z_n \tilde{P}_n \|^2

= \arg\min_{CAD^a, a = 1, 2, \ldots} \min_{u} \| Lu \|^2 + \sum_{n=1}^{N} (x_n^a + u(x_n^a))' Q_n (x_n^a + u(x_n^a)). \quad (Equation 38)

In other embodiments, metrics including unknown or known rigid motions, large-deformation metrics, or affine motions can be used, as described in Equations 28 and 29, respectively.
For selecting avatars given 3D information such as features of points, curves, surfaces and or subvolumes in the 3D volume, then the metric is selected which minimizes the distance between the measurements and the family of candidate CAD models. First, the K matrix defining the quadratic form metric measuring the distance is computed: f is, \χ , X, J iv(Xj ,X2 ) i ( j ,Xfl))
(Equation 39)
" ^N ' "^1 XN ' X2 ) ^N ' XN )
K may be, for example, K(xnx ) = diag(e a ' Xj ) . Next, the metric between the CAD models and the candidate photographic feature points is computed according to x" r- xn ' = Ax", +b + u(x ),n = l,...,N and CAD" of small distance is selected. Exact matching or inexact matching ( σ = 0 or inexact σ ≠ 0 ) can be used:
$$\widehat{CAD}_{\mathrm{exact}} = \arg\min_{CAD^\alpha,\ \alpha=1,2,\ldots}\ \min_{A,b}\ \sum_{i,j} (A x_i + b - x_i')^t \left[K^{-1}\right]_{ij} (A x_j + b - x_j'); \qquad \text{(Equation 40)}$$

$$\widehat{CAD}_{\mathrm{inexact}} = \arg\min_{CAD^\alpha,\ \alpha=1,2,\ldots}\ \min_{A,b}\ \sum_{i,j} (A x_i + b - x_i')^t \left[(K + \sigma^2 I)^{-1}\right]_{ij} (A x_j + b - x_j'). \qquad \text{(Equation 41)}$$
The minimum norm is determined by the error between the CAD model feature points and the photographic feature points.
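A simplified sketch of the exact/inexact affine matching follows. As a labeled simplification (not the patent's general case), it assumes the kernel matrix K is diagonal, so the quadratic form of Equations 40 and 41 reduces to per-point weights w_i = 1/(k_i + σ²), and the affine motion (A, b) is obtained by weighted linear least squares; the function names are illustrative.

```python
import numpy as np

def fit_affine(X, Y, kernel_diag=None, sigma2=0.0):
    """Weighted least-squares affine fit x -> A x + b between matched 3D
    feature sets X, Y of shape (N, 3).  With a diagonal kernel the
    quadratic form reduces to weights w_i = 1/(k_i + sigma2); sigma2 = 0
    gives exact matching, sigma2 > 0 the inexact case."""
    N = len(X)
    if kernel_diag is None:
        w = np.ones(N)
    else:
        w = 1.0 / (np.asarray(kernel_diag) + sigma2)
    # Homogeneous coordinates: solve min sum_i w_i |M^t [x_i; 1] - y_i|^2
    Xh = np.hstack([X, np.ones((N, 1))])                 # (N, 4)
    W = np.sqrt(w)[:, None]
    M, *_ = np.linalg.lstsq(W * Xh, W * Y, rcond=None)   # (4, 3)
    A, b = M[:3].T, M[3]
    return A, b
```

The residual of the fit then plays the role of the minimum norm between the CAD model feature points and the photographic feature points.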
The present invention may also be used to match to articulated target objects. The diffeomorphism and real-time mapping techniques carry the template 3D representations bijectively onto the target models, carrying all of the information in the template. The template models are labeled with different regions corresponding to different components of the articulated object. For example, in the case of a face, the articulated regions may include teeth, eyebrows, eyelids, eyes, and jaw. Each of these subcomponents can be articulated during motion of the model according to an articulation model specifying allowed modes of articulation. The mapping techniques carry these triangulated subcomponents onto the targets, thereby labeling them with their subcomponents automatically. The resulting selected CAD model therefore has its constituent parts automatically labeled, thereby allowing each avatar to be articulated during motion sequences. In the case when direct 3D measurement of the source object is available, $x_n, y_n \in \mathbb{R}^3$, $n = 1, \ldots, N$, from points, curves, surfaces, or subvolumes, the techniques for determining the rotation/translation correspondences are unchanged. However, since the matching terms involve direct measurements in the volume, there is no need for the intermediate step of determining the dependence on the unknown z-depth via the MMSE technique. Accordingly, the best matching rigid motion corresponds to:
$$\min_{O,\,b}\ \sum_{n=1}^{N} \left\| O x_n + b - y_n \right\|^2. \qquad \text{(Equation 42)}$$
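The minimization of Equation 42 over rotations O and translations b has a well-known closed-form solution via the singular value decomposition (the Procrustes/Kabsch alignment). The sketch below illustrates that classical solution; it is not claimed to be the embodiment's exact computation.

```python
import numpy as np

def optimal_rigid_motion(X, Y):
    """Closed-form minimizer of sum_n |O x_n + b - y_n|^2 over rotations O
    and translations b (Kabsch/Procrustes).  X, Y are (N, 3) matched sets."""
    xm, ym = X.mean(0), Y.mean(0)
    H = (X - xm).T @ (Y - ym)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    O = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    b = ym - O @ xm
    return O, b
```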
The real-time deformation corresponds to:
$$\min_{u}\ \|Lu\|^2 + \sum_{n=1}^{N} \left\| \left(x_n + u(x_n)\right) - y_n \right\|^2. \qquad \text{(Equation 43)}$$

The diffeomorphism deformation, with $\varphi(x) = \int_0^1 v_t(\varphi_t(x))\,dt + x$, $x \in \mathbb{R}^3$, corresponds to:

$$\min_{v_t,\ t \in [0,1]}\ \int_0^1 \|L v_t\|^2\,dt + \sum_{n=1}^{N} \left\| \varphi(x_n) - y_n \right\|^2. \qquad \text{(Equation 44)}$$
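The real-time deformation of Equation 43 admits a closed-form radial-basis solution of the form u(x) = Σ_n K(x, x_n) β_n. The sketch below uses a Gaussian kernel as a stand-in for the Green's kernel of the operator L, with a small regularizer λ in place of exact matching; both choices are assumptions made for illustration.

```python
import numpy as np

def landmark_deformation(X, Y, a=1.0, lam=1e-3):
    """Closed-form landmark deformation u(x) = sum_n K(x, x_n) beta_n
    carrying template points X (N,3) toward targets Y (N,3).  A Gaussian
    kernel of width a stands in for the Green's kernel of L; lam
    regularizes the linear solve."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * a ** 2))
    G = K(X, X)
    beta = np.linalg.solve(G + lam * np.eye(len(X)), Y - X)  # (N, 3)
    def u(P):
        return K(np.atleast_2d(P), X) @ beta
    return u

# A deformed template point is x + u(x).
```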
The techniques described herein also allow for the automated calibration of camera parameters, such as the aspect ratio and field of view. The set of features $x_i = (x_i, y_i, z_i)$, $i = 1, \ldots, N$, is defined on the CAD model. The positive-depth projective geometry mapping, with $a_1 = \frac{1}{\gamma_1}$, $a_2 = \frac{1}{\gamma_2}$, is defined according to $(x, y, z) \mapsto p(x, y, z) = \left(\frac{a_1 x}{z}, \frac{a_2 y}{z}\right)$, $z \in [0, \infty)$, $n > 0$. Given are observations of some features through the projective geometry, $p_i = \left(\frac{x_i}{z_i}, \frac{y_i}{z_i}\right)$. The calibration of the camera is determined under the assumption that there is no transformation (affine or other) of the avatar. The z value is parameterized by incorporating the frustum distance n, so that all depth coordinates are the above coordinates plus the frustum depth. Videos can show different aspect ratios $AR = \frac{\gamma_2}{\gamma_1}$ and fields of view $FOV = 2 \tan^{-1} \frac{1}{n}$. The technique estimates the aspect ratios $\gamma_1, \gamma_2$ from measured points $P_i = (\gamma_1 p_{i1}, \gamma_2 p_{i2}, 1)$, $i = 1, \ldots, N$:

$$\min_{O,\,b,\,\gamma_1,\,\gamma_2,\,z_i}\ \sum_{i=1}^{N} \left\| O x_i + b - z_i P_i \right\|^2. \qquad \text{(Equation 45)}$$
Using the initialization $\gamma_1^{new} = \gamma_2^{new} = 1$, the calculation is run to convergence. In the first step, the data terms and the optimum rotations/translations are computed (Equations 46 and 47). Next, the resulting expression in $\gamma_1, \gamma_2$ is maximized using an optimization method, such as Newton-Raphson, gradient, or conjugate gradient. Using the gradient algorithm, for example, the calculation is run to convergence, and the first step is repeated. The gradient method is shown here, with the step size selected for stability:

$$\gamma^{new} = \gamma^{old} + \nabla r(\gamma^{old}) \cdot \text{step-size}, \qquad \text{(Equation 48)}$$

with the gradient $\nabla r$ given by Equations 49 and 50.
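The alternating structure of the calibration estimate can be illustrated as follows. This sketch substitutes closed-form coordinate updates (for the rigid motion, the depths z_i, and the scales γ₁, γ₂) for the patent's gradient iteration of Equations 48-50, and the simplified cost Σ_i ‖O x_i + b − z_i P_i‖² with P_i = (γ₁ p_i1, γ₂ p_i2, 1) is an assumption. Because each update minimizes the cost exactly in its own variables, the iteration is monotone.

```python
import numpy as np

def rigid_fit(X, Y):
    """Kabsch: optimal rotation O and translation b mapping X onto Y."""
    xm, ym = X.mean(0), Y.mean(0)
    U, _, Vt = np.linalg.svd((X - xm).T @ (Y - ym))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    O = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return O, ym - O @ xm

def calibrate_aspect(X, p, iters=50):
    """Alternately estimate scales (g1, g2), per-point depths z_i, and the
    rigid motion (O, b) minimizing sum_i |O x_i + b - z_i P_i|^2 with
    P_i = (g1*p_i1, g2*p_i2, 1)."""
    g, z = np.ones(2), np.ones(len(X))
    for _ in range(iters):
        P = np.column_stack([g[0] * p[:, 0], g[1] * p[:, 1], np.ones(len(X))])
        O, b = rigid_fit(X, z[:, None] * P)           # exact in (O, b)
        M = X @ O.T + b                               # moved model points
        z = np.einsum('ij,ij->i', M, P) / np.einsum('ij,ij->i', P, P)  # exact in z
        for k in range(2):                            # exact in each scale
            g[k] = np.sum(z * p[:, k] * M[:, k]) / np.sum((z * p[:, k]) ** 2)
    return g, O, b, z
```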
The techniques described herein may be used to compare a source 3D object to a single reference object. 2D representations of the source object and the reference object are created, and the correspondence between them is characterized using mathematical optimization and projective geometry. Typically, the correspondence is characterized by specifying the viewpoint from which the 2D source projection was captured.
Refer now to Figure 4, which illustrates a hardware system 400 incorporating the invention. As indicated therein, the system includes a video source 402 (e.g., a video camera or a scanning device) which supplies a still input image to be analyzed. The output of the video source 402 is digitized as a frame into an array of pixels by a digitizer 404. The digitized images are transmitted along the system bus 406, over which all system components communicate, and may be stored in a mass storage device (such as a hard disc or optical storage unit) 408 as well as in main system memory 410 (specifically, within a partition defining a series of identically sized input image buffers 412).
The operation of the illustrated system is directed by a central-processing unit ("CPU") 414. To facilitate rapid execution of the image-processing operations hereinafter described, the system preferably contains a graphics or image-processing board 416; this is a standard component well known to those skilled in the art.
The user interacts with the system using a keyboard 418 and a position-sensing device (e.g., a mouse) 420. The output of either device can be used to designate information or select particular points or areas of the screen display 422 to direct functions performed by the system. The main memory 410 contains a group of modules that control the operation of the CPU 414 and its interaction with the other hardware components. An operating system 424 directs the execution of low-level, basic system functions such as memory allocation, file management, and operation of mass storage devices 408. At a higher level, the analyzer 426, implemented as a series of stored instructions, directs execution of the primary functions performed by the invention, as discussed below; and instructions defining a user interface 428 allow straightforward interaction over the screen display 422. The user interface 428 generates words or graphical images on the display 422 to prompt action by the user, and accepts commands from the keyboard 418 and/or position-sensing device 420. Finally, the memory 410 includes a partition 430 for storing a database of 3D reference avatars, as described above.

The contents of each image buffer 412 define a "raster," i.e., a regular 2D pattern of discrete pixel positions that collectively represent an image and may be used to drive (e.g., by means of image-processing board 416 or an image server) the screen display 422 to display that image. The content of each memory location in a frame buffer directly governs the appearance of a corresponding pixel on the display 422. It must be understood that although the modules of main memory 410 have been described separately, this is for clarity of presentation only; so long as the system performs all the necessary functions, it is immaterial how they are distributed within the system and the programming architecture thereof. Likewise, though conceptually organized as grids, pixelmaps need not actually be stored digitally in this fashion. Rather, for convenience of memory utilization and transmission, the raster pattern is usually encoded as an ordered array of pixels.
As noted above, execution of the key tasks associated with the present invention is directed by the analyzer 426, which governs the operation of the CPU 414 and controls its interaction with main memory 410 in performing the steps necessary to match and deform reference 3D representations to match a target multifeatured object. Figure 5 illustrates the components of a preferred implementation of the analyzer 426. The projection module 502 takes a 3D model and makes a 2D projection of it onto any chosen plane. In general, an efficient projection module 502 will be required in order to create numerous projections over the space of rotations and translations for each of the candidate reference avatars. The deformation module 504 performs one or more types of deformation on an avatar in order to make it more closely resemble the source object. The deformation is performed in 3D space, with every point defining the avatar mesh being free to move in order to optimize the fit to the conditional mean estimates of the reverse-projected feature items from the source imagery. In general, deformation is only applied to the best-fitting reference object, if more than one reference object is supplied. The rendering module 506 allows for the rapid projection of a 3D avatar into 2D, with the option of including the specification of the avatar lighting. The 2D projection corresponds to the chosen lighting of the 3D avatar. The feature detection module 508 searches for specific feature items in the 2D source projection. The features may include eyes, nostrils, and lips, and may incorporate probes that operate at several different pixel scales.
Figure 6 illustrates the functions of the invention performed in main memory. In step 602, the system examines the source imagery and automatically detects features of a face, such as eyeballs, nostrils, and lips, that can be used for matching purposes, as described above. In step 604, the detected feature items are reverse projected into the coordinate frame of the candidate avatar, as described above and using equation 7. In step 606, the optimum rotation/translation of the candidate avatar is estimated using the techniques described above and using equations 8, 9 and 10. In step 608, any prior information that may be available about the position of the source object with respect to the available 2D projections is added into the computation, as described herein using equations 11-13. When 3D measurements of the source are available, this data is used to constrain the rigid motion search, as shown in step 610 and as described above with reference to equations 41-43. When the rotation/translation search 606 is completed over all the reference 3D avatars, the best-fitting avatar is selected in step 612, as described above, with reference to equations 38-40. Subsequently, the best-fitting avatar located in step 612 is deformed in step 614. 3D measurements of the source object 610, if any, are used to constrain the deformation 614. In addition, portions of the source imagery 616 itself may be used to influence the deformation 614.
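The sequence of steps in Figure 6 can be summarized as a pipeline skeleton. All five callables below (detect, reverse_project, best_rigid_motion, metric, deform) are hypothetical stand-ins for the modules described in the text, not actual module interfaces.

```python
import numpy as np

def match_and_deform(source_image, avatars, detect, reverse_project,
                     best_rigid_motion, metric, deform):
    """Skeleton of the matching pipeline of Figure 6.  Each callable is a
    placeholder for the corresponding module in the text."""
    features = detect(source_image)                 # step 602: feature detection
    costs, poses = [], []
    for avatar in avatars:
        z = reverse_project(features, avatar)       # step 604: reverse projection
        O, b = best_rigid_motion(avatar, z)         # step 606: rigid motion search
        poses.append((O, b))
        costs.append(metric(avatar, z, O, b))
    i = int(np.argmin(costs))                       # step 612: select best avatar
    return deform(avatars[i], features, *poses[i])  # step 614: deform selection
```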
The invention provides for several different kinds of deformation which may be optionally applied to the best-fitting reference avatar in order to improve its correspondence with the target object. The deformations may include real-time deformation without rigid motions, in which a closed-form expression is found for the deformation, as described above using equations 18 and 19. A diffeomorphic deformation of the avatar with no rigid motions may be applied (equations 22-24). Alternatively, a real-time deformation with unknown rigid motion of the avatar may be deployed (equations 28, 29). A real-time diffeomorphic deformation may be applied to the avatar by iterating the real-time deformation. The avatar may be deformed using affine motions (equations 30, 31). The deformation of the avatar may be guided by matching a projection to large numbers of feature items in the source data, including the identification of submanifolds within the avatar, as described above with reference to equation 37. When the target object is described by an articulated model, the deformations described above may be applied to each articulated component separately.
The invention enables camera parameters, such as aspect ratio and field of view, to be estimated, as shown in step 618 and described above, with reference to equations 44-49.
As noted previously, while certain aspects of the hardware implementation have been described for the case where the target object is a face and the reference object is an avatar, the invention is not limited to the matching of faces, but may be used for matching any multifeatured object using a database of reference 3D representations that correspond to the generic type of the target object to be matched.
It will therefore be seen that the foregoing represents a highly extensible and advantageous approach to the generation of 3D models of a target multifeatured object when only partial information describing the object is available. The terms and expressions employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof, but it is recognized that various modifications are possible within the scope of the invention claimed. For example, the various modules of the invention can be implemented on a general-purpose computer using appropriate software instructions, or as hardware circuits, or as mixed hardware-software combinations (wherein, for example, pixel manipulation and rendering is performed by dedicated hardware components). What is claimed is:

Claims

1. A method of comparing at least one source 2D projection of a source multifeatured object to a reference library of 3D reference objects, the method comprising the steps of: a. providing a plurality of reference 3D representations of generically similar multifeatured objects; and b. performing a viewpoint-invariant search of the reference 3D representations to locate the reference 3D representation having a 2D projection most resembling the at least one source 2D projection.
2. The method of claim 1 wherein the search step comprises, for each reference 3D representation, searching over a range of possible 2D projections of the 3D representation without actually generating any projections.
3. The method of claim 2 wherein searching over a range of possible 2D projections comprises computing a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections.
4. The method of claim 3 wherein the optimum rigid motion is determined by: a. estimating a conditional mean of feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D representation, which feature items are projectionally consistent with feature items in the at least one source 2D projection; and b. generating, for rigid motions of the reference 3D representation, minimum mean-squared error estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D representation.
5. The method of claim 4 wherein the feature items are generated from the imagery using dynamic programming.
6. The method of claim 4 wherein the feature items are found automatically by using correspondences between source imagery and projected imagery of a reference 3D object.
7. The method of claim 3 wherein the rigid motions comprise pitch, roll, yaw, and translation in three dimensions.
8. The method of claim 7 wherein automatic camera calibration is performed by estimation of camera parameters from image landmarks.
9. The method of claim 1 wherein resemblance is determined by a degree of alignment between feature items in the 3D representation and corresponding feature items in the at least one source 2D projection.
10. The method of claim 4 further comprising constraining the rigid motion based on known 3D position information associated with the at least one source 2D projection.
11. The method of claim 1 further comprising the step of creating a 3D representation of the at least one source 2D projection by deforming the located reference 3D representation so as to resemble the source multifeatured object.
12. The method of claim 11 wherein the located reference 3D representation is deformed using large deformation diffeomorphisms, thereby preserving the geometry and topology of the reference 3D representation.
13. The method of claim 11 wherein the deformation enforces constraints on the symmetry of the reference 3D representation.
14. The method of claim 11 wherein the deformation step comprises deforming the located reference 3D representation so that feature items in the at least one source 2D projection align with corresponding feature items in the located reference 3D representation.
15. The method of claim 11 wherein the deformation step comprises deforming the located reference 3D representation to optimize the match between the projection of a plurality of points on the located reference 3D representation and the at least one source 2D projection.
16. The method of claim 11 wherein the deformation occurs without rigid motions.
17. The method of claim 11 wherein the deformation includes at least one of rigid motions and affine motions.
18. The method of claim 11 wherein the deformation is constrained by at least one of known 3D position information associated with at least one source 2D projection, and 3D data ofthe source multifeatured object.
19. The method of claim 11 wherein the deformation is performed using a closed form expression.
20. The method of claim 11 wherein the deformation is performed substantially in real time.
21. A method of comparing at least one source 2D projection of a source face to a reference library of 3D reference avatars, the method comprising the steps of: a. providing a plurality of reference 3D representations of avatars; and b. performing a viewpoint-invariant search of the reference 3D avatars to locate the reference 3D avatar having a 2D projection most resembling the at least one source 2D projection.
22. The method of claim 21 wherein the search step comprises, for each reference 3D avatar, searching over a range of possible 2D projections of the 3D avatar without actually generating any projections.
23. The method of claim 22 wherein searching over a range of possible 2D projections comprises computing a rigid motion of the reference 3D avatar optimally consistent with a viewpoint of the source face in at least one of the 2D projections.
24. The method of claim 23 wherein the optimum rigid motion is determined by: a. estimating a conditional mean of feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D avatar, which feature items are projectionally consistent with feature items in the at least one source 2D projection; and b. generating, for rigid motions of the reference 3D avatar, minimum mean-squared error estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D avatar.
25. The method of claim 21 further comprising the step of creating a 3D avatar of the at least one source 2D projection by deforming the located reference 3D avatar so as to resemble the source face.
26. The method of claim 21 wherein resemblance is determined by a degree of alignment between feature items in the 3D avatar and the corresponding feature items in the at least one source 2D projection.
27. The method of claim 25 wherein the deformation step comprises deforming the located reference avatar to optimize the match between the projection of a plurality of points on the located reference avatar and the at least one source 2D projection.
28. A system for comparing at least one source 2D projection of a source multifeatured object to a reference library of 3D reference objects, the system comprising: a. a database comprising a plurality of reference 3D representations of generically similar multifeatured objects; and b. an analyzer for performing a viewpoint-invariant search of the reference 3D representations to locate the reference 3D representation having a 2D projection most resembling the at least one source 2D projection.
29. The system of claim 28 wherein the analyzer searches, for each reference 3D representation, over a range of possible 2D projections of the 3D representation without actually generating any projections.
30. The system of claim 29 wherein the analyzer computes a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections.
31. The system of claim 30 wherein the analyzer is configured to determine the optimum rigid motion by: a. estimating a conditional mean of feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D representation, which feature items are projectionally consistent with feature items in the at least one source 2D projection; and b. generating, for rigid motions of the reference 3D representation, minimum mean-squared error estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D representation.
32. The system of claim 31 wherein the feature items are generated from the imagery using dynamic programming.
33. The system of claim 31 wherein the feature items are found automatically by using correspondences between source imagery and projected imagery of a reference 3D object.
34. The system of claim 30 wherein the rigid motions comprise pitch, roll, yaw, and translation in three dimensions.
35. The system of claim 34 wherein the analyzer is further configured to perform automatic camera calibration by estimation of camera parameters from image landmarks.
36. The system of claim 28 wherein resemblance is determined by a degree of alignment between feature items in the 3D representation and corresponding feature items in the at least one source 2D projection.
37. The system of claim 31 wherein the analyzer constrains the rigid motion based on known 3D position information associated with the at least one source 2D projection.
38. The system of claim 28 further comprising a deformation module for creating a 3D representation of the at least one source 2D projection by deforming the located reference 3D representation so as to resemble the source multifeatured object.
39. The system of claim 38 wherein the deformation module deforms the located reference 3D representation using large deformation diffeomorphism, thereby preserving the geometry and topology of the reference 3D representation.
40. The system of claim 38 wherein the deformation module enforces constraints on the symmetry of the deformed located reference 3D representation.
41. The system of claim 38 wherein the deformation module deforms the located reference 3D representation so that feature items in the at least one source 2D projection align with corresponding feature items in the located reference 3D representation.
42. The system of claim 38 wherein the deformation module deforms the located reference 3D representation to optimize the match between the projection of a plurality of points on the located reference 3D representation and the at least one source 2D projection.
43. The system of claim 38 wherein the deformation module does not use rigid motions.
44. The system of claim 38 wherein the deformation module uses at least one of rigid motions and affine motions.
45. The system of claim 38 wherein operation of the deformation module is constrained by at least one of known 3D position information associated with at least one source 2D projection, and 3D data of the source multifeatured object.
46. The system of claim 38 wherein the deformation module operates in accordance with a closed-form expression.
47. The system of claim 38 wherein the deformation module performs the deformation substantially in real time.
48. A system for comparing at least one source 2D projection of a source face to a reference library of 3D reference avatars, the system comprising: a. a database comprising a plurality of reference 3D representations of avatars; and b. an analyzer for performing a viewpoint-invariant search of the reference 3D avatars to locate the reference 3D avatar having a 2D projection most resembling the at least one source 2D projection.
49. The system of claim 48 wherein the analyzer searches, for each reference 3D representation, over a range of possible 2D projections of the 3D avatar without actually generating any projections.
50. The system of claim 49 wherein the analyzer computes a rigid motion of the reference 3D avatar optimally consistent with a viewpoint of the source face in at least one of the 2D projections.
51. The system of claim 50 wherein the analyzer is configured to determine the optimum rigid motion by: a. estimating a conditional mean of feature items comprising points, curves, surfaces, and subvolumes in a 3D coordinate space associated with the reference 3D avatar, which feature items are projectionally consistent with feature items in the at least one source 2D projection; and b. generating, for rigid motions of the reference 3D avatar, minimum mean-squared error estimates between the conditional mean estimate of the projected feature items and corresponding feature items of the reference 3D avatar.
52. The system of claim 48 wherein resemblance is determined by a degree of alignment between feature items in the 3D avatar and corresponding feature items in the at least one source 2D projection.
53. The system of claim 48 further comprising a deformation module for creating a 3D representation of the at least one source 2D projection by deforming the located reference 3D avatar so as to resemble the source face.
54. The system of claim 53 wherein the deformation module deforms the located reference avatar to optimize the match between the projection of a plurality of points on the located reference avatar and the at least one source 2D projection.
55. A method of comparing a source 3D object to at least one reference 3D object, the method comprising the steps of: a. creating 2D representations of the source object and the at least one reference object; and b. using projective geometry to characterize a correspondence between the source 3D object and a reference 3D object.
56. A system for comparing a source 3D object to at least one reference 3D object, the system comprising: a. a projection module for creating 2D representations of the source object and the at least one reference object; and b. an analyzer which uses projective geometry to characterize a correspondence between the source 3D object and a reference 3D object.
57. A method of creating a 3D representation from at least one source 2D projection of a source multifeatured object, the method comprising the steps of: a. providing a set of at least one reference 3D representations of generically similar multifeatured objects; b. locating a reference 3D representation from the set; and c. creating a 3D representation of the at least one source 2D projection by deforming the located reference 3D representation in accordance with the at least one source 2D projection so as to resemble the source multifeatured object.
58. The method of claim 57 further comprising the step of basing the locating step on the at least one source 2D projection.
59. The method of claim 57 wherein: a. the set comprises a plurality of reference 3D representations; and b. the locating step comprises performing a viewpoint-invariant search of the set of reference 3D representations to locate the reference 3D representation having a 2D projection most resembling the at least one source 2D projection.
60. The method of claim 59 wherein the search step comprises, for each reference 3D representation, searching over a range of possible 2D projections of the 3D representation without actually generating any projections.
61. The method of claim 57 wherein searching over a range of possible 2D projections comprises computing a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections.
62. The method of claim 57 wherein resemblance is determined by a degree of alignment between feature items in the 3D representation and corresponding feature items in the at least one source 2D projection.
63. The method of claim 57 wherein the deformation step comprises deforming the located reference 3D representation using large deformation diffeomorphism, thereby preserving the geometry and topology of the reference 3D representation.
64. The method of claim 57 wherein the deformation step comprises deforming the located reference 3D representation so that feature items in the at least one source 2D projection align with corresponding feature items in the located reference 3D representation.
65. The method of claim 57 wherein the deformation step comprises deforming the located reference 3D representation to optimize the match between the projection of a plurality of points on the located reference 3D representation and the at least one source 2D projection.
66. The method of claim 57 wherein the deformation is performed substantially in real time.
67. A system for creating a 3D representation from at least one source 2D projection of a source multifeatured object, the system comprising: a. a database comprising at least one reference 3D representation of generically similar multifeatured objects; b. an analyzer for locating a reference 3D representation from the set; and c. a deformation module for creating a 3D representation of the at least one source 2D projection by deforming the located reference 3D representation in accordance with the at least one source 2D projection so as to resemble the source multifeatured object.
68. The system of claim 67 in which the analyzer bases the location of the 3D reference representation on the at least one source 2D projection.
69. The system of claim 67 wherein: a. the database comprises a plurality of reference 3D representations; and b. the analyzer locates the reference 3D representation having a 2D projection most resembling the at least one source 2D projection by performing a viewpoint-invariant search of the set of reference 3D representations.
70. The system of claim 69 wherein the analyzer searches, for each reference 3D representation, over a range of possible 2D projections of the 3D representation without actually generating any projections.
71. The system of claim 70 wherein the search over a range of possible 2D projections comprises computing a rigid motion of the reference 3D representation optimally consistent with a viewpoint of the source multifeatured object in at least one of the 2D projections.
72. The system of claim 67 wherein resemblance is determined by a degree of alignment between feature items in the 3D representation and corresponding feature items in the at least one source 2D projection.
73. The system of claim 67 wherein the deformation module deforms the located reference 3D representation using large deformation diffeomorphism, thereby preserving the geometry and topology of the reference 3D representation.
74. The system of claim 67 wherein the deformation module deforms the located reference 3D representation so that feature items in the at least one source 2D projection align with corresponding feature items in the located reference 3D representation.
75. The system of claim 67 wherein the deformation module deforms the located reference 3D representation to optimize the match between the projection of a plurality of points on the located reference 3D representation and the at least one source 2D projection.
76. The system of claim 67 wherein the deformation is performed substantially in real time.
PCT/US2004/006604 2003-03-06 2004-03-05 Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery WO2004081853A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP04717974A EP1599828A1 (en) 2003-03-06 2004-03-05 Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
JP2006509130A JP2006520054A (en) 2003-03-06 2004-03-05 Image matching from invariant viewpoints and generation of 3D models from 2D images

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US45243103P 2003-03-06 2003-03-06
US45242903P 2003-03-06 2003-03-06
US45243003P 2003-03-06 2003-03-06
US60/452,429 2003-03-06
US60/452,430 2003-03-06
US60/452,431 2003-03-06

Publications (1)

Publication Number Publication Date
WO2004081853A1 true WO2004081853A1 (en) 2004-09-23

Family

ID=32995971

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/US2004/006614 WO2004081854A1 (en) 2003-03-06 2004-03-05 Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery
PCT/US2004/006827 WO2004081855A1 (en) 2003-03-06 2004-03-05 Generation of image databases for multifeatured objects
PCT/US2004/006604 WO2004081853A1 (en) 2003-03-06 2004-03-05 Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery

Family Applications Before (2)

Application Number Title Priority Date Filing Date
PCT/US2004/006614 WO2004081854A1 (en) 2003-03-06 2004-03-05 Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery
PCT/US2004/006827 WO2004081855A1 (en) 2003-03-06 2004-03-05 Generation of image databases for multifeatured objects

Country Status (4)

Country Link
US (4) US7643685B2 (en)
EP (3) EP1599829A1 (en)
JP (3) JP2006522411A (en)
WO (3) WO2004081854A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007064836A (en) * 2005-08-31 2007-03-15 Kyushu Institute Of Technology Algorithm for automating camera calibration
EP1840796A3 (en) * 2006-03-29 2011-11-30 NEC Corporation Restoring and collating system and method for 3-dimensional face data
WO2012092946A1 (en) * 2011-01-07 2012-07-12 Martin Tank Method and tooth restoration determination system for determining tooth restorations

Families Citing this family (168)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5020458B2 (en) * 2001-11-24 2012-09-05 フェニックス スリーディー インコーポレイテッド Generation of stereoscopic image sequence from 2D image sequence
US6947579B2 (en) * 2002-10-07 2005-09-20 Technion Research & Development Foundation Ltd. Three-dimensional face recognition
US7421098B2 (en) * 2002-10-07 2008-09-02 Technion Research & Development Foundation Ltd. Facial recognition and the open mouth problem
EP2479726B9 (en) * 2003-10-21 2013-10-23 Nec Corporation Image comparison system and image comparison method
US20050152504A1 (en) * 2004-01-13 2005-07-14 Ang Shih Method and apparatus for automated tomography inspection
WO2006002320A2 (en) * 2004-06-23 2006-01-05 Strider Labs, Inc. System and method for 3d object recognition using range and intensity
US7542034B2 (en) 2004-09-23 2009-06-02 Conversion Works, Inc. System and method for processing video images
US7480414B2 (en) * 2004-10-14 2009-01-20 International Business Machines Corporation Method and apparatus for object normalization using object classification
ES2369021T3 (en) * 2004-10-22 2011-11-24 Shiseido Company, Limited PROCEDURE FOR CATEGORIZING LIPS.
US20060127852A1 (en) * 2004-12-14 2006-06-15 Huafeng Wen Image based orthodontic treatment viewing system
KR100601989B1 (en) * 2005-02-07 2006-07-18 삼성전자주식회사 Apparatus and method for estimating 3d face shape from 2d image and computer readable media for storing computer program
JP4852764B2 (en) * 2005-03-04 2012-01-11 国立大学法人 奈良先端科学技術大学院大学 Motion measuring device, motion measuring system, in-vehicle device, motion measuring method, motion measuring program, and computer-readable recording medium
US7657126B2 (en) * 2005-05-09 2010-02-02 Like.Com System and method for search portions of objects in images and features thereof
US20080177640A1 (en) 2005-05-09 2008-07-24 Salih Burak Gokturk System and method for using image analysis and search in e-commerce
US7660468B2 (en) * 2005-05-09 2010-02-09 Like.Com System and method for enabling image searching using manual enrichment, classification, and/or segmentation
US8732025B2 (en) 2005-05-09 2014-05-20 Google Inc. System and method for enabling image recognition and searching of remote content on display
US7760917B2 (en) 2005-05-09 2010-07-20 Like.Com Computer-implemented method for performing similarity searches
US7783135B2 (en) 2005-05-09 2010-08-24 Like.Com System and method for providing objectified image renderings using recognition information from images
US7945099B2 (en) 2005-05-09 2011-05-17 Like.Com System and method for use of images with recognition analysis
US7519200B2 (en) 2005-05-09 2009-04-14 Like.Com System and method for enabling the use of captured images through recognition
WO2006119629A1 (en) * 2005-05-11 2006-11-16 Optosecurity Inc. Database of target objects suitable for use in screening receptacles or people and method and apparatus for generating same
EP1886257A1 (en) 2005-05-11 2008-02-13 Optosecurity Inc. Method and system for screening luggage items, cargo containers or persons
US7991242B2 (en) 2005-05-11 2011-08-02 Optosecurity Inc. Apparatus, method and system for screening receptacles and persons, having image distortion correction functionality
US7929775B2 (en) * 2005-06-16 2011-04-19 Strider Labs, Inc. System and method for recognition in 2D images using 3D class models
US20070080967A1 (en) * 2005-10-11 2007-04-12 Animetrics Inc. Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
US7961937B2 (en) * 2005-10-26 2011-06-14 Hewlett-Packard Development Company, L.P. Pre-normalization data classification
US8094928B2 (en) * 2005-11-14 2012-01-10 Microsoft Corporation Stereo video for gaming
US20070124330A1 (en) * 2005-11-17 2007-05-31 Lydia Glass Methods of rendering information services and related devices
TW200725433A (en) * 2005-12-29 2007-07-01 Ind Tech Res Inst Three-dimensional face recognition system and method thereof
US9101990B2 (en) 2006-01-23 2015-08-11 Hy-Ko Products Key duplication machine
WO2007087389A2 (en) 2006-01-23 2007-08-02 Hy-Ko Products Company Key duplication machine
EP1984898A4 (en) * 2006-02-09 2010-05-05 Nms Comm Corp Smooth morphing between personal video calling avatars
US9690979B2 (en) 2006-03-12 2017-06-27 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
US8571272B2 (en) * 2006-03-12 2013-10-29 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
JP4785598B2 (en) * 2006-04-07 2011-10-05 株式会社日立製作所 Similar shape search device
US7899232B2 (en) 2006-05-11 2011-03-01 Optosecurity Inc. Method and apparatus for providing threat image projection (TIP) in a luggage screening system, and luggage screening system implementing same
US8494210B2 (en) 2007-03-30 2013-07-23 Optosecurity Inc. User interface for use in security screening providing image enhancement capabilities and apparatus for implementing same
US8233702B2 (en) * 2006-08-18 2012-07-31 Google Inc. Computer implemented technique for analyzing images
WO2008036354A1 (en) 2006-09-19 2008-03-27 Braintech Canada, Inc. System and method of determining object pose
TWI332639B (en) * 2006-09-27 2010-11-01 Compal Electronics Inc Method for displaying expressional image
US7966567B2 (en) * 2007-07-12 2011-06-21 Center'd Corp. Character expression in a geo-spatial environment
WO2008076942A1 (en) * 2006-12-15 2008-06-26 Braintech Canada, Inc. System and method of identifying objects
US8170297B2 (en) * 2007-01-19 2012-05-01 Konica Minolta Holdings, Inc. Face authentication system and face authentication method
US8655052B2 (en) 2007-01-26 2014-02-18 Intellectual Discovery Co., Ltd. Methodology for 3D scene reconstruction from 2D image sequences
JP2008187591A (en) * 2007-01-31 2008-08-14 Fujifilm Corp Imaging apparatus and imaging method
US20080225045A1 (en) * 2007-03-12 2008-09-18 Conversion Works, Inc. Systems and methods for 2-d to 3-d image conversion using mask to model, or model to mask, conversion
US8274530B2 (en) 2007-03-12 2012-09-25 Conversion Works, Inc. Systems and methods for filling occluded information for 2-D to 3-D conversion
US20080228449A1 (en) * 2007-03-12 2008-09-18 Conversion Works, Inc. Systems and methods for 2-d to 3-d conversion using depth access segments to define an object
JP4337064B2 (en) * 2007-04-04 2009-09-30 ソニー株式会社 Information processing apparatus, information processing method, and program
JP5096776B2 (en) * 2007-04-04 2012-12-12 キヤノン株式会社 Image processing apparatus and image search method
NO327279B1 (en) * 2007-05-22 2009-06-02 Metaio Gmbh Camera position estimation device and method for augmented reality imaging
US8416981B2 (en) 2007-07-29 2013-04-09 Google Inc. System and method for displaying contextual supplemental content based on image content
DE602007003849D1 (en) * 2007-10-11 2010-01-28 Mvtec Software Gmbh System and method for 3D object recognition
US8059888B2 (en) * 2007-10-30 2011-11-15 Microsoft Corporation Semi-automatic plane extrusion for 3D modeling
US8862582B2 (en) * 2007-11-15 2014-10-14 At&T Intellectual Property I, L.P. System and method of organizing images
US8190604B2 (en) * 2008-04-03 2012-05-29 Microsoft Corporation User intention modeling for interactive image retrieval
US20100017033A1 (en) * 2008-07-18 2010-01-21 Remus Boca Robotic systems with user operable robot control terminals
JP5253066B2 (en) * 2008-09-24 2013-07-31 キヤノン株式会社 Position and orientation measurement apparatus and method
US8368689B2 (en) * 2008-09-25 2013-02-05 Siemens Product Lifecycle Management Software Inc. System, method, and computer program product for radial functions and distributions of three dimensional object models
US8559699B2 (en) 2008-10-10 2013-10-15 Roboticvisiontech Llc Methods and apparatus to facilitate operations in image based systems
US8446288B2 (en) * 2008-10-15 2013-05-21 Panasonic Corporation Light projection device
US8159327B2 (en) * 2008-11-13 2012-04-17 Visa International Service Association Device including authentication glyph
JP2010186288A (en) * 2009-02-12 2010-08-26 Seiko Epson Corp Image processing for changing predetermined texture characteristic amount of face image
US9740921B2 (en) 2009-02-26 2017-08-22 Tko Enterprises, Inc. Image processing sensor systems
US9277878B2 (en) * 2009-02-26 2016-03-08 Tko Enterprises, Inc. Image processing sensor systems
US9293017B2 (en) * 2009-02-26 2016-03-22 Tko Enterprises, Inc. Image processing sensor systems
WO2010127352A2 (en) 2009-05-01 2010-11-04 Hy-Ko Products Key blank identification system with groove scanning
US8634655B2 (en) 2009-05-01 2014-01-21 Hy-Ko Products Company Key blank identification system with bitting analysis
WO2010131371A1 (en) * 2009-05-12 2010-11-18 Toyota Jidosha Kabushiki Kaisha Object recognition method, object recognition apparatus, and autonomous mobile robot
US20100313141A1 (en) * 2009-06-03 2010-12-09 Tianli Yu System and Method for Learning User Genres and Styles and for Matching Products to User Preferences
US8553972B2 (en) * 2009-07-06 2013-10-08 Samsung Electronics Co., Ltd. Apparatus, method and computer-readable medium generating depth map
US20110025689A1 (en) * 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation
US8604796B2 (en) * 2009-10-08 2013-12-10 Precision Energy Services, Inc. Steerable magnetic dipole antenna for measurement-while-drilling applications
US9366780B2 (en) 2009-10-08 2016-06-14 Precision Energy Services, Inc. Steerable magnetic dipole antenna for measurement while drilling applications
JP2011090466A (en) * 2009-10-21 2011-05-06 Sony Corp Information processing apparatus, method, and program
KR20110070056A (en) * 2009-12-18 2011-06-24 한국전자통신연구원 Method and apparatus for easy and intuitive generation of user-customized 3d avatar with high-quality
US8570343B2 (en) * 2010-04-20 2013-10-29 Dassault Systemes Automatic generation of 3D models from packaged goods product images
US20110268365A1 (en) * 2010-04-30 2011-11-03 Acer Incorporated 3d hand posture recognition system and vision based hand posture recognition method thereof
EP2385483B1 (en) 2010-05-07 2012-11-21 MVTec Software GmbH Recognition and pose determination of 3D objects in 3D scenes using geometric point pair descriptors and the generalized Hough Transform
US10628666B2 (en) * 2010-06-08 2020-04-21 Styku, LLC Cloud server body scan data system
US10628729B2 (en) * 2010-06-08 2020-04-21 Styku, LLC System and method for body scanning and avatar creation
US20160088284A1 (en) * 2010-06-08 2016-03-24 Styku, Inc. Method and system for determining biometrics from body surface imaging technology
US11244223B2 (en) * 2010-06-08 2022-02-08 Iva Sareen Online garment design and collaboration system and method
US10702216B2 (en) * 2010-06-08 2020-07-07 Styku, LLC Method and system for body scanning and display of biometric data
US11640672B2 (en) * 2010-06-08 2023-05-02 Styku Llc Method and system for wireless ultra-low footprint body scanning
US8452721B2 (en) * 2010-06-15 2013-05-28 Nvidia Corporation Region of interest tracking for fluid simulation
US8928659B2 (en) * 2010-06-23 2015-01-06 Microsoft Corporation Telepresence systems with viewer perspective adjustment
US8416990B2 (en) 2010-08-17 2013-04-09 Microsoft Corporation Hierarchical video sub-volume search
US8933927B2 (en) * 2010-09-02 2015-01-13 Samsung Electronics Co., Ltd. Display system with image conversion mechanism and method of operation thereof
US9317533B2 (en) 2010-11-02 2016-04-19 Microsoft Technology Licensing, Inc. Adaptive image retrieval database
US8463045B2 (en) 2010-11-10 2013-06-11 Microsoft Corporation Hierarchical sparse representation for image retrieval
US8711210B2 (en) 2010-12-14 2014-04-29 Raytheon Company Facial recognition using a sphericity metric
WO2012082077A2 (en) * 2010-12-17 2012-06-21 Agency For Science, Technology And Research Pose-independent 3d face reconstruction from a sample 2d face image
EP3678035A1 (en) * 2010-12-21 2020-07-08 QUALCOMM Incorporated Computerized method and device for annotating at least one feature of an image of a view
US9952046B1 (en) 2011-02-15 2018-04-24 Guardvant, Inc. Cellular phone and personal protective equipment usage monitoring system
US9198575B1 (en) 2011-02-15 2015-12-01 Guardvant, Inc. System and method for determining a level of operator fatigue
US8988512B2 (en) * 2011-04-14 2015-03-24 Mediatek Inc. Method for adjusting playback of multimedia content according to detection result of user status and related apparatus thereof
KR101608253B1 (en) * 2011-08-09 2016-04-01 인텔 코포레이션 Image-based multi-view 3d face generation
EP3925676A1 (en) * 2011-08-18 2021-12-22 Pfaqutruma Research LLC Systems and methods of virtual world interaction
JP5143262B1 (en) * 2011-08-30 2013-02-13 株式会社東芝 3D image processing apparatus and 3D image processing method
KR102067367B1 (en) 2011-09-07 2020-02-11 라피스캔 시스템스, 인코포레이티드 X-ray inspection method that integrates manifest data with imaging/detection processing
KR20130063310A (en) * 2011-12-06 2013-06-14 엘지전자 주식회사 Mobile terminal and control method for mobile terminal
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
US10013787B2 (en) 2011-12-12 2018-07-03 Faceshift Ag Method for facial animation
WO2013145496A1 (en) * 2012-03-27 2013-10-03 日本電気株式会社 Information processing device, information processing method, and program
US9286715B2 (en) 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on
US9378584B2 (en) 2012-05-23 2016-06-28 Glasses.Com Inc. Systems and methods for rendering virtual try-on products
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
US9141848B2 (en) * 2012-09-04 2015-09-22 Intel Corporation Automatic media distribution
US10678259B1 (en) * 2012-09-13 2020-06-09 Waymo Llc Use of a reference image to detect a road obstacle
WO2014043755A1 (en) * 2012-09-19 2014-03-27 Commonwealth Scientific And Industrial Research Organisation System and method of generating a non-rigid model
US9020982B2 (en) 2012-10-15 2015-04-28 Qualcomm Incorporated Detection of planar targets under steep angles
US9743002B2 (en) * 2012-11-19 2017-08-22 Magna Electronics Inc. Vehicle vision system with enhanced display functions
CN104782120B (en) * 2012-12-17 2018-08-10 英特尔公司 Incarnation animation method, computing device and storage medium
US9990373B2 (en) * 2013-02-06 2018-06-05 John A. Fortkort Creation and geospatial placement of avatars based on real-world interactions
US9381426B1 (en) * 2013-03-15 2016-07-05 University Of Central Florida Research Foundation, Inc. Semi-automated digital puppetry control
US9990004B2 (en) * 2013-04-02 2018-06-05 Samsung Display Co., Ltd. Optical detection of bending motions of a flexible display
US9449392B2 (en) * 2013-06-05 2016-09-20 Samsung Electronics Co., Ltd. Estimator training method and pose estimating method using depth image
US9839761B1 (en) 2013-07-04 2017-12-12 Hal Rucker Airflow control for pressurized air delivery
US9355123B2 (en) 2013-07-19 2016-05-31 Nant Holdings Ip, Llc Fast recognition algorithm processing, systems and methods
KR101509934B1 (en) * 2013-10-10 2015-04-16 재단법인대구경북과학기술원 Device of a front head pose guidance, and method thereof
US9613449B2 (en) 2013-10-18 2017-04-04 Nvidia Corporation Method and apparatus for simulating stiff stacks
US9589383B2 (en) 2013-10-18 2017-03-07 Nvidia Corporation Unified position based solver for visual effects
US10013767B2 (en) * 2013-11-01 2018-07-03 The Research Foundation For The State University Of New York Method for measuring the interior three-dimensional movement, stress and strain of an object
US9466009B2 (en) 2013-12-09 2016-10-11 Nant Holdings Ip. Llc Feature density object classification, systems and methods
US9501498B2 (en) * 2014-02-14 2016-11-22 Nant Holdings Ip, Llc Object ingestion through canonical shapes, systems and methods
MY188125A (en) * 2014-09-15 2021-11-22 Temasek Life Sciences Laboratory Image recognition system and method
US9710699B2 (en) * 2014-10-31 2017-07-18 Irvine Sensors Corp. Three dimensional recognition from unscripted sources technology (TRUST)
RU2582852C1 (en) * 2015-01-21 2016-04-27 Общество с ограниченной ответственностью "Вокорд СофтЛаб" (ООО "Вокорд СофтЛаб") Automatic construction of 3d model of face based on series of 2d images or movie
KR102146398B1 (en) 2015-07-14 2020-08-20 삼성전자주식회사 Three dimensional content producing apparatus and three dimensional content producing method thereof
US9818041B2 (en) 2015-08-03 2017-11-14 Hy-Ko Products Company High security key scanning system
US20170103563A1 (en) * 2015-10-07 2017-04-13 Victor Erukhimov Method of creating an animated realistic 3d model of a person
KR101755248B1 (en) * 2016-01-27 2017-07-07 (주)에이아이퍼스트 Method and system of generating 3D model and mobile device for the same
US9868212B1 (en) * 2016-02-18 2018-01-16 X Development Llc Methods and apparatus for determining the pose of an object based on point cloud data
EP3420563A4 (en) 2016-02-22 2020-03-11 Rapiscan Systems, Inc. Systems and methods for detecting threats and contraband in cargo
US20170280130A1 (en) * 2016-03-25 2017-09-28 Microsoft Technology Licensing, Llc 2d video analysis for 3d modeling
JP6744747B2 (en) * 2016-04-01 2020-08-19 キヤノン株式会社 Information processing apparatus and control method thereof
WO2017209777A1 (en) * 2016-06-03 2017-12-07 Oculus Vr, Llc Face and eye tracking and facial animation using facial sensors within a head-mounted display
US10474932B2 (en) * 2016-09-01 2019-11-12 Uptake Technologies, Inc. Detection of anomalies in multivariate data
US10818064B2 (en) * 2016-09-21 2020-10-27 Intel Corporation Estimating accurate face shape and texture from an image
FR3060170B1 (en) * 2016-12-14 2019-05-24 Smart Me Up OBJECT RECOGNITION SYSTEM BASED ON AN ADAPTIVE 3D GENERIC MODEL
US20180357819A1 (en) * 2017-06-13 2018-12-13 Fotonation Limited Method for generating a set of annotated images
JP7003455B2 (en) * 2017-06-15 2022-01-20 オムロン株式会社 Template creation device, object recognition processing device, template creation method and program
US11948057B2 (en) * 2017-06-22 2024-04-02 Iva Sareen Online garment design and collaboration system and method
EP3420903B1 (en) * 2017-06-29 2019-10-23 Siemens Healthcare GmbH Visualisation of at least one indicator
US11069121B2 (en) * 2017-08-31 2021-07-20 Sony Group Corporation Methods, devices and computer program products for creating textured 3D images
US10870056B2 (en) * 2017-11-01 2020-12-22 Sony Interactive Entertainment Inc. Emoji-based communications derived from facial features during game play
US20190143221A1 (en) * 2017-11-15 2019-05-16 Sony Interactive Entertainment America Llc Generation and customization of personalized avatars
US10783346B2 (en) * 2017-12-11 2020-09-22 Invensense, Inc. Enhancing quality of a fingerprint image
CN109918976B (en) * 2017-12-13 2021-04-02 航天信息股份有限公司 Portrait comparison algorithm fusion method and device thereof
US11099708B2 (en) 2017-12-15 2021-08-24 Hewlett-Packard Development Company, L.P. Patterns for locations on three-dimensional objects
KR20190101835A (en) * 2018-02-23 2019-09-02 삼성전자주식회사 Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof
US20200410210A1 (en) * 2018-03-12 2020-12-31 Carnegie Mellon University Pose invariant face recognition
CN108549848B (en) * 2018-03-27 2022-02-25 百度在线网络技术(北京)有限公司 Method and apparatus for outputting information
CN109118569B (en) * 2018-08-16 2023-03-10 Oppo广东移动通信有限公司 Rendering method and device based on three-dimensional model
KR102615196B1 (en) 2018-08-21 2023-12-18 삼성전자주식회사 Method and device to train object detection model
US10621788B1 (en) * 2018-09-25 2020-04-14 Sony Corporation Reconstructing three-dimensional (3D) human body model based on depth points-to-3D human body model surface distance
US10489683B1 (en) * 2018-12-17 2019-11-26 Bodygram, Inc. Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks
CN109816704B (en) * 2019-01-28 2021-08-03 北京百度网讯科技有限公司 Method and device for acquiring three-dimensional information of object
US11386636B2 (en) 2019-04-04 2022-07-12 Datalogic Usa, Inc. Image preprocessing for optical character recognition
EP3731132A1 (en) * 2019-04-25 2020-10-28 XRSpace CO., LTD. Method of generating 3d facial model for an avatar and related device
US10992619B2 (en) * 2019-04-30 2021-04-27 Snap Inc. Messaging system with avatar generation
US10803301B1 (en) * 2019-08-02 2020-10-13 Capital One Services, Llc Detecting fraud in image recognition systems
WO2021084662A1 (en) * 2019-10-30 2021-05-06 日本電気株式会社 Checking assistance device, checking assistance method, and computer-readable recording medium
CN114730459B (en) 2019-11-28 2024-02-13 三菱电机株式会社 Workpiece image retrieval device and workpiece image retrieval method
WO2021237169A1 (en) * 2020-05-21 2021-11-25 Sareen Iva Online garment design and collaboration and virtual try-on system and method
US11810256B2 (en) * 2021-11-11 2023-11-07 Qualcomm Incorporated Image modification techniques
JPWO2023175648A1 (en) * 2022-03-14 2023-09-21

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1039417A1 (en) * 1999-03-19 2000-09-27 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Method and device for the processing of images based on morphable models
WO2001063560A1 (en) * 2000-02-22 2001-08-30 Digimask Limited 3d game avatar using physical characteristics
EP1143375A2 (en) * 2000-04-03 2001-10-10 Nec Corporation Device, method and record medium for image comparison
US20020012454A1 (en) 2000-03-09 2002-01-31 Zicheng Liu Rapid computer modeling of faces for animation
US6434278B1 (en) 1997-09-23 2002-08-13 Enroute, Inc. Generating three-dimensional models of objects defined by two-dimensional image data
US6532011B1 (en) 1998-10-02 2003-03-11 Telecom Italia Lab S.P.A. Method of creating 3-D facial models starting from face images

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5159361A (en) * 1989-03-09 1992-10-27 Par Technology Corporation Method and apparatus for obtaining the topography of an object
US5825936A (en) * 1994-09-22 1998-10-20 University Of South Florida Image analyzing device using adaptive criteria
US5761638A (en) * 1995-03-17 1998-06-02 Us West Inc Telephone network apparatus and method using echo delay and attenuation
US5742291A (en) 1995-05-09 1998-04-21 Synthonics Incorporated Method and apparatus for creation of three-dimensional wire frames
US5844573A (en) 1995-06-07 1998-12-01 Massachusetts Institute Of Technology Image compression by pointwise prototype correspondence using shape and texture information
US6226418B1 (en) * 1997-11-07 2001-05-01 Washington University Rapid convolution based large deformation image matching via landmark and volume imagery
US5898438A (en) * 1996-11-12 1999-04-27 Ford Global Technologies, Inc. Texture mapping of photographic images to CAD surfaces
US6094199A (en) * 1997-05-23 2000-07-25 University Of Washington 3D objects morphing employing skeletons indicating symmetric differences to define intermediate objects used in morphing
US5990901A (en) 1997-06-27 1999-11-23 Microsoft Corporation Model based image editing and correction
EP0907145A3 (en) * 1997-10-03 2003-03-26 Nippon Telegraph and Telephone Corporation Method and equipment for extracting image features from image sequence
US6249600B1 (en) 1997-11-07 2001-06-19 The Trustees Of Columbia University In The City Of New York System and method for generation of a three-dimensional solid model
US6002782A (en) * 1997-11-12 1999-12-14 Unisys Corporation System and method for recognizing a 3-D object by generating a 2-D image of the object from a transformed 3-D model
AU1613599A (en) * 1997-12-01 1999-06-16 Arsev H. Eraslan Three-dimensional face identification system
US6362833B2 (en) * 1998-04-08 2002-03-26 Intel Corporation Method and apparatus for progressively constructing a series of morphs between two-dimensional or three-dimensional models
JP3467725B2 (en) * 1998-06-02 2003-11-17 富士通株式会社 Image shadow removal method, image processing apparatus, and recording medium
US6366282B1 (en) * 1998-09-08 2002-04-02 Intel Corporation Method and apparatus for morphing objects by subdividing and mapping portions of the objects
JP4025442B2 (en) 1998-12-01 2007-12-19 富士通株式会社 3D model conversion apparatus and method
US6296317B1 (en) * 1999-10-29 2001-10-02 Carnegie Mellon University Vision-based motion sensor for mining machine control
JP4341135B2 (en) * 2000-03-10 2009-10-07 コニカミノルタホールディングス株式会社 Object recognition device
US6956569B1 (en) * 2000-03-30 2005-10-18 Nec Corporation Method for matching a two dimensional image to one of a plurality of three dimensional candidate models contained in a database
JP4387552B2 (en) * 2000-04-27 2009-12-16 富士通株式会社 Image verification processing system
US6853745B1 (en) * 2000-11-03 2005-02-08 Nec Laboratories America, Inc. Lambertian reflectance and linear subspaces
US6975750B2 (en) * 2000-12-01 2005-12-13 Microsoft Corp. System and method for face recognition using synthesized training images
GB2383915B (en) * 2001-11-23 2005-09-28 Canon Kk Method and apparatus for generating models of individuals
US7221809B2 (en) 2001-12-17 2007-05-22 Genex Technologies, Inc. Face recognition system and method
WO2003073359A2 (en) * 2002-02-26 2003-09-04 Canesta, Inc. Method and apparatus for recognizing objects
FR2849241B1 (en) * 2002-12-20 2005-06-24 Biospace Instr RADIOGRAPHIC IMAGING METHOD AND DEVICE
WO2004088590A1 (en) 2003-03-28 2004-10-14 Fujitsu Limited Imager and personal idenfification system
US7756325B2 (en) * 2005-06-20 2010-07-13 University Of Basel Estimating 3D shape and texture of a 3D object based on a 2D image of the 3D object

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6434278B1 (en) 1997-09-23 2002-08-13 Enroute, Inc. Generating three-dimensional models of objects defined by two-dimensional image data
US6532011B1 (en) 1998-10-02 2003-03-11 Telecom Italia Lab S.P.A. Method of creating 3-D facial models starting from face images
EP1039417A1 (en) * 1999-03-19 2000-09-27 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Method and device for the processing of images based on morphable models
US6556196B1 (en) 1999-03-19 2003-04-29 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. Method and apparatus for the processing of images
WO2001063560A1 (en) * 2000-02-22 2001-08-30 Digimask Limited 3d game avatar using physical characteristics
US20020012454A1 (en) 2000-03-09 2002-01-31 Zicheng Liu Rapid computer modeling of faces for animation
EP1143375A2 (en) * 2000-04-03 2001-10-10 Nec Corporation Device, method and record medium for image comparison

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ERIKSSON A ET AL: "Towards 3-dimensional face recognition", AFRICON, 1999 IEEE CAPE TOWN, SOUTH AFRICA 28 SEPT.-1 OCT. 1999, PISCATAWAY, NJ, USA,IEEE, US, 28 September 1999 (1999-09-28), pages 401 - 406, XP010367205, ISBN: 0-7803-5546-6 *
MUN WAI LEE ET AL: "3D deformable face model for pose determination and face synthesis", IMAGE ANALYSIS AND PROCESSING, 1999. PROCEEDINGS. INTERNATIONAL CONFERENCE ON VENICE, ITALY 27-29 SEPT. 1999, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 27 September 1999 (1999-09-27), pages 260 - 265, XP010354170, ISBN: 0-7695-0040-4 *
PIGHIN F ET AL: "Synthesizing realistic facial expressions from photographs", COMPUTER GRAPHICS. SIGGRAPH 98 CONFERENCE PROCEEDINGS. ORLANDO, FL, JULY 19- - 24, 1998, COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH, NEW YORK, NY : ACM, US, 19 July 1998 (1998-07-19), pages 75 - 84, XP002188569, ISBN: 0-89791-999-8 *
REIN-LIEN HSU ET AL: "Face modeling for recognition", PROCEEDINGS 2001 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. ICIP 2001. THESSALONIKI, GREECE, OCT. 7 - 10, 2001, INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, NEW YORK, NY : IEEE, US, vol. VOL. 1 OF 3. CONF. 8, 7 October 2001 (2001-10-07), pages 693 - 696, XP010563858, ISBN: 0-7803-6725-1 *


Also Published As

Publication number Publication date
US7643685B2 (en) 2010-01-05
US20040190775A1 (en) 2004-09-30
EP1599830A1 (en) 2005-11-30
JP2006520055A (en) 2006-08-31
JP2006520054A (en) 2006-08-31
US20100295854A1 (en) 2010-11-25
US7643683B2 (en) 2010-01-05
JP2006522411A (en) 2006-09-28
US20040175039A1 (en) 2004-09-09
US20040175041A1 (en) 2004-09-09
EP1599829A1 (en) 2005-11-30
EP1599828A1 (en) 2005-11-30
WO2004081855A1 (en) 2004-09-23
WO2004081854A1 (en) 2004-09-23
US7853085B2 (en) 2010-12-14

Similar Documents

Publication Publication Date Title
US7643685B2 (en) Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
Blanz et al. Face identification across different poses and illuminations with a 3d morphable model
Roth et al. Adaptive 3D face reconstruction from unconstrained photo collections
Gu et al. 3D alignment of face in a single image
Patel et al. 3d morphable face models revisited
Jeni et al. Dense 3D face alignment from 2D videos in real-time
CN104937635B (en) More hypothesis target tracking devices based on model
Cai et al. 3d deformable face tracking with a commodity depth camera
US7221809B2 (en) Face recognition system and method
Liu et al. Pose-robust face recognition using geometry assisted probabilistic modeling
JP4466951B2 (en) Alignment of 3D face shape
Dimitrijevic et al. Accurate face models from uncalibrated and ill-lit video sequences
JP2011039869A (en) Face image processing apparatus and computer program
Ye et al. 3d morphable face model for face animation
JP4379459B2 (en) Object collation method, object collation apparatus, and recording medium recording the program
CN110348359B (en) Hand gesture tracking method, device and system
Kang et al. Appearance-based structure from motion using linear classes of 3-d models
JP5639499B2 (en) Face image processing device
Paterson et al. 3D head tracking using non-linear optimization
Chen et al. Extending 3D Lucas–Kanade tracking with adaptive templates for head pose estimation
Chen et al. 2d face alignment and pose estimation based on 3d facial models
Romdhani et al. On utilising template and feature-based correspondence in multi-view appearance models
Ibikunle et al. Face recognition using line edge mapping approach
Meher et al. A survey and classification of face alignment methods based on face models
Aleksandrova et al. Approach for Creating a 3D Model of a Face from its 2D Image

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006509130

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2004717974

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2004717974

Country of ref document: EP