US20120147167A1 - Facial recognition using a sphericity metric - Google Patents

Facial recognition using a sphericity metric

Info

Publication number
US20120147167A1
Authority
US
United States
Prior art keywords
subject
data set
processor
face
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/967,641
Other versions
US8711210B2 (en)
Inventor
Steven J. Manson
Tara L. Trumbull
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raytheon Co
Original Assignee
Raytheon Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Raytheon Co
Priority to US12/967,641 (granted as US8711210B2)
Assigned to RAYTHEON COMPANY. Assignors: MANSON, STEVEN J., TRUMBULL, TARA L.
Priority to PCT/US2011/053093 (published as WO2012082210A1)
Publication of US20120147167A1
Application granted
Publication of US8711210B2
Legal status: Active
Expiration: Adjusted

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships


Abstract

A facial recognition system and a method for performing facial recognition are provided. The facial recognition system includes a memory configured to store a target data set identifying a plurality of predefined points on a face of a target and a processor. The processor may be configured to receive an arbitrary number of photographs including a face of a subject, each of the photographs being at an arbitrary angle and at an arbitrary distance from the subject, create a subject data set identifying the plurality of predefined points on the subject's face based upon the received photographs, and perform facial recognition on the subject data set by comparing the subject data set to the target data set.

Description

    TECHNICAL FIELD
  • Embodiments of the subject matter described herein relate generally to facial recognition systems and methods, and more particularly to using multiple two-dimensional images to composite three-dimensional point features, and to using a sphericity metric to compare a subject's face to a target's face.
  • BACKGROUND
  • Current facial recognition techniques require large amounts of processing power to compare a subject's face to a target's face. Furthermore, current facial recognition techniques are vulnerable, for example, to changes in lighting and differences in angles and distances between a subject and photographs of targets the subject is being compared to. Further still, current facial recognition techniques require large amounts of data to be stored for each target.
  • Accordingly, there is a need for improved systems and methods for performing facial recognition. Other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
  • BRIEF SUMMARY
  • In accordance with one embodiment, a facial recognition system is provided. The facial recognition system includes a memory configured to store a target data set identifying a plurality of predefined points on a face of a target and a processor. The processor may be configured to receive an arbitrary number of photographs including a face of a subject, each of the photographs being at an arbitrary angle and at an arbitrary distance from the subject, create a subject data set identifying the plurality of predefined points on the subject's face based upon the received photographs, and perform facial recognition on the subject data set by comparing the subject data set to the target data set.
  • A method for performing facial recognition is also provided. The method includes receiving, by a processor, photographic data from an arbitrary number of photographs including a face of a subject, each of the photographs being at an arbitrary angle and at an arbitrary distance from the subject, creating, by the processor, a subject data set identifying the plurality of predefined points on the subject's face based upon the received photographic data, and comparing, by the processor, the subject data set to a target data set stored in a memory.
  • In accordance with another embodiment, an apparatus is provided. The apparatus may include a camera configured to take at least one photograph of a target, the photograph being taken at any arbitrary angle relative to the subject and at any arbitrary distance from the subject, a memory configured to store the at least one photograph and further configured to store a database including facial recognition data for at least one target identifying a predetermined number of points on the target's face and a processor. The processor may be configured to create a subject data set by analyzing the at least one photograph to determine the location of the predefined points on the subject's face, and compare the subject data set to the target data set stored in the memory.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A more complete understanding of the embodiments may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
  • FIG. 1 illustrates a facial recognition system in accordance with an embodiment;
  • FIG. 2 illustrates a method of performing facial recognition in accordance with an embodiment;
  • FIG. 3 illustrates a subject and an exemplary set of points which may be identified on the subject's face;
  • FIG. 4 illustrates a series of photographs of a subject before and after the points are identified on the subject's face;
  • FIG. 5 illustrates a three-dimensional sphericity method useful in understanding an embodiment; and
  • FIG. 6 illustrates another method for performing facial recognition in accordance with an embodiment.
  • DETAILED DESCRIPTION OF THE DRAWINGS
  • The following discussion generally relates to methods, systems, and apparatus for the use of a three-dimensional ensemble sphericity measure to perform facial recognition wherein subjects (photographed as opportunities arise) are compared to an enrolled database of targets (e.g. known bad guys). In that regard, the following detailed description is merely illustrative in nature and is not intended to limit the embodiments or the application and uses thereof. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary, or the following detailed description.
  • FIG. 1 illustrates a facial recognition system 100 in accordance with an exemplary embodiment. The facial recognition system 100 includes a processor 110 and a memory 120. The processor 110 may be a central processing unit (“CPU”), a graphics processing unit (“GPU”), an application specific integrated circuit (“ASIC”), a field programmable gate array (“FPGA”), a micro-controller, or any other logic device. The memory 120 may be any type of memory. For example, the memory 120 may be a computer-readable memory.
  • The memory 120 includes a database 130. The database 130 may include a data set for multiple targets that the facial recognition system is trying to identify. In other embodiments the database 130 may be separate from the facial recognition system 100 and may be accessed by the facial recognition system, for example, through a network connection.
  • The facial recognition system 100 may also include a camera system 150. The camera system 150 may have N number of cameras 140(1) to 140(N) in any combination of still cameras and video cameras which may be placed at any arbitrary angle and distance respective to each other and to a subject. The camera and video cameras within the camera system 150 may be in communication with each other and/or in communication with the facial recognition system 100. In another embodiment, the facial recognition system 100 may also include only a single camera 140(1).
  • In another embodiment a camera 140(1) or multiple cameras 140(1) to 140(N) may be separate from the facial recognition system. In this embodiment, photographic data from the camera 140(1) or multiple cameras 140(1) to 140(N) may be input to the facial recognition system 100. For example, the photographic data may be transmitted to the facial recognition system via a network connection (not shown), input to the facial recognition system using any computer-readable memory or scanned in using a scanner in communication with the facial recognition system 100.
  • The operation of the facial recognition system 100 will be described in further detail with reference to FIGS. 2-6.
  • FIG. 2 illustrates a method 200 for performing facial recognition in accordance with an embodiment. The method 200 includes receiving M number of photographs or sets of photographic data of the subject. (Step 210). As discussed above, the photographs may be taken by a still camera or may be captured from a video camera. The number of photographs M is preferably greater than or equal to two; however, if only one photograph is available, the facial recognition process can still be accomplished, as discussed in further detail below. When multiple photographs of the subject are available, a more accurate data set can be extracted from the photographs. As discussed above, the photograph or photographs can be taken at any angle and under any lighting conditions.
  • FIG. 3 illustrates a photograph 300 taken of a subject 310. The photograph 300 includes a series of exemplary points 320 which may be identified on the subject. For example, the points may include, but are not limited to, points related to the chin, lips, eyes, ears, nose, nostrils, eyebrows, teeth, forehead, cheeks, neck, skin, and hairline.
  • Some of the predetermined points 320 may be fixed relative to other points. For example, a position of a tooth on the upper jaw of the subject and a corner of an eye are relatively fixed to each other. Some of the predetermined points may also have a variable position. For example, the relative position of the lower jaw with respect to an eyebrow may depend upon the subject's facial expression. In one embodiment, the points which are fixed relative to other points may be assigned a higher weight than points which are more variable relative to other points. Points which are assigned a higher weight may, for example, be given a greater consideration during the facial recognition process, as discussed in greater detail below.
  • The points which may be identified on the photograph will depend upon the angle of the photograph. For example, a photograph depicting a profile of the left side of a subject's face would not include any identifiable points corresponding to the right side of the subject's face. Furthermore, objects, such as scarves, glasses, hats, or other head coverings may obscure portions of the subject's face and would affect which data points are available for extraction.
  • Returning to FIG. 2, each of the M photographs received by the processor 110 are then analyzed to identify predetermined points on the subject's face. (Step 220). When multiple photographs are used, three-dimensional data can be extracted from the photographs, as described in further detail below.
  • FIG. 4 illustrates a series of photographs 400 of a subject before processing, and the series of photographs 410 after the predefined points were identified by the processor 110. The processor 110 analyzes the photographs 400 and generates a single data set associated with the subject using a three-dimensional compositing process. The single data set includes three-dimensional data points 420 at a plurality of predetermined points on the subject's face, as illustrated in the photograph series 410. As discussed above, the points 420 illustrated in the photographic series 410 are merely at exemplary locations; other locations and a varying number of points on the subject's face may also be used. While the photographic series 410 includes five photographs of the subject, any arbitrary number of photographs may be used. As seen in FIG. 4, the photographs may be taken at any arbitrary angle relative to the face of the subject 410. Further, as seen in FIG. 4, each of the photographs 400 may be taken at any arbitrary distance relative to the subject 410.
  • In one embodiment the processor 110 may use, for example, a differential genetic algorithm to perform the three-dimensional compositing process to generate the single set of data points 420, based upon the photographs 400 input to the system. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.
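  • As a purely illustrative sketch of how such a camera-parameter search might be set up (the patent discloses no implementation; the pinhole projection model, the parameter bounds, and all function names below are assumptions), SciPy's differential evolution optimizer can fit one candidate camera per photograph:

```python
import numpy as np
from scipy.optimize import differential_evolution

def rotation(az, el, roll):
    """Compose azimuth (yaw), elevation (pitch), and roll into one matrix."""
    ca, sa = np.cos(az), np.sin(az)
    ce, se = np.cos(el), np.sin(el)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])   # azimuth
    Rx = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])   # elevation
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    return Rz @ Rx @ Ry

def project(params, pts3d):
    """Pinhole projection of 3-D face points under candidate camera parameters."""
    az, el, roll, rng, ifov, x_off, y_off = params
    cam = (rotation(az, el, roll) @ pts3d.T).T
    depth = cam[:, 2] + rng                 # push the face out along the optical axis
    px = cam[:, 0] / (depth * ifov) + x_off
    py = cam[:, 1] / (depth * ifov) + y_off
    return np.column_stack([px, py])

def fit_camera(pixels2d, canonical3d):
    """Search camera parameters minimizing aggregate distance between the
    observed pixel points and the projected canonical face points."""
    def cost(params):
        return np.sum(np.linalg.norm(project(params, canonical3d) - pixels2d, axis=1))
    bounds = [(-np.pi, np.pi), (-np.pi / 2, np.pi / 2), (-np.pi, np.pi),
              (2.0, 100.0), (1e-4, 1e-2), (-500.0, 500.0), (-500.0, 500.0)]
    result = differential_evolution(cost, bounds, seed=0)
    return result.x, result.fun             # best parameters, residual error
```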
  • The following step makes reference to a three-dimensional canonical facial point data set. The canonical facial point data set is a priori data wherein each predetermined facial point is assigned a nominal location in three-dimensional space that corresponds to a typical arrangement of such points on an average human face. This canonical data may be used to convey to the algorithm the weight set for the facial data points as well. For example, a point on the subject's chin may have a low weight due to the variability introduced by the fact that the picture may depict the subject's mouth either open or closed. The canonical facial point data should include any point features used to perform facial recognition as implemented by any variation of the subject method, so long as those features have a reasonably consistent arrangement from face to face (as do feature points related specifically to the eyes, ears, mouth, nose, etc.) and are not highly variable with respect to location (as are features such as moles, dimples, and so forth). This canonical facial point data can also be used to compute an approximate camera position given an individual 2-D point feature set, as described in further detail below.
  • FIG. 5 illustrates the principles of the three-dimensional compositing process 500. The processor 110 first extracts pixel coordinates for the predefined points on the subject's face from the series of photographs 400. (Step 510). The processor 110 may then determine candidate camera parameters for each photograph of the subject that minimize an aggregate distance between input points (i.e., the extracted pixel coordinates) and the three-dimensional canonical facial point data. (Step 520). The camera parameters may include, but are not limited to, estimated distance between the camera and the subject, azimuth, elevation, range, roll, instantaneous field of view (“IFOV”), x-offset, and y-offset. The processor 110 may then use triangulation to determine the closest intersection points of camera rays corresponding to image points to create a first rough three-dimensional data set corresponding to the predefined points on the subject's face. (Step 530). In other words, the processor 110 triangulates an estimated position for each of the predefined points by comparing the position of the pixels extracted from each photo for the respective predefined point and the camera parameters for the respective camera. For example, the processor may determine a line from each camera through the pixel where the predefined point is located. The processor may then, based upon the lines, the location of the respective predefined point along each line, and the parameters for each camera, triangulate a rough three-dimensional position for the predefined point.
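  • The closest-intersection step can be illustrated with a standard least-squares ray formulation (an assumed formulation; the patent does not state its triangulation math). The point minimizing the summed squared distance to all camera rays satisfies a small linear system:

```python
import numpy as np

def triangulate_point(origins, directions):
    """Least-squares 'closest intersection' of a bundle of camera rays.

    origins: (M, 3) camera centers; directions: (M, 3) ray directions, one
    ray per photograph for a single predefined facial point. Solves
    sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i for the point p
    minimizing the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins), np.asarray(directions)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)         # rough 3-D estimate for this face point
```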
  • The processor 110 then refines the rough three-dimensional data set to create the subject data set 420. (Step 540). The processor 110 may implement a genetic algorithm to iteratively 1) determine camera parameters and three-dimensional point locations simultaneously; 2) determine 3-D point locations that minimize aggregate image plane errors for all camera views; and 3) determine camera parameters that minimize aggregate distance between input image points and the derived three-dimensional face point data (i.e., the subject data set 420). For example, the 3-D canonical facial point data extracted in step 510 may be scaled, rotated, translated, and refigured with regard to range parallax until an appropriate minimal error is found in the aggregate Cartesian distance between the canonical set and the photo data set. The scaling and other parameters are then held as the first approximation of the camera position that produced the 2-D photo. As a matter of practice, what has been referred to as the camera position also includes camera parameters such as field of view and camera roll and view angles.
  • The points 420 may be saved in the memory 120. The points 420 may be created and stored relative to any coordinate system. For example, a face-derived coordinate system may be used. The face-derived coordinate system may be based upon, for example, an average face. In one embodiment, the axes in the face-derived coordinate system may be centered on one of the predefined points or an expected position of one of the predefined points on an average face. For example, the face-derived coordinate system may hold the X-direction as that between the two inner eye corners, with the positive X in the direction of the left eye, the Y-direction as the component of the vector from the bridge of the nose to the center of the upper lip that is orthogonal to the X-direction, and with the positive Y-direction in the direction of the forehead. The Z-direction may be computed by taking the cross-product of the X and Y direction vectors, such that the positive Z-direction points in the general direction of the nose. The origin is defined to be at the tip of the nose, and the scale factor is such that the distance between the two inner eye points is defined to be two units.
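  • A minimal sketch of constructing this face-derived frame from the landmarks named above follows; the argument names are hypothetical placeholders, and the sign conventions are taken from the description:

```python
import numpy as np

def face_frame(left_eye_inner, right_eye_inner, nose_bridge, upper_lip, nose_tip):
    """Return a function mapping world points into the face-derived frame.

    +X runs between the inner eye corners toward the left eye, +Y toward
    the forehead, +Z out through the nose; origin at the nose tip; the
    inner-eye distance is scaled to two units, as described above.
    """
    x = left_eye_inner - right_eye_inner
    x = x / np.linalg.norm(x)                     # +X toward the left eye
    v = upper_lip - nose_bridge                   # points roughly down the face
    y = -(v - np.dot(v, x) * x)                   # orthogonal component, flipped toward forehead
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                            # +Z in the general direction of the nose
    R = np.stack([x, y, z])                       # rows are the face-frame axes
    scale = 2.0 / np.linalg.norm(left_eye_inner - right_eye_inner)
    return lambda p: scale * (R @ (p - nose_tip))
```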
  • Other types of coordinate systems may be used. For example, an absolute Cartesian system, a cylindrical coordinate system, a spherical coordinate system or a camera oriented coordinate system can be used.
  • Further, a database of targets may be built using this technique. The memory 120 only needs to store a relatively small amount of data for each subject or target, since only a series of three-dimensional points is needed to identify a subject or target. In one embodiment, for example, up to thirty-nine points may be identified on each target. Accordingly, the memory 120 would only need to store thirty-nine points, at relatively low precision (e.g., three significant figures in the aforementioned face-derived coordinate system), to identify any target. Accordingly, one benefit of the claimed embodiments is, for example, that a large number of targets can be stored in a small amount of memory.
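  • To make the storage claim concrete, a rough back-of-envelope calculation (the two-byte-per-coordinate encoding is an illustrative assumption, not something the patent specifies):

```python
# Hypothetical encoding: 39 landmarks x 3 coordinates, each stored as a
# half-precision float (~3 significant figures).
points_per_target = 39
coords_per_point = 3
bytes_per_coord = 2
bytes_per_target = points_per_target * coords_per_point * bytes_per_coord
print(bytes_per_target)                   # 234 bytes per target
print(10**6 * bytes_per_target / 2**20)   # ~223 MiB for a million targets
```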
  • While the performance of the facial recognition system 100 improves when multiple photographs of the subject are available, the processor can also create a data set based upon a single photo. When only a single photograph of the subject is available, the processor 110 creates a two-dimensional data set and performs the facial recognition on the two-dimensional data set as described in further detail below. While this implementation by itself is limited to performing recognition only on images taken from very nearly the same angle, it can be coupled with the 3-D recognition system in the case where target data is available in 3-D, but only a single image of the subject is available. In this embodiment, the 3-D target data can be used in place of the canonical face, and the camera angle derived to reduce spatial errors. The 2-D sphericity measure can then be used as a recognition metric in comparing the 2-D subject points to the best fit of a given target. The inverse application (a single target photo and multiple subject photos) is also a possible implementation. Returning to FIG. 2, the processor 110 then compares the data points 420 of the subject to a target data set stored in the database 130 to determine if there is a match. (Step 230). The processor may implement, for example, a sphericity metric to perform the comparison.
  • FIGS. 6 and 7 illustrate an exemplary method 600 of a three-dimensional sphericity metric. The processor 110 determines (i.e., models or creates) a three-dimensional object 610 based upon an arbitrary combination of the data points 420 in the subject data set. (Step 710). The object 610 illustrated in FIG. 6 is a tetrahedron based upon four points in the data set; however, many different polyhedra with varying numbers of points may be used as the basis for the object 610. For example, an n-hedron or n-simplex may be used. A simplex is a generalization of the notion of a triangle or tetrahedron to arbitrary dimension; similarly, an n-hedron generalizes the notion of a polyhedron to an arbitrary number of dimensions. An n-simplex is an n-dimensional polytope which is the convex hull of its n+1 vertices. For example, a 2-simplex is a triangle, a 3-simplex is a tetrahedron, and a 4-simplex is a pentachoron. Generally, the object will have one more point than the number of dimensions. The object 610 preferably has at least two dimensions. In some embodiments the object may have four or more dimensions. For example, a skin color or a texture of the skin may be used as a fourth dimension. The processor then determines a perfect sphere 620 that fits within the object 610 and touches each side of the object 610. (Step 720).
  • The processor 110 then selects a target data set to compare the subject data set to. (Step 730). The target may be selected, for example, based upon an attribute of the subject as described in further detail below. The processor 110 selects the same points in the target data set to create a target object 630. (Step 740). For example, if the object 610 was created based upon an outer corner of a right eye, an inner corner of a left eye, a tip of a nose, and a lower point on the right ear of the subject, the same points on the targets face are selected to create the target object 630. The processor 110 then remaps the object 610 and the sphere therein such that the points in object 610 match the points in target object 630. (Step 750).
  • Remapping the object 610 compensates for differences between the subject data set and the target data set. For example, if the subject data set and the target data set are based upon photographs taken at different distances, the target data set and subject data set may have different coordinate locations based on their different scaling parameters; similarly, the data sets may differ in terms of translation or rotation. However, by remapping the object 610 to match the target object 630, as seen in FIG. 6, the differences based upon the photographic data used to create the subject data set and target data set are compensated for.
  • The processor 110 then determines a sphericity value for the mapped sphere 620 (i.e., if the original sphere is mapped to a non-similar simplex, it will become somewhat oblate; the sphericity metric evaluates how oblate the spheroid 620 has become due to the mapping operation). (Step 760). The more spherical the spheroid 620 is after the remapping process, the more likely it is that the subject will be identified as the target.
  • Sphericity is a metric that is used to determine whether two triangles, tetrahedra, or corresponding simplex solids in any dimensional space greater than three are geometrically similar.
  • When the objects 610 and 630 are tetrahedra, the sphericity of the resulting ellipsoid may be computed as:
  • $$S = \frac{\left(\det(g^{T}g)\right)^{1/n}}{\tfrac{1}{n}\,\operatorname{tr}(g^{T}g)}$$ where: $$B = \begin{bmatrix} x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \\ x_4 & y_4 & z_4 & 1 \end{bmatrix}$$ and: $$\begin{bmatrix} g \\ t \end{bmatrix} = \begin{bmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \\ t_{1} & t_{2} & t_{3} \end{bmatrix} = B^{-1} \begin{bmatrix} u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \\ u_3 & v_3 & w_3 \\ u_4 & v_4 & w_4 \end{bmatrix}$$
  • where $(x_n, y_n, z_n)$ represents a point on the subject's face, $(u_n, v_n, w_n)$ represents the corresponding point on the target's face, and $t_n$ represents a scaling factor.
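  • A direct transcription of the tetrahedral formula above into code might look as follows (a sketch assuming corresponding vertex orderings and a non-degenerate subject tetrahedron; NumPy handles the linear algebra):

```python
import numpy as np

def sphericity_3d(subject_pts, target_pts):
    """Sphericity of the mapped in-sphere for two corresponding tetrahedra.

    subject_pts, target_pts: (4, 3) arrays of corresponding vertices.
    Returns a value in (0, 1]; exactly 1 when the tetrahedra are similar,
    e.g. sphericity_3d(T, 2.5 * T) == 1.0 for any non-degenerate T.
    """
    B = np.hstack([subject_pts, np.ones((4, 1))])  # rows: [x_i, y_i, z_i, 1]
    gt = np.linalg.solve(B, target_pts)            # [g; t] = B^{-1} [u, v, w]
    g = gt[:3, :]                                  # linear part of the affine map
    G = g.T @ g
    n = 3
    return np.linalg.det(G) ** (1.0 / n) / (np.trace(G) / n)
```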
  • When only a single photograph of the subject is available, the sphericity measurement may be accomplished with triangles. The sphericity is then computed as:
  • $$\text{Sphericity} = \frac{2\sqrt{d_1 d_2}}{d_1 + d_2}$$
  • where $d_1$ and $d_2$ are the minor and major axes of the inscribed ellipse. In this instance, the orientation of the target dataset should be optimized (i.e., adjusted to match the orientation of the subject in the single photograph) and rendered in two dimensions in order to have target triangles to compare to.
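  • For the single-photograph case, the triangle sphericity can be computed from the singular values of the affine map's linear part, which give the inscribed ellipse's axes up to a common factor (an assumed but standard identity; the function below is a sketch, not the patented implementation):

```python
import numpy as np

def sphericity_2d(subject_tri, target_tri):
    """Triangle-to-triangle sphericity via the inscribed ellipse's axes.

    subject_tri, target_tri: (3, 2) arrays of corresponding vertices. The
    linear part g of the affine map carries the subject's inscribed circle
    to an ellipse whose axes are proportional to g's singular values, so
    the common factor cancels in the ratio below.
    """
    B = np.hstack([subject_tri, np.ones((3, 1))])   # homogeneous rows [x_i, y_i, 1]
    gt = np.linalg.solve(B, target_tri)             # rows 0-1: g, row 2: translation
    d = np.linalg.svd(gt[:2, :], compute_uv=False)  # ellipse axes (major, minor)
    return 2.0 * np.sqrt(d[0] * d[1]) / (d[0] + d[1])
```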
  • The processor 110 may perform the sphericity comparison between multiple different multi-point objects within the subject data set and a target data set. That is, the processor 110 may create multiple objects 610 and multiple target objects 630 based upon the same subject data set and target data set and perform the sphericity measurement thereon. The processor 110 may then calculate a mean, median, or sum sphericity of all of the spheroids 620 and declare a match if the mean, median, or sum sphericity is greater than a predetermined threshold. As discussed above, weighting factors may be applied to the sphericity to decrease the relative importance of points that are likely to vary most under different facial expressions. In one embodiment, for example, a weight assigned to an object 610 may be calculated as the product of the weights of the points that comprise it. In other embodiments, the weight of each object 610 may be equal to the weight of the point with the largest weight, the weight of the point with the smallest weight, or the mean, median, or sum of the weights of the points within the object 610.
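  • An ensemble version might be sketched as follows, reusing the hypothetical sphericity_3d function above; the 0.9 threshold and the exhaustive enumeration of 4-point combinations are illustrative assumptions (a practical system would likely sample combinations rather than enumerate all of them):

```python
import numpy as np
from itertools import combinations

def ensemble_sphericity(subject_pts, target_pts, weights, threshold=0.9):
    """Weighted-mean sphericity over many 4-point objects.

    subject_pts, target_pts: (N, 3) corresponding landmark arrays; weights:
    (N,) per-point weights. Each object's weight is the product of its
    points' weights; a match is declared when the weighted mean clears
    the (assumed) threshold.
    """
    subject_pts = np.asarray(subject_pts)
    target_pts = np.asarray(target_pts)
    weights = np.asarray(weights)
    scores, ws = [], []
    for idx in combinations(range(len(subject_pts)), 4):
        i = list(idx)
        try:
            s = sphericity_3d(subject_pts[i], target_pts[i])
        except np.linalg.LinAlgError:
            continue                        # skip degenerate (coplanar) tetrahedra
        scores.append(s)
        ws.append(np.prod(weights[i]))      # object weight = product of point weights
    score = np.average(scores, weights=ws)
    return score, score >= threshold
```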
  • The sphericity of the oblate spheroid 620 after the remapping may be compared to a predetermined threshold. If the sphericity of the spheroid 620 is greater than or equal to the predetermined threshold, the subject may be identified as the target. The predetermined threshold may vary depending upon how many false-positive matches one is willing to tolerate. For example, the predetermined threshold would be higher if a single false identification for every one hundred positive identifications is acceptable than if one false identification for every ten identifications is acceptable.
  • The sphericity comparison can also be performed between a subject data set and multiple target data sets within the database 130. In one embodiment, the sphericity comparison can be performed between the subject and each target in the database 130. In another embodiment, a search algorithm can be used to determine the more likely matches. The processor 110 may analyze the data within the subject data set and, based upon an attribute of the subject's face, determine which target to compare with. For example, the target database may be organized in a tree structure, organizing the targets based upon their facial attributes. For example, people with large noses could be in one branch whereas people with small noses could be in another branch of the tree. The processor 110 may analyze the data within the subject data set and, based upon an attribute of the subject's face, determine which branch of targets to compare with the subject.
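  • The tree-structured shortlist could be realized in many ways; as one assumed illustration (not the patent's method), a k-d tree over a small vector of facial attributes can prune the candidate set before the full sphericity comparison:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_attribute_index(target_attrs):
    """Index each enrolled target by a small attribute vector (e.g. nose
    length, inter-eye spacing) computed from its stored 3-D point set."""
    return cKDTree(np.asarray(target_attrs))

def shortlist(index, subject_attr, k=10):
    """Return indices of the k targets whose attributes best match the
    subject; the full sphericity comparison then runs only on these."""
    _, idx = index.query(np.asarray(subject_attr), k=k)
    return np.atleast_1d(idx)
```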
  • While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.

Claims (25)

1. A facial recognition system, comprising:
a memory configured to store a target data set identifying a plurality of predefined points on a face of a target; and
a processor configured to:
receive an arbitrary number of photographs including a face of a subject, each of the photographs taken from an arbitrary angle and at an arbitrary distance from the subject,
create a subject data set identifying the plurality of predefined points on the subject's face based upon the received photographs, and
perform facial recognition on the subject data set by comparing the subject data set to the target data set.
2. The facial recognition system of claim 1, wherein the processor, when performing the facial recognition, is further configured to:
determine a first object based upon at least three points within the subject data set corresponding to predefined points on the subject's face;
determine a circular object within the first object which touches each side of the first object;
remap the first object and the circular object therein such that the first object, corresponding to predefined points on the subject's face, matches a second object corresponding to the same points in the target data set; and
determine a sphericity of the remapped circular object.
3. The facial recognition system of claim 2, wherein the first object is a tetrahedron and the circular object is a sphere.
4. The facial recognition system of claim 2, wherein the processor is further configured to determine if the sphericity of the remapped circular object is greater than a predetermined threshold.
5. The facial recognition system of claim 2, wherein the processor is further configured to choose a target data set to compare with the subject data set based upon an attribute of the subject data set.
6. The facial recognition system of claim 2, wherein the processor is further configured to compare the subject data set with a plurality of target data sets.
7. The facial recognition system of claim 1, wherein the processor, when performing the facial recognition, is further configured to:
determine a plurality of unique polyhedra based upon a plurality of different points within the subject data set corresponding to predefined points on the subject's face;
determine a sphere within each of the polyhedra which touches each side of its respective polyhedron;
remap each polyhedron and the sphere therein such that each polyhedron, corresponding to predefined points on the subject's face, matches a polyhedron corresponding to the same points in the target data set; and
determine a mean, median, and/or sum sphericity of the plurality of remapped spheres.
8. A method for performing facial recognition, comprising:
receiving, by a processor, photographic data from an arbitrary number of photographs including a face of a subject, each of the photographs being at an arbitrary angle and at an arbitrary distance from the subject,
creating, by the processor, a subject data set identifying the plurality of predefined points on the subject's face based upon the received photographic data, and
comparing, by the processor, the subject data set to a target data set stored in a memory.
9. The method of claim 8, wherein the comparing further comprises:
determining, by the processor, a polyhedron based upon a plurality of points within the subject data set corresponding to predefined points on the subject's face;
determining, by the processor, a sphere within the polyhedron which touches each side of the polyhedron;
remapping, by the processor, the polyhedron and the sphere therein such that the polyhedron, corresponding to predefined points on the subject's face, matches a polyhedron corresponding to the same points in the target data set; and
determining, by the processor, a sphericity of the remapped sphere.
10. The method of claim 9, wherein the polyhedron is a tetrahedron.
11. The method of claim 9, further comprising determining, by the processor, if the sphericity of the remapped sphere is greater than a predetermined threshold.
12. The method of claim 9, further comprising choosing, by the processor, a target data set stored in the memory based upon an attribute of the subject data set.
13. The method of claim 9, further comprising comparing, by the processor, the subject data set with a plurality of target data sets stored in the memory.
14. The method of claim 8, wherein the comparing further comprises:
determining, by the processor, a plurality of unique polyhedra based upon a plurality of different points within the subject data set corresponding to predefined points on the subject's face;
determining, by the processor, a sphere within each of the polyhedra which touches each side of its respective polyhedron;
remapping, by the processor, each polyhedron and the sphere therein such that each polyhedron, corresponding to predefined points on the subject's face, matches a polyhedron corresponding to the same points in the target data set; and
determining, by the processor, a mean sphericity of the plurality of remapped spheres.
15. An apparatus, comprising:
a camera configured to take at least one photograph of a subject, the photograph being taken at any arbitrary angle relative to the subject and at any arbitrary distance from the subject;
a memory configured to store the at least one photograph and further configured to store a database including facial recognition data for at least one target identifying a predetermined number of points on the target's face; and
a processor configured to:
create a subject data set by analyzing the at least one photograph to determine the location of the predefined points on the subject's face, and
compare the subject data set to the target data set stored in the memory.
16. The apparatus of claim 15, wherein the processor, when comparing the subject data set to the target data set stored in the memory, is further configured to:
determine a polyhedron based upon a plurality of points within the subject data set corresponding to predefined points on the subject's face;
determine a sphere within the polyhedron which touches each side of the polyhedron;
remap the polyhedron and the sphere therein such that the polyhedron, corresponding to predefined points on the subject's face, matches a polyhedron corresponding to the same points in the target data set; and
determine a sphericity of the remapped sphere.
17. The apparatus of claim 16, wherein the polyhedron is a tetrahedron.
18. The apparatus of claim 16, wherein the processor is further configured to determine if the sphericity of the remapped sphere is greater than a predetermined threshold.
19. The apparatus of claim 16, wherein the processor is further configured to choose a target data set to compare with the subject data set based upon an attribute of the subject data set.
20. The apparatus of claim 16, wherein the processor is further configured to compare the subject data set with a plurality of target data sets.
21. The facial recognition system of claim 1, wherein the processor, when creating the subject data set, is further configured to:
extract pixel coordinates from each of the photographs corresponding to the plurality of predefined points on the subject's face;
determine camera parameters for the camera which acquired each photograph; and
generate the subject data set by triangulating each predefined point based upon a closest intersection point of camera rays passing through the extracted pixel coordinates.
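Editor's illustration: one standard way to realize the ray-intersection step of claim 21 is the least-squares point nearest the bundle of camera rays. The sketch assumes each extracted pixel coordinate has already been converted, using its camera's parameters, into a ray origin and direction in world coordinates; triangulate_point is a hypothetical name.

```python
import numpy as np

def triangulate_point(origins, directions):
    """Least-squares 3-D point nearest the rays o_i + s * d_i, with
    origins and directions given as Nx3 arrays.  The normal matrix is
    singular if all rays are parallel, so at least two distinct viewing
    directions are required."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```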
22. The facial recognition system of claim 21, wherein the processor, when creating the subject data set, is further configured to refine the subject data set by iteratively:
simultaneously determining camera parameters and three-dimensional point locations;
determining three-dimensional point locations that minimize aggregate image plane errors for all camera views; and
determining camera parameters that minimize an aggregate distance between input image points and derived three-dimensional face point data.
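Editor's illustration: the simultaneous refinement of claim 22 is, in effect, a small bundle adjustment. The sketch below uses SciPy with a deliberately minimal pinhole model (a rotation vector, a translation, and a single focal length per camera); the model and the parameter packing are the editor's assumptions, not the specification's.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, obs, cam_idx, pt_idx):
    """Image-plane residuals for a joint camera/landmark refinement.

    params packs, per camera, [rotation vector (3), translation (3),
    focal length (1)], followed by the flattened 3-D landmark positions.
    obs is an Mx2 array of observed pixel coordinates; cam_idx and pt_idx
    give, per observation, which camera saw which landmark."""
    cams = params[:n_cams * 7].reshape(n_cams, 7)
    pts = params[n_cams * 7:].reshape(n_pts, 3)
    R = Rotation.from_rotvec(cams[cam_idx, :3])          # one rotation per observation
    Xc = R.apply(pts[pt_idx]) + cams[cam_idx, 3:6]       # camera-frame coordinates
    proj = cams[cam_idx, 6:7] * Xc[:, :2] / Xc[:, 2:3]   # pinhole projection
    return (proj - obs).ravel()

# A triangulated solution (claim 21) can seed the initial guess x0:
# fit = least_squares(reprojection_residuals, x0,
#                     args=(n_cams, n_pts, obs, cam_idx, pt_idx))
```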
23. The facial recognition system of claim 7, wherein the processor is further configured to:
assign weight factors to the plurality of unique polyhedra based upon an expression on the subject's face; and
modify the sphericity of each of the remapped spheres based upon the weight factor assigned to the related polyhedron.
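Editor's illustration: the weighting of claim 23 amounts to an expression-aware weighted aggregate of the per-tetrahedron scores from the claim 7 sketch; how the weights are derived from the detected expression is left to the caller here.

```python
import numpy as np

def weighted_sphericity(scores, weights):
    """Weighted mean of per-tetrahedron sphericities; tetrahedra built
    from expression-sensitive landmarks (for example, mouth corners)
    can be down-weighted by assigning them smaller weights."""
    s, w = np.asarray(scores, float), np.asarray(weights, float)
    return float((s * w).sum() / w.sum())
```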
24. The facial recognition system of claim 2, wherein the first object is an n-hedron having n dimensions, where n is greater than or equal to two.
25. The facial recognition system of claim 24, wherein, when only a single photograph of the subject is available, the first object is a 2-hedron and the processor is further configured to:
orient the target data set to match the orientation of the face of the subject in the single photograph; and
render the target data set in two dimensions.
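Editor's illustration: in the single-photograph case of claim 25 the construction collapses to two dimensions. Once the target data set has been oriented and rendered into the image plane, the inscribed circle of a landmark triangle maps to an ellipse under the 2-D affine map, and the ratio of the ellipse's axes (the singular values of the linear part) is one natural roundness score; the function name and conventions are the editor's choices.

```python
import numpy as np

def circularity_2d(src_tri, dst_tri):
    """2-D analogue of the sphericity test: map the subject triangle
    (a 3x2 array) onto the target triangle and report how round the
    inscribed circle stays (1.0 exactly when it maps to a circle)."""
    src_h = np.hstack([src_tri, np.ones((3, 1))])
    sol, *_ = np.linalg.lstsq(src_h, dst_tri, rcond=None)
    s = np.linalg.svd(sol[:2].T, compute_uv=False)  # ellipse semi-axes, descending
    return s[1] / s[0]
```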
US12/967,641 2010-12-14 2010-12-14 Facial recognition using a sphericity metric Active 2032-10-01 US8711210B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/967,641 US8711210B2 (en) 2010-12-14 2010-12-14 Facial recognition using a sphericity metric
PCT/US2011/053093 WO2012082210A1 (en) 2010-12-14 2011-09-23 Facial recognition using a sphericity metric

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/967,641 US8711210B2 (en) 2010-12-14 2010-12-14 Facial recognition using a sphericity metric

Publications (2)

Publication Number Publication Date
US20120147167A1 true US20120147167A1 (en) 2012-06-14
US8711210B2 US8711210B2 (en) 2014-04-29

Family

ID=44721116

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/967,641 Active 2032-10-01 US8711210B2 (en) 2010-12-14 2010-12-14 Facial recognition using a sphericity metric

Country Status (2)

Country Link
US (1) US8711210B2 (en)
WO (1) WO2012082210A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5886702A (en) * 1996-10-16 1999-03-23 Real-Time Geometry Corporation System and method for computer modeling of 3D objects or surfaces by mesh constructions having optimal quality characteristics and dynamic resolution capabilities
US20070046662A1 (en) * 2005-08-23 2007-03-01 Konica Minolta Holdings, Inc. Authentication apparatus and authentication method

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6081606A (en) 1996-06-17 2000-06-27 Sarnoff Corporation Apparatus and a method for detecting motion within an image sequence
AU1613599A (en) 1997-12-01 1999-06-16 Arsev H. Eraslan Three-dimensional face identification system
US6959119B2 (en) 2000-11-03 2005-10-25 Unilever Home & Personal Care Usa Method of evaluating cosmetic products on a consumer with future predictive transformation
US8346483B2 (en) 2002-09-13 2013-01-01 Life Technologies Corporation Interactive and automated tissue image analysis with global training database and variable-abstraction processing in cytological specimen classification and laser capture microdissection applications
US20040012638A1 (en) 2002-05-24 2004-01-22 Donnelli Richard K. System and method of electronic commitment tracking
JP2006522411A (en) 2003-03-06 2006-09-28 アニメトリックス,インク. Generating an image database of objects containing multiple features
US7337154B2 (en) 2003-05-19 2008-02-26 Raytheon Company Method for solving the binary minimization problem and a variant thereof
US7162063B1 (en) 2003-07-29 2007-01-09 Western Research Company, Inc. Digital skin lesion imaging system and method
WO2005106773A2 (en) 2004-04-15 2005-11-10 Edda Technology, Inc. Spatial-temporal lesion detection, segmentation, and diagnostic information extraction system and method
US7426318B2 (en) 2004-06-30 2008-09-16 Accuray, Inc. Motion field generation for non-rigid image registration
JP4605458B2 (en) 2005-04-12 2011-01-05 富士フイルム株式会社 Image processing apparatus and image processing program
US7454046B2 (en) 2005-09-20 2008-11-18 Brightex Bio-Photonics, Llc Method and system for analyzing skin conditions using digital images
US20070080967A1 (en) 2005-10-11 2007-04-12 Animetrics Inc. Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
US8194952B2 (en) 2008-06-04 2012-06-05 Raytheon Company Image processing system and methods for aligning skin features for early skin cancer detection systems
US20090327890A1 (en) 2008-06-26 2009-12-31 Raytheon Company Graphical user interface (gui), display module and methods for displaying and comparing skin features

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10869611B2 (en) 2006-05-19 2020-12-22 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US9867549B2 (en) 2006-05-19 2018-01-16 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US20100229085A1 (en) * 2007-01-23 2010-09-09 Gary Lee Nelson System and method for yearbook creation
US8839094B2 (en) 2007-01-23 2014-09-16 Jostens, Inc. System and method for yearbook creation
US20080189609A1 (en) * 2007-01-23 2008-08-07 Timothy Mark Larson Method and system for creating customized output
US10663553B2 (en) 2011-08-26 2020-05-26 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US20130070973A1 (en) * 2011-09-15 2013-03-21 Hiroo SAITO Face recognizing apparatus and face recognizing method
US9098760B2 (en) * 2011-09-15 2015-08-04 Kabushiki Kaisha Toshiba Face recognizing apparatus and face recognizing method
US20130148903A1 * 2011-12-08 2013-06-13 Yahoo! Inc. Image object retrieval
US9870517B2 (en) * 2011-12-08 2018-01-16 Excalibur Ip, Llc Image object retrieval
US10339654B2 (en) 2013-01-24 2019-07-02 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10653381B2 (en) 2013-02-01 2020-05-19 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
EP2950714A4 (en) * 2013-02-01 2017-08-16 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
US9104907B2 (en) 2013-07-17 2015-08-11 Emotient, Inc. Head-pose invariant recognition of facial expressions
WO2015009624A1 (en) * 2013-07-17 2015-01-22 Emotient, Inc. Head-pose invariant recognition of facial expressions
US10438349B2 (en) 2014-07-23 2019-10-08 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US11100636B2 (en) 2014-07-23 2021-08-24 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10445391B2 (en) 2015-03-27 2019-10-15 Jostens, Inc. Yearbook publishing system
US10660541B2 (en) 2015-07-28 2020-05-26 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US10716515B2 (en) 2015-11-23 2020-07-21 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan

Also Published As

Publication number Publication date
WO2012082210A1 (en) 2012-06-21
US8711210B2 (en) 2014-04-29

Similar Documents

Publication Publication Date Title
US8711210B2 (en) Facial recognition using a sphericity metric
US11830141B2 (en) Systems and methods for 3D facial modeling
CN108549873B (en) Three-dimensional face recognition method and three-dimensional face recognition system
US9959455B2 (en) System and method for face recognition using three dimensions
JP6681729B2 (en) Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object
US7894636B2 (en) Apparatus and method for performing facial recognition from arbitrary viewing angles by texturing a 3D model
Papazov et al. Real-time 3D head pose and facial landmark estimation from depth images using triangular surface patch features
EP3273412B1 (en) Three-dimensional modelling method and device
Nishino et al. Corneal imaging system: Environment from eyes
JP5111210B2 (en) Image processing apparatus and image processing method
JP4284664B2 (en) Three-dimensional shape estimation system and image generation system
WO2017186016A1 (en) Method and device for image warping processing and computer storage medium
JP4780198B2 (en) Authentication system and authentication method
ES2926681T3 (en) Three-dimensional model generation system, three-dimensional model generation method and program
US20050084140A1 (en) Multi-modal face recognition
US9626552B2 (en) Calculating facial image similarity
KR20190098858A (en) Method and apparatus for pose-invariant face recognition based on deep learning
JP5833507B2 (en) Image processing device
CN110532979A (en) A kind of 3-D image face identification method and system
US8941651B2 (en) Object alignment from a 2-dimensional image
CN112801038B (en) Multi-view face in-vivo detection method and system
CN109117726A (en) A kind of identification authentication method, device, system and storage medium
US20200160037A1 (en) Method and apparatus for pattern recognition
Li et al. Evaluating effects of focal length and viewing angle in a comparison of recent face landmark and alignment methods
Çağla, Adding Virtual Objects to Realtime Images: A Case Study in Augmented Reality

Legal Events

Date Code Title Description
AS Assignment

Owner name: RAYTHEON COMPANY, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MANSON, STEVEN J.;TRUMBULL, TARA L.;REEL/FRAME:025494/0692

Effective date: 20101210

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8