WO2004055715A1 - Expression invariant face recognition - Google Patents
- Publication number
- WO2004055715A1 (PCT/IB2003/005872)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- expressive
- feature
- image
- pixels
- captured image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
Definitions
- The invention relates in general to face recognition, and in particular to improved face recognition technology that can recognize an image of a person even if the person's expression differs between the captured image and the stored image.
- Face recognition systems are used for the identification and verification of individuals for many different applications such as gaining entry to secure facilities, recognizing people to personalize services such as in a home network environment, and locating wanted individuals in public facilities.
- The ultimate goal in the design of any face recognition system is to achieve the best possible classification (predictive) performance.
- The process of face recognition typically requires capturing an image, or multiple images, of a person, processing the image(s), and then comparing the processed image with stored images. If there is a positive match between a stored image and the captured image, the identity of the individual can be found or verified. From here on, the term "match" does not necessarily mean an exact match, but rather a probability that the person shown in a stored image is the same as the person or object in the captured image.
- U.S. Patent No. 6,292,575 describes such a system and is hereby incorporated by reference.
- The stored images are typically stored in the form of face models by passing the image through some sort of classifier, one of which is described in U.S. Patent Application No. 09/794,443, hereby incorporated by reference, in which several images are passed through a neural network and facial objects (e.g. eyes, nose, mouth) are classified. A face model image is then built and stored for subsequent comparison to a face model of a captured image.
- A problem with these systems is that the expression on the person's face may be different in the captured image than in the stored image.
- For example, a person may be smiling in the stored image but not in the captured image, or may be wearing glasses in the stored image and contacts in the captured image. This leads to inaccuracies in matching the captured image with the stored image and may result in misidentification of an individual. Accordingly, it is an object of this invention to provide an identification and/or verification system with improved accuracy when the expressive features on the face in the captured image differ from those in the stored image.
- The system in accordance with a preferred embodiment of the invention captures one or more images of a person. It then locates the expressive facial features of the captured image and compares them to the expressive facial features of the stored images. If there is no match, the coordinates of the non-matching expressive facial feature in the captured image are marked and/or stored. The pixels within these coordinates are then removed from the overall comparison between the captured image and the stored image. Removing these pixels from the subsequent comparison of the entire image reduces false negatives that result from a difference in facial expression between the captured image and a matching stored image.
- Fig. 1 shows images of a person with different facial expressions.
- Fig. 2a shows a facial feature locator.
- Fig. 2b shows a facial image with locations of expressive facial features.
- Fig. 3 shows a preferred embodiment of the invention.
- Fig. 4 is a flow chart of a preferred embodiment of the invention.
- Fig. 5 shows a diagrammatic representation of the comparison of an expressive feature.
- Fig. 6 shows an in-home networking facial identification system in accordance with the invention.
- Fig. 1 shows an exemplary sequence of six images of a person with changing facial expressions.
- Image (a) is the stored image. The face has very little facial expression and it is centered in the picture.
- Images (b)-(f) are captured images. These images have varying facial expressions and some are not centered in the picture. If images (b)-(f) are compared to the stored image (a), a positive identification may not be found due to the differing facial expressions.
- Fig. 2a shows an image capture device and facial feature locator.
- A video grabber 20 captures the image(s).
- The video grabber 20 can include any optical sensing device for converting images (visible light or infrared) to electrical signals. Such devices include a video camera, a monochrome camera, a color camera, or cameras that are sensitive to non-visible portions of the spectrum, such as infrared devices.
- The video grabber may also be realized as a variety of different types of video cameras or any suitable mechanism for capturing an image.
- The video grabber may also be an interface to a storage device that stores a variety of images.
- The output of the video grabber can, for example, be in the form of RGB, YUV, HSI or gray scale.
- The imagery acquired via the video grabber 20 usually contains more than just a face.
- The first and foremost step is therefore to perform face detection.
- Face detection can be performed in various ways, e.g. holistic-based, where the whole face is detected at one time, or feature-based, where individual facial features are detected. Since the present invention is concerned with locating expressive parts of the face, the feature-based approach is used to detect the interocular distance between the eyes.
- An example of the feature-based face detection approach is described in "Detection and Tracking of Faces and Facial Features" by Antonio Colmenarez, Brendan Frey and Thomas Huang, International Conference on Image Processing, Kobe, Japan, 1999, hereby incorporated by reference.
- The face detector/normalizer 21 normalizes the facial image to a preset N x N pixel array size (in a preferred embodiment, 64 x 72 pixels) so that the face within the image is approximately the same size as in the other stored images. This is achieved by comparing the interocular distance of the detected face with the interocular distances of the stored faces. The detected face is then made larger or smaller depending on what the comparison reveals.
- The detector/normalizer 21 employs conventional processes, known to one skilled in the art, to characterize each detected facial image as a two-dimensional image having an N by N array of intensity values.
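The normalization step described above can be sketched as follows. This is an illustrative numpy-only sketch, not the patent's implementation: the function name, the nearest-neighbour resampling, and the top-left crop/pad policy are assumptions; only the scale-by-interocular-distance idea and the 64 x 72 target come from the description.

```python
import numpy as np

def normalize_face(face: np.ndarray, detected_iod: float,
                   reference_iod: float, out_shape=(72, 64)) -> np.ndarray:
    """Rescale a detected face so its interocular distance matches the
    reference, then fit it to the preset pixel array size (64 x 72 here)."""
    scale = reference_iod / detected_iod
    h, w = face.shape
    new_h = max(1, int(round(h * scale)))
    new_w = max(1, int(round(w * scale)))
    # Nearest-neighbour resampling (a real system would interpolate).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = face[rows][:, cols]
    # Crop or zero-pad to the target shape (top-left anchored for brevity).
    out = np.zeros(out_shape, dtype=face.dtype)
    ch, cw = min(out_shape[0], new_h), min(out_shape[1], new_w)
    out[:ch, :cw] = resized[:ch, :cw]
    return out

img = np.random.randint(0, 256, (100, 90), dtype=np.uint8)
norm = normalize_face(img, detected_iod=40.0, reference_iod=28.0)
print(norm.shape)  # (72, 64)
```

A detected interocular distance larger than the reference (40 vs 28) shrinks the face, as the description's "made larger or smaller" comparison requires.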
- The captured, normalized images are then sent to a face model creator 22.
- The face model creator 22 takes the detected, normalized faces and creates a face model to identify the individual faces.
- Face models are created using Radial Basis Function (RBF) networks. Each face model is the same size as the detected facial image.
- A radial basis function network is a type of classifier device and is described in commonly owned co-pending U.S. Patent Application No. 09/794,443, entitled "Classification of Objects through Model Ensembles," filed February 27, 2001, the whole contents and disclosure of which are hereby incorporated by reference as if fully set forth herein. Almost any classifier can be used to create the face models, such as Bayesian networks, the maximum likelihood (ML) distance metric, or the radial basis function network.
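To make the classifier idea concrete, here is a minimal Gaussian RBF scoring sketch, not the network of the incorporated application: the prototype-per-identity layout, the sigma value, and all variable names are assumptions for illustration.

```python
import numpy as np

def rbf_scores(x: np.ndarray, prototypes: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Score a face vector x against per-identity prototype vectors using
    Gaussian radial basis units; a higher score means a closer match."""
    d2 = ((prototypes - x) ** 2).sum(axis=1)       # squared distance to each prototype
    return np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian activation

# Hypothetical enrolled identities: each row is a flattened 64 x 72 face model.
rng = np.random.default_rng(0)
protos = rng.random((3, 64 * 72))
probe = protos[1] + 0.01 * rng.random(64 * 72)     # noisy capture of identity 1
best = int(np.argmax(rbf_scores(probe, protos, sigma=5.0)))
print(best)  # 1
```

The argmax over unit activations plays the role of the classification decision; a real system would threshold the winning score before declaring a match.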
- The facial feature locator 23 locates facial features such as the beginning and end of each eyebrow, the beginning and end of each eye, the nose tip, the beginning and end of the mouth, and additional features as shown in Fig. 2b.
- The facial features are located either by selecting the features by hand or by using the ML distance metric as described in the paper "Detection and Tracking of Faces and Facial Features" by Antonio Colmenarez and Thomas Huang. Other methods of feature detection include optical flow methods. Depending on the system, it may not be necessary to locate all facial features, but only the expressive facial features, which are likely to change as the expression on a person's face changes.
- The facial feature locator stores the locations of the facial features in the captured image.
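As a stand-in for the ML distance metric of the cited paper, a simple template-matching locator illustrates what "locating a feature" returns: a coordinate in the image. The sum-of-squared-differences search below is a generic technique, not the paper's method; the blob image and template are made up.

```python
import numpy as np

def locate_feature(image: np.ndarray, template: np.ndarray) -> tuple:
    """Find the (row, col) placement of a feature template that minimises
    the sum of squared differences over all positions in the image."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(float)
            ssd = ((patch - template) ** 2).sum()
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

img = np.zeros((20, 20))
img[5:8, 9:13] = 1.0          # a bright "eye" blob at rows 5-7, cols 9-12
tmpl = np.ones((3, 4))        # template the same size as the blob
print(locate_feature(img, tmpl))  # (5, 9)
```

The returned coordinate, together with the template size, gives the feature rectangle that the locator stores for the later feature-difference comparison.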
- FIG. 3 shows a block diagram of a facial identification/verification system in accordance with a preferred embodiment of the invention.
- The system shown in Fig. 3 includes first and second stages.
- The first stage, shown in Fig. 2a, is the capture device/facial feature locator.
- This stage includes the video grabber 20, which captures an image of a person; the face detector/normalizer 21, which normalizes the image; the face model creator 22; and the facial feature locator 23.
- The second stage is a comparison stage for comparing the captured image to the stored images.
- This stage includes a feature difference detector 24, a storage device 25 for storing coordinates of non-matching features, and a final comparison stage 26 for comparing the entire image, minus the non-matching expressive features, with the stored images.
- The feature difference detector 24 compares the expressive features of the captured image with like facial features of the stored face models. Once the facial feature locator has located the coordinates of each feature, the feature difference detector 24 determines how different each facial feature of the captured image is from the like facial features of the stored images. This is performed by comparing the pixels of the expressive features in the captured image with the pixels of the like expressive features of the stored images.
- The pixels are in the RGB format.
- One skilled in the art could apply this same type of comparison to other pixel formats as well (e.g. YUV).
- One should note that only non-matching features are removed from the overall comparison performed by comparator 26. If a particular feature matches a like feature in the stored image, it is not considered an expressive feature and remains in the comparison. A match can mean a match within a certain tolerance limit. For example, the left eye of the captured image is compared with all of the left eyes of the stored images (Fig. 5). The comparison is performed by comparing the intensity values of the pixels of the eye within the N x N captured image with the intensity values of the pixels of the eyes of the N x N stored images.
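The tolerance-limited feature comparison can be sketched like so. The mean-absolute-difference criterion, the tolerance value, and the rectangle layout are illustrative assumptions; the patent only requires that like-feature pixels be compared within some tolerance.

```python
import numpy as np

def feature_matches(captured: np.ndarray, stored: np.ndarray,
                    coords: tuple, tol: float = 12.0) -> bool:
    """Compare the pixels of one facial feature in the captured image with
    the like feature in a stored image; coords = (top, left, bottom, right).
    A mean absolute intensity difference within tol counts as a match."""
    t, l, b, r = coords
    diff = np.abs(captured[t:b, l:r].astype(float) -
                  stored[t:b, l:r].astype(float))
    return bool(diff.mean() <= tol)

cap = np.full((72, 64), 100, dtype=np.uint8)
sto = cap.copy()
sto[10:20, 8:24] += 50        # stored left eye differs (e.g. open vs closed)
left_eye = (10, 8, 20, 24)    # hypothetical left-eye rectangle
print(feature_matches(cap, sto, left_eye))  # False -> mark as expressive
print(feature_matches(cap, cap, left_eye))  # True  -> keep in the comparison
```

A False result is what triggers storing the feature's coordinates at 25 so its pixels can be excluded later; a True result leaves the feature in the final comparison.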
- The coordinates of the expressive features of the captured image are stored at 25.
- The fact that an expressive facial feature of a captured image does not match the corresponding expressive facial features of the stored images could mean that the captured image does not match any stored image, or it could simply mean that the eye in the captured image is closed whereas the eye in a matching stored image is open. Accordingly, these expressive features do not need to be used in the overall image comparison.
- Fig. 4 shows a flow chart in accordance with a preferred embodiment of the invention. This flow chart explains the overall comparison that is performed between the captured image and the stored images.
- A face model is created from the captured image and the locations of the expressive features are found.
- The expressive features are, for example, the eyes, eyebrows, nose and mouth. All or some of these expressive features can be identified.
- The coordinates of the expressive features are then identified. As shown at 90 and at S110, the coordinates of the left eye of the captured image are found. These coordinates are denoted herein as CLE1-4. Similar coordinates are found for the right eye (CRE1-4) and the mouth (CM1-4).
- A facial feature of the captured image is selected for comparison to the stored images. Assume the left eye is chosen.
- The pixels within the coordinates of the left eye (CLE1-4) are then compared at S130 with the corresponding pixels within the coordinates of the left eyes of the stored images (SnLE1-4). (See Fig. 5.)
- If there is no match, the coordinates CLE1-4 of the left eye of the captured image are stored (S140) and a next expressive facial feature is selected at S120. If the pixels within the left eye coordinates of the captured image match (S130) the pixels within the left eye coordinates of one of the stored images, then the coordinates are not stored as "expressive" feature coordinates and another expressive facial feature is chosen at S120. It should be noted that the term match could mean a high probability of a match, a close match or an exact match.
- The N x N pixel array of the captured image (CNxN) is compared to the N x N arrays of the stored images (S1NxN ... SnNxN). This comparison, however, is performed after excluding the pixels falling within any of the stored coordinates of the captured image (S150). If, for example, the person in the captured image is winking his left eye and in the stored image he is not winking, then the comparison will probably be as follows:
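The S150 exclusion step, comparing the whole arrays minus the stored expressive-feature pixels, can be sketched with a boolean mask. The mean-absolute-difference score and the rectangle values are illustrative assumptions; the exclusion itself is the step the flow chart describes.

```python
import numpy as np

def masked_similarity(captured: np.ndarray, stored: np.ndarray,
                      expressive_coords: list) -> float:
    """Whole-image N x N comparison with the pixels inside every stored
    expressive-feature rectangle excluded (the S150 exclusion step).
    Returns the mean absolute intensity difference over the kept pixels."""
    mask = np.ones(captured.shape, dtype=bool)
    for (t, l, b, r) in expressive_coords:
        mask[t:b, l:r] = False   # drop the non-matching feature's pixels
    diff = np.abs(captured[mask].astype(float) - stored[mask].astype(float))
    return float(diff.mean())

cap = np.full((72, 64), 100, dtype=np.uint8)
sto = cap.copy()
sto[10:20, 8:24] = 200                  # a winking left eye, say
print(masked_similarity(cap, sto, []))                 # ~3.47: the wink penalizes the match
print(masked_similarity(cap, sto, [(10, 8, 20, 24)]))  # 0.0: wink pixels excluded
```

With the eye rectangle excluded, the otherwise identical faces score a perfect difference of zero, which is exactly the false-negative reduction the invention aims at.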
- The face detection system of the present invention has particular utility in the area of security systems and in-home networking systems, where the user must be identified in order to set home preferences.
- In an in-home networking system, the images of the various people in the house are stored. As a user walks into the room, an image is captured and immediately compared to the stored images to determine the identity of the individual in the room. Since the person will be going about normal daily activities, it can be easily understood how a person's facial expression on entering a particular environment may differ from his/her facial features in the stored images. Similarly, in a security application such as an airport, the image of a person as he/she is checking in may be different from his/her image in the stored database.
- Fig. 6 shows an in-home networking system in accordance with the invention.
- The imaging device is a digital camera 60 located in a room such as the living room. As a person 61 sits in the sofa/chair, the digital camera captures an image. The image is then compared, using the present invention, with the images stored in the database on the personal computer 62. Once an identification is made, the channel on the television 63 is changed to his/her favorite channel and the computer 62 is set to his/her default web page. While there has been shown and described what are considered to be preferred embodiments of the invention, it will, of course, be understood that various modifications and changes in form or detail could readily be made without departing from the spirit of the invention. It is therefore intended that the invention not be limited to the exact forms described and illustrated, but should be construed to cover all modifications that may fall within the scope of the appended claims.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/538,093 US20060110014A1 (en) | 2002-12-13 | 2003-12-10 | Expression invariant face recognition |
JP2004560074A JP2006510109A (en) | 2002-12-13 | 2003-12-10 | Facial expression invariant face recognition method and apparatus |
AU2003302974A AU2003302974A1 (en) | 2002-12-13 | 2003-12-10 | Expression invariant face recognition |
EP03813252A EP1573658A1 (en) | 2002-12-13 | 2003-12-10 | Expression invariant face recognition |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US43337402P | 2002-12-13 | 2002-12-13 | |
US60/433,374 | 2002-12-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004055715A1 true WO2004055715A1 (en) | 2004-07-01 |
Family
ID=32595170
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2003/005872 WO2004055715A1 (en) | 2002-12-13 | 2003-12-10 | Expression invariant face recognition |
Country Status (7)
Country | Link |
---|---|
US (1) | US20060110014A1 (en) |
EP (1) | EP1573658A1 (en) |
JP (1) | JP2006510109A (en) |
KR (1) | KR20050085583A (en) |
CN (1) | CN1723467A (en) |
AU (1) | AU2003302974A1 (en) |
WO (1) | WO2004055715A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SG123618A1 (en) * | 2004-12-15 | 2006-07-26 | Chee Khin George Loo | A method and system for verifying the identity of a user |
WO2008020038A1 (en) * | 2006-08-16 | 2008-02-21 | Guardia A/S | A method of identifying a person on the basis of a deformable 3d model |
Families Citing this family (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7283649B1 (en) * | 2003-02-27 | 2007-10-16 | Viisage Technology, Inc. | System and method for image recognition using stream data |
US7272246B2 (en) * | 2003-05-22 | 2007-09-18 | Motorola, Inc. | Personal identification method and apparatus |
US8553949B2 (en) | 2004-01-22 | 2013-10-08 | DigitalOptics Corporation Europe Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US8503800B2 (en) | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US20090235364A1 (en) * | 2005-07-01 | 2009-09-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for promotional content alteration |
KR101100429B1 (en) * | 2005-11-01 | 2011-12-30 | 삼성전자주식회사 | Semi automatic enrollment method and apparatus of photo album, and photo album system using them |
US7804983B2 (en) * | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
US7995741B1 (en) * | 2006-03-24 | 2011-08-09 | Avaya Inc. | Appearance change prompting during video calls to agents |
JP4197019B2 (en) * | 2006-08-02 | 2008-12-17 | ソニー株式会社 | Imaging apparatus and facial expression evaluation apparatus |
US8750578B2 (en) | 2008-01-29 | 2014-06-10 | DigitalOptics Corporation Europe Limited | Detecting facial expressions in digital images |
WO2009116049A2 (en) | 2008-03-20 | 2009-09-24 | Vizi Labs | Relationship mapping employing multi-dimensional context including facial recognition |
US9143573B2 (en) | 2008-03-20 | 2015-09-22 | Facebook, Inc. | Tag suggestions for images on online social networks |
EP2279483B1 (en) | 2008-04-25 | 2019-06-05 | Aware, Inc. | Biometric identification and verification |
KR100947990B1 (en) * | 2008-05-15 | 2010-03-18 | 성균관대학교산학협력단 | Gaze Tracking Apparatus and Method using Difference Image Entropy |
WO2010063463A2 (en) | 2008-12-05 | 2010-06-10 | Fotonation Ireland Limited | Face recognition using face tracker classifier data |
JP5456159B2 (en) * | 2009-05-29 | 2014-03-26 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Method and apparatus for separating the top of the foreground from the background |
TWI447658B (en) | 2010-03-24 | 2014-08-01 | Ind Tech Res Inst | Facial expression capturing method and apparatus therewith |
US8971628B2 (en) | 2010-07-26 | 2015-03-03 | Fotonation Limited | Face detection using division-generated haar-like features for illumination invariance |
CN102385703B (en) * | 2010-08-27 | 2015-09-02 | 北京中星微电子有限公司 | A kind of identity identifying method based on face and system |
KR101649322B1 (en) * | 2011-02-03 | 2016-08-18 | 페이스북, 인크. | Systems and methods for image-to-text and text-to-image association |
JP5791364B2 (en) * | 2011-05-16 | 2015-10-07 | キヤノン株式会社 | Face recognition device, face recognition method, face recognition program, and recording medium recording the program |
TWI439967B (en) * | 2011-10-31 | 2014-06-01 | Hon Hai Prec Ind Co Ltd | Security monitor system and method thereof |
US9104907B2 (en) * | 2013-07-17 | 2015-08-11 | Emotient, Inc. | Head-pose invariant recognition of facial expressions |
US20150227780A1 (en) * | 2014-02-13 | 2015-08-13 | FacialNetwork, Inc. | Method and apparatus for determining identity and programing based on image features |
EP2919142B1 (en) * | 2014-03-14 | 2023-02-22 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for providing health status information |
CN104077579B (en) * | 2014-07-14 | 2017-07-04 | 上海工程技术大学 | Facial expression recognition method based on expert system |
US10915618B2 (en) | 2014-08-28 | 2021-02-09 | Facetec, Inc. | Method to add remotely collected biometric images / templates to a database record of personal information |
US10803160B2 (en) | 2014-08-28 | 2020-10-13 | Facetec, Inc. | Method to verify and identify blockchain with user question data |
US10698995B2 (en) | 2014-08-28 | 2020-06-30 | Facetec, Inc. | Method to verify identity using a previously collected biometric image/data |
US11256792B2 (en) | 2014-08-28 | 2022-02-22 | Facetec, Inc. | Method and apparatus for creation and use of digital identification |
US10614204B2 (en) | 2014-08-28 | 2020-04-07 | Facetec, Inc. | Facial recognition authentication system including path parameters |
CA3186147A1 (en) | 2014-08-28 | 2016-02-28 | Kevin Alan Tussy | Facial recognition authentication system including path parameters |
US10547610B1 (en) * | 2015-03-31 | 2020-01-28 | EMC IP Holding Company LLC | Age adapted biometric authentication |
US9977950B2 (en) | 2016-01-27 | 2018-05-22 | Intel Corporation | Decoy-based matching system for facial recognition |
USD987653S1 (en) | 2016-04-26 | 2023-05-30 | Facetec, Inc. | Display screen or portion thereof with graphical user interface |
US10958807B1 (en) * | 2018-02-08 | 2021-03-23 | Digimarc Corporation | Methods and arrangements for configuring retail scanning systems |
US10880451B2 (en) | 2018-06-08 | 2020-12-29 | Digimarc Corporation | Aggregating detectability metrics to determine signal robustness |
CN110751067B (en) * | 2019-10-08 | 2022-07-26 | 艾特城信息科技有限公司 | Dynamic expression recognition method combined with biological form neuron model |
CN112417198A (en) * | 2020-12-07 | 2021-02-26 | 武汉柏禾智科技有限公司 | Face image retrieval method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5410609A (en) * | 1991-08-09 | 1995-04-25 | Matsushita Electric Industrial Co., Ltd. | Apparatus for identification of individuals |
US5450504A (en) * | 1992-05-19 | 1995-09-12 | Calia; James | Method for finding a most likely matching of a target facial image in a data base of facial images |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4975969A (en) * | 1987-10-22 | 1990-12-04 | Peter Tal | Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same |
US5229764A (en) * | 1991-06-20 | 1993-07-20 | Matchett Noel D | Continuous biometric authentication matrix |
US6181805B1 (en) * | 1993-08-11 | 2001-01-30 | Nippon Telegraph & Telephone Corporation | Object image detecting method and system |
EP0758471B1 (en) * | 1994-03-15 | 1999-07-28 | Fraunhofer-Gesellschaft Zur Förderung Der Angewandten Forschung E.V. | Person identification based on movement information |
US5717469A (en) * | 1994-06-30 | 1998-02-10 | Agfa-Gevaert N.V. | Video frame grabber comprising analog video signals analysis system |
US5892838A (en) * | 1996-06-11 | 1999-04-06 | Minnesota Mining And Manufacturing Company | Biometric recognition using a classification neural network |
US6819783B2 (en) * | 1996-09-04 | 2004-11-16 | Centerframe, Llc | Obtaining person-specific images in a public venue |
JP2001506032A (en) * | 1997-09-16 | 2001-05-08 | インビィジテック コーポレイション | Personal identification system using complex parameters with low cross-correlation |
US6292575B1 (en) * | 1998-07-20 | 2001-09-18 | Lau Technologies | Real-time facial recognition and verification system |
US6947578B2 (en) * | 2000-11-02 | 2005-09-20 | Seung Yop Lee | Integrated identification data capture system |
US6778705B2 (en) * | 2001-02-27 | 2004-08-17 | Koninklijke Philips Electronics N.V. | Classification of objects through model ensembles |
US6879709B2 (en) * | 2002-01-17 | 2005-04-12 | International Business Machines Corporation | System and method for automatically detecting neutral expressionless faces in digital images |
AU2003228380A1 (en) * | 2002-03-27 | 2003-10-13 | Molex Incorporated | Differential signal connector assembly with improved retention capabilities |
- 2003-12-10 US US10/538,093 patent/US20060110014A1/en not_active Abandoned
- 2003-12-10 EP EP03813252A patent/EP1573658A1/en not_active Withdrawn
- 2003-12-10 WO PCT/IB2003/005872 patent/WO2004055715A1/en not_active Application Discontinuation
- 2003-12-10 JP JP2004560074A patent/JP2006510109A/en active Pending
- 2003-12-10 AU AU2003302974A patent/AU2003302974A1/en not_active Abandoned
- 2003-12-10 KR KR1020057010692A patent/KR20050085583A/en not_active Application Discontinuation
- 2003-12-10 CN CNA2003801055694A patent/CN1723467A/en active Pending
Non-Patent Citations (3)
Title |
---|
GRUDIN, M.A.: "On internal representations in face recognition systems", Pattern Recognition, Pergamon Press, Elmsford, NY, US, vol. 33, no. 7, pages 1161-1177, ISSN 0031-3203, XP004321195 * |
MARTINEZ, A.M.: "Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 6, June 2002, pages 748-763, XP002276538 * |
PANTIC, M. et al.: "Automatic analysis of facial expressions: The state of the art", IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE, Los Alamitos, CA, US, vol. 22, no. 12, December 2000, pages 1424-1445, XP002276537 * |
Also Published As
Publication number | Publication date |
---|---|
EP1573658A1 (en) | 2005-09-14 |
AU2003302974A1 (en) | 2004-07-09 |
KR20050085583A (en) | 2005-08-29 |
JP2006510109A (en) | 2006-03-23 |
US20060110014A1 (en) | 2006-05-25 |
CN1723467A (en) | 2006-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20060110014A1 (en) | Expression invariant face recognition | |
US11288504B2 (en) | Iris liveness detection for mobile devices | |
US7450740B2 (en) | Image classification and information retrieval over wireless digital networks and the internet | |
US8908933B2 (en) | Method and system for attaching a metatag to a digital image | |
US7680330B2 (en) | Methods and apparatus for object recognition using textons | |
JP4241763B2 (en) | Person recognition apparatus and method | |
US20100235400A1 (en) | Method And System For Attaching A Metatag To A Digital Image | |
US20090175512A1 (en) | Face collation apparatus | |
US20070116364A1 (en) | Apparatus and method for feature recognition | |
JP2004133889A (en) | Method and system for recognizing image object | |
JP2005149506A (en) | Method and apparatus for automatic object recognition/collation | |
US7848544B2 (en) | Robust face registration via multiple face prototypes synthesis | |
KR20190093799A (en) | Real-time missing person recognition system using cctv and method thereof | |
KR20210131891A (en) | Method for authentication or identification of an individual | |
US20070253598A1 (en) | Image monitoring apparatus | |
Narzillo et al. | Peculiarities of face detection and recognition | |
JP2002189724A (en) | Image data retrieval device | |
JPH07302327A (en) | Method and device for detecting image of object | |
US20060056667A1 (en) | Identifying faces from multiple images acquired from widely separated viewpoints | |
US20180157896A1 (en) | Method and system for increasing biometric acceptance rates and reducing false accept rates and false rates | |
JP2002208011A (en) | Image collation processing system and its method | |
Prabowo et al. | Application of" Face Recognition" Technology for Class Room Electronic Attendance Management System | |
JP3841482B2 (en) | Face image recognition device | |
Mou et al. | Automatic databases for unsupervised face recognition | |
Mishra et al. | Face Recognition in Real Time Using Opencv and Python |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2003813252 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2006110014 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10538093 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004560074 Country of ref document: JP Ref document number: 2003801055694 Country of ref document: CN Ref document number: 1020057010692 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057010692 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2003813252 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10538093 Country of ref document: US |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2003813252 Country of ref document: EP |