|Publication number||USRE36041 E|
|Publication type||Grant|
|Application number||US 08/340,615|
|Publication date||Jan 12, 1999|
|Filing date||Nov 16, 1994|
|Priority date||Nov 1, 1990|
|Fee status||Paid|
|Also published as||DE69130616D1, DE69130616T2, EP0555380A1, EP0555380A4, EP0555380B1, US5164992, WO1992008202A1|
|Publication number||08340615, 340615, US RE36041 E, US RE36041E, US-E-RE36041, USRE36041 E, USRE36041E|
|Inventors||Matthew Turk, Alex P. Pentland|
|Original Assignee||Massachusetts Institute Of Technology|
The invention relates to a system for identifying members of a viewing audience.
For a commercial television network, the cost of its advertising time depends critically on the popularity of its programs among the television viewing audience. Popularity, in this case, is typically measured in terms of the program's share of the total audience viewing television at the time the program airs. As a general rule of thumb, advertisers prefer to place their advertisements where they will reach the greatest number of people. Thus, there is higher demand among commercial advertisers for advertising time slots alongside more popular programs, and such time slots can command a higher price.
Because the economics of television advertising depends so critically on the tastes and preferences of the television audience, the television industry invests a substantial amount of time, effort and money in measuring those tastes and preferences. One preferred approach involves monitoring the actual viewing habits of a group of volunteer families that represents a cross-section of all people who watch television. Typically, the participants in such a study allow monitoring equipment to be placed in their homes. Whenever a participant watches a television program, the monitoring equipment records the time, the identity of the program and the identity of the members of the viewing audience. Many of these systems require active participation by the television viewer to obtain the monitoring information. That is, the viewer must in some way interact with the equipment to record his presence in the viewing audience. If the viewer forgets to record his presence, the monitoring statistics will be incomplete. In general, the less manual intervention required of the television viewer, the more likely it is that the gathered statistics on viewing habits will be complete and error free.
Systems have been developed which automatically identify members of the viewing audience without requiring the viewer to enter any information. For example, U.S. Pat. No. 4,858,000 to Daozheng Lu, issued Aug. 15, 1989, describes such a system. In that system, a scanner using infrared detectors locates a member of the viewing audience, captures an image of the located member, extracts a pattern signature for the captured image and then compares the extracted pattern signature to a set of stored pattern image signatures to identify the audience member.
In general, in one aspect, the invention is a recognition system for identifying members of an audience. The invention includes an imaging system which generates an image of the audience; a selector module for selecting a portion of the generated image; a detection means which analyzes the selected image portion to determine whether an image of a person is present; and a recognition module for determining whether a detected image of a person resembles one of a reference set of images of individuals.
Preferred embodiments include the following features. The recognition module also determines which one, if any, of the individuals in the reference set the detected image resembles. The selection means includes a motion detector for identifying the selected portion of the image by detecting motion, and it includes a locator module for locating the portion of the image corresponding to the face of the person detected. In the recognition system, the detection means and the recognition module employ first and second pattern recognition techniques, respectively, to determine whether an image of a person is present in the selected portion of the image, and both pattern recognition techniques employ a set of eigenvectors in a multi-dimensional image space to characterize the reference set. In addition, the second pattern recognition technique also represents each member of the reference set as a point in a subspace defined by the set of eigenvectors. Also, the image of a person is an image of a person's face and the reference set includes images of faces of the individuals.
Also in preferred embodiments, the recognition system includes means for representing the reference set as a set of eigenvectors in a multi-dimensional image space and the detection means includes means for representing the selected image portion as an input vector in the multi-dimensional image space and means for computing the distance between a point identified by the input vector and a subspace defined by the set of eigenvectors. The detection means also includes a thresholding means for determining whether an image of a person is present by comparing the computed distance to a preselected threshold. The recognition module includes means for representing each member of the reference set as a corresponding point in the subspace. To determine the location of each point in the subspace associated with a corresponding member of the reference set, a vector associated with that member is projected onto the subspace.
The recognition module also includes means for projecting the input vector onto the subspace, means for selecting a particular member of the reference set, and means for computing a distance within the subspace between a point identified by the projection of the input vector onto the subspace and the point in the subspace associated with the selected member.
In general, in another aspect, the invention is a method for identifying members of an audience. The invention includes the steps of generating an image of the audience; selecting a portion of the generated image; analyzing the selected image portion to determine whether an image of a person is present; and if an image of a person is determined to be present, determining whether the image of a person resembles one of a reference set of images of individuals.
One advantage of the invention is that it is fast, relatively simple and works well in a constrained environment, i.e., an environment for which the associated image remains relatively constant except for the coming and going of people. In addition, the invention determines whether a selected portion of an image actually contains an image of a face. If it is determined that the selected image portion contains an image of a face, the invention then determines which one of a reference set of known faces the detected face image most resembles. If the detected face image is not present among the reference set, the invention reports the presence of an unknown person in the audience. The invention thus has the ability to discriminate face images from images of other objects.
Other advantages and features will become apparent from the following description of the preferred embodiment and from the claims.
FIG. 1 is a block diagram of a face recognition system;
FIG. 2 is a flow diagram of an initialization procedure for the face recognition module;
FIG. 3 is a flow diagram of the operation of the face recognition module; and
FIG. 4 is a block diagram of a motion detection system for locating faces within a sequence of images.
Referring to FIG. 1, in an audience monitoring system 2, a video camera 4, which is trained on an area where members of a viewing audience generally sit to watch the TV, sends a sequence of video image frames to a motion detection module 6. Video camera 4, which may, for example, be installed in the home of a family that has volunteered to participate in a study of public viewing habits, generates images of the TV viewing audience. Motion detection module 6 processes the sequence of image frames to identify regions of the recorded scene that contain motion and thus may be evidence of the presence of a person watching TV. In general, motion detection module 6 accomplishes this by comparing successive frames of the image sequence so as to find those locations containing image data that changes over time. Since the image background (i.e., images of the furniture and other objects in the room) will usually remain unchanged from frame to frame, the areas of movement will generally be evidence of the presence of a person in the viewing audience.
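By way of illustration, a minimal sketch of this frame-differencing step in Python follows (the patent does not specify an implementation; the function name and threshold value here are illustrative):

```python
# A minimal sketch of motion detection by frame differencing, assuming
# 8-bit grayscale frames supplied as NumPy arrays. The threshold of 25
# is illustrative, not taken from the patent.
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Return a boolean mask of pixels whose intensity changed noticeably."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```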
When movement is identified, a head locator module 8 selects a block of the image frame containing the movement and sends it to a face recognition module 10 where it is analyzed for the presence of recognizable faces. Face recognition module 10 performs two functions. First, it determines whether the image data within the selected block resembles a face. Then, if it does resemble a face, module 10 determines whether the face is one of a reference set of faces. The reference set may include, for example, the images of faces of all members of the family in whose house the audience monitoring system has been installed.
To perform its recognition functions, face recognizer 10 employs a multi-dimensional representation in which face images are characterized by a set of eigenvectors or "eigenfaces". In general, according to this technique, each image is represented as a vector (or a point) in a very high dimensional image space in which each pixel of the image is represented by a corresponding dimension or axis. The dimension of this image space thus depends upon the size of the image being represented and can become very large for any reasonably sized image. For example, if the block of image data is $N$ pixels by $N$ pixels, then the multi-dimensional image space has dimension $N^2$. The image vector which represents the $N \times N$ block of image data in this multi-dimensional image space is constructed by simply concatenating the rows of the image data to generate a vector of length $N^2$.
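A sketch of this row-concatenation step (NumPy; the function name is illustrative):

```python
import numpy as np

def image_to_vector(block):
    """Concatenate the rows of an N-by-N image block into a length-N**2 vector.

    A 256-by-256 block, for example, becomes a vector of length 65,536.
    """
    return np.asarray(block, dtype=np.float64).reshape(-1)  # row-major flatten
```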
Face images, like all other possible images, are represented by points within this multi-dimensional image space. The distribution of faces, however, tends to be grouped within a region of the image space. Thus, the distribution of faces of the reference set can be characterized by using principal component analysis. The resulting principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images, define the variation among the set of face images. These eigenvectors are typically ordered, each one accounting for a different amount of variation among the face images. They can be thought of as a set of features which together characterize the variation between face images within the reference set. Each face image location within the multi-dimensional image space contributes more or less to each eigenvector, so that each eigenvector represents a sort of ghostly face which is referred to herein as an eigenface.
Each individual face from the reference set can be represented exactly in terms of a linear combination of $M$ non-zero eigenfaces. Each face can also be approximated using only the $M'$ "best" eigenfaces, i.e., those that have the largest eigenvalues and therefore account for the most variance within the set of face images. The best $M'$ eigenfaces span an $M'$-dimensional subspace (referred to hereinafter as "face space") of all possible images.
This approach to face recognition involves the initialization operations shown in FIG. 2 to "train" recognition module 10. First, a reference set of face images is obtained and each of the faces of that set is represented as a corresponding vector or point in the multi-dimensional image space (step 100). Then, using principal component analysis, the distribution of points for the reference set of faces is characterized in terms of a set of eigenvectors (or eigenfaces) (step 102). If a full characterization of the distribution of points is performed, it will yield $N^2$ eigenfaces, of which $M$ are non-zero. Of these, only the $M'$ eigenfaces corresponding to the highest eigenvalues are chosen, where $M' < M \ll N^2$. This subset of eigenfaces is used to define a subspace (or face space) within the multi-dimensional image space. Finally, each member of the reference set is represented by a corresponding point within face space (step 104). For a given face, this is accomplished by projecting its point in the higher-dimensional image space onto face space.
If additional faces are added to the reference set at a later time, these operations are repeated to update the set of eigenfaces characterizing the reference set.
After face recognition module 10 is initialized, it implements the steps shown in FIG. 3 to recognize face images supplied by head locator module 8. First, face recognition module 10 projects the input image (i.e., the image presumed to contain a face) onto face space by projecting it onto each of the $M'$ eigenfaces (step 200). Then, module 10 determines whether the input image is a face at all (whether known or unknown) by checking to see if the image is sufficiently close to "face space" (step 202). That is, module 10 computes how far the input image in the multi-dimensional image space is from the face space and compares this distance to a preselected threshold. If the computed distance is greater than the preselected threshold, module 10 indicates that the input image does not represent a face, and motion detection module 6 locates the next block of the overall image which may contain a face image.
If the computed distance is sufficiently close to face space (i.e., less than the preselected threshold), recognition module 10 treats the input image as a face image and proceeds with determining whose face it is (step 206). This involves computing distances between the projection of the input image onto face space and each of the reference face images in face space. If the projected input image is sufficiently close to any one of the reference faces (i.e., the computed distance in face space is less than a predetermined distance), recognition module 10 identifies the input image as belonging to the individual associated with that reference face. If the projected input image is not sufficiently close to any one of the reference faces, recognition module 10 reports that a person has been located but the identity of the person is unknown.
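A compact sketch of this two-stage decision, assuming the eigenfaces are stored as the rows of an orthonormal matrix and each reference individual is represented by a point in face space (all names and thresholds are illustrative):

```python
import numpy as np

def classify_face(input_vec, mean_face, eigenfaces, class_points,
                  theta_face, theta_class):
    """Return 'not a face', 'unknown', or the index of the matching individual.

    eigenfaces:   (M', N*N) array whose rows are orthonormal eigenfaces.
    class_points: (K, M') array, one face-space point per known individual.
    """
    phi = input_vec - mean_face            # mean-adjusted image
    omega = eigenfaces @ phi               # projection onto face space
    phi_f = eigenfaces.T @ omega           # reconstruction from face space
    if np.sum((phi - phi_f) ** 2) > theta_face:
        return "not a face"                # too far from face space
    dists = np.sum((class_points - omega) ** 2, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] <= theta_class else "unknown"
```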
The mathematics underlying each of these steps will now be described in greater detail.
Let a face image $I(x,y)$ be a two-dimensional $N$ by $N$ array of (8-bit) intensity values. The face image is represented in the multi-dimensional image space as a vector of dimension $N^2$. Thus, a typical image of size 256 by 256 becomes a vector of dimension 65,536, or, equivalently, a point in 65,536-dimensional image space. An ensemble of images, then, maps to a collection of points in this huge space.

Images of faces, being similar in overall configuration, are not randomly distributed in this huge image space and thus can be described by a relatively low-dimensional subspace. Using principal component analysis, one identifies the vectors which best account for the distribution of face images within the entire image space. These vectors, namely the "eigenfaces", define the "face space". Each vector is of length $N^2$, describes an $N$ by $N$ image, and is a linear combination of the original face images of the reference set.
Let the training set of face images be $\Gamma_1, \Gamma_2, \Gamma_3, \ldots, \Gamma_M$. The average face of the set is defined by

$$\Psi = \frac{1}{M} \sum_{n=1}^{M} \Gamma_n. \qquad (1)$$

Each face differs from the average by the vector $\Phi_i = \Gamma_i - \Psi$. This set of very large vectors is then subject to principal component analysis, which seeks a set of $M$ orthonormal vectors $u_n$ which best describes the distribution of the data. The $k$th vector, $u_k$, is chosen such that
$$\lambda_k = \frac{1}{M} \sum_{n=1}^{M} \left( u_k^T \Phi_n \right)^2 \qquad (2)$$

is a maximum, subject to the orthonormality constraint

$$u_l^T u_k = \delta_{lk} = \begin{cases} 1, & l = k \\ 0, & \text{otherwise}. \end{cases} \qquad (3)$$

The vectors $u_k$ and scalars $\lambda_k$ are the eigenvectors and eigenvalues, respectively, of the covariance matrix

$$C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^T = A A^T, \qquad (4)$$

where the matrix $A = [\Phi_1\ \Phi_2\ \ldots\ \Phi_M]$. The matrix $C$, however, is $N^2$ by $N^2$, and determining the $N^2$ eigenvectors and eigenvalues can become an intractable task for typical image sizes.
If the number of data points in the face space is less than the dimension of the overall image space (namely, if $M < N^2$), there will be only $M - 1$, rather than $N^2$, meaningful eigenvectors. (The remaining eigenvectors will have associated eigenvalues of zero.) One can solve for the $N^2$-dimensional eigenvectors in this case by first solving for the eigenvectors of an $M$ by $M$ matrix (e.g., solving a 16 by 16 matrix rather than a 16,384 by 16,384 matrix) and then taking appropriate linear combinations of the face images $\Phi_i$. Consider the eigenvectors $v_i$ of $A^T A$ such that

$$A^T A v_i = \mu_i v_i. \qquad (5)$$

Premultiplying both sides by $A$ yields

$$A A^T A v_i = \mu_i A v_i, \qquad (6)$$

from which it is apparent that $A v_i$ are the eigenvectors of $C = A A^T$.
Following this analysis, it is possible to construct the $M$ by $M$ matrix $L = A^T A$, where $L_{mn} = \Phi_m^T \Phi_n$, and find the $M$ eigenvectors $v_l$ of $L$. These vectors determine linear combinations of the $M$ training set face images to form the eigenfaces $u_l$:

$$u_l = \sum_{k=1}^{M} v_{lk} \Phi_k, \qquad l = 1, \ldots, M. \qquad (7)$$
With this analysis the calculations are greatly reduced, from the order of the number of pixels in the images ($N^2$) to the order of the number of images in the training set ($M$). In practice, the training set of face images will be relatively small ($M \ll N^2$), and the calculations become quite manageable. The associated eigenvalues provide a basis for ranking the eigenvectors according to their usefulness in characterizing the variation among the images.
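A sketch of this reduced computation (NumPy; the function name and array conventions are illustrative, with the training faces supplied as an (M, N*N) array of flattened images):

```python
import numpy as np

def train_eigenfaces(faces, m_prime):
    """Compute the average face and the m_prime best eigenfaces.

    faces: (M, N*N) array, one flattened training face per row.
    Returns (psi, U), where U is (m_prime, N*N) with unit-length rows.
    """
    psi = faces.mean(axis=0)                     # average face, Eq. 1
    A = (faces - psi).T                          # columns are the Phi_i
    L = A.T @ A                                  # small M-by-M matrix
    eigvals, V = np.linalg.eigh(L)               # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:m_prime]  # keep the m_prime largest
    U = A @ V[:, order]                          # eigenfaces via Eq. 7
    U /= np.linalg.norm(U, axis=0)               # normalize to unit length
    return psi, U.T
```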
In practice, a smaller $M'$ is sufficient for identification, since accurate reconstruction of the image is not a requirement. In this framework, identification becomes a pattern recognition task. The eigenfaces span an $M'$-dimensional subspace of the original $N^2$-dimensional image space. The $M'$ significant eigenvectors of the $L$ matrix are chosen as those with the largest associated eigenvalues. In test cases based upon $M = 16$ face images, $M' = 7$ eigenfaces were found to yield acceptable results, i.e., a level of accuracy sufficient for monitoring a TV audience for purposes of studying viewing habits and tastes.
A new face image $\Gamma$ is transformed into its eigenface components (i.e., projected into "face space") by the simple operation

$$\omega_k = u_k^T (\Gamma - \Psi) \qquad (8)$$

for $k = 1, \ldots, M'$. This describes a set of point-by-point image multiplications and summations, operations which may be performed at approximately frame rate on current image processing hardware.
The weights form a vector $\Omega^T = [\omega_1\ \omega_2\ \ldots\ \omega_{M'}]$ that describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The vector may then be used in a standard pattern recognition algorithm to find which of a number of pre-defined face classes, if any, best describes the face. The simplest method for determining which face class provides the best description of an input face image is to find the face class $k$ that minimizes the Euclidean distance

$$\epsilon_k = \| \Omega - \Omega_k \|^2, \qquad (9)$$

where $\Omega_k$ is a vector describing the $k$th face class. The face classes $\Omega_k$ are calculated by averaging the results of the eigenface representation over a small number of face images (as few as one) of each individual. A face is classified as belonging to class $k$ when the minimum $\epsilon_k$ is below some chosen threshold $\theta_\epsilon$. Otherwise the face is classified as "unknown", and optionally used to create a new face class.
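A short sketch of building such a face class from a few images of one individual (reusing the conventions of the earlier sketches; names are illustrative):

```python
import numpy as np

def face_class_vector(person_images, psi, eigenfaces):
    """Average the Eq. 8 projections of a few images of one individual."""
    return np.mean([eigenfaces @ (g - psi) for g in person_images], axis=0)
```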
Because creating the vector of weights is equivalent to projecting the original face image onto the low-dimensional face space, many images (most of them looking nothing like a face) will project onto a given pattern vector. This is not a problem for the system, however, since the distance $\epsilon$ between the image and face space can also be checked: the squared distance is simply that between the mean-adjusted input image $\Phi = \Gamma - \Psi$ and its projection onto face space, $\Phi_f = \sum_{k=1}^{M'} \omega_k u_k$:

$$\epsilon^2 = \| \Phi - \Phi_f \|^2. \qquad (10)$$
Thus, there are four possibilities for an input image and its pattern vector: (1) near face space and near a face class; (2) near face space but not near a known face class; (3) distant from face space and near a face class; and (4) distant from face space and not near a known face class.
In the first case, an individual is recognized and identified. In the second case, an unknown individual is present. The last two cases indicate that the image is not a face image. Case three typically shows up as a false positive in most other recognition systems. In the described embodiment, however, the false recognition may be detected because of the significant distance between the image and the subspace of expected face images.
To summarize, the eigenfaces approach to face recognition involves the following steps:
1. Collect a set of characteristic face images of the known individuals. This set may include a number of images for each person, with some variation in expression and in lighting. (Say four images of ten people, so $M = 40$.)
2. Calculate the (40×40) matrix $L$, find its eigenvectors and eigenvalues, and choose the $M'$ eigenvectors with the highest associated eigenvalues. (Let $M' = 10$ in this example.)
3. Combine the normalized training set of images according to Eq. 7 to produce the ($M' = 10$) eigenfaces $u_k$.
4. For each known individual, calculate the class vector $\Omega_k$ by averaging the eigenface pattern vectors $\Omega$ (from Eq. 8) calculated from the original (four) images of the individual. Choose a threshold $\theta_\epsilon$ which defines the maximum allowable distance from any face class, and a threshold $\theta_t$ which defines the maximum allowable distance from face space (according to Eq. 10).
5. For each new face image to be identified, calculate its pattern vector $\Omega$, the distances $\epsilon_k$ to each known class, and the distance $\epsilon$ to face space. If $\epsilon > \theta_t$, classify the input image as not a face. If the minimum $\epsilon_k \le \theta_\epsilon$ and $\epsilon \le \theta_t$, classify the input face as the individual associated with class vector $\Omega_k$. If the minimum $\epsilon_k > \theta_\epsilon$ and $\epsilon \le \theta_t$, the image may be classified as "unknown", and optionally used to begin a new face class.
6. If the new image is classified as a known individual, this image may be added to the original set of familiar face images, and the eigenfaces may be recalculated (steps 1-4). This gives the opportunity to modify the face space as the system encounters more instances of known faces.
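Tying the earlier sketches together on stand-in data (random arrays in place of real face images; the thresholds are arbitrary and for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((40, 128 * 128))    # stand-in for 4 images of 10 people

psi, U = train_eigenfaces(faces, m_prime=10)

# One class vector per individual, averaging that person's four projections.
classes = np.stack([face_class_vector(faces[4 * i:4 * i + 4], psi, U)
                    for i in range(10)])

result = classify_face(faces[0], psi, U, classes,
                       theta_face=1e4, theta_class=1e2)
```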
In the described embodiment, calculation of the eigenfaces is done offline as part of the training. The recognition currently takes about 400 msec running rather inefficiently in Lisp on a Sun 4, using face images of size 128×128. With some special-purpose hardware, the current version could run at close to frame rate (33 msec).
Designing a practical system for face recognition within this framework requires assessing the tradeoffs between generality, required accuracy, and speed. If the face recognition task is restricted to a small set of people (such as the members of a family or a small company), a small set of eigenfaces is adequate to span the faces of interest. If the system is to learn new faces or represent many people, a larger basis set of eigenfaces will likely be required.
In the described embodiment, motion detection module 6 and head locator module 8 locate and track the position of the head of any person within the scene viewed by video camera 4 by implementing the tracking algorithm depicted in FIG. 4. A sequence of image frames 30 from video camera 4 first passes through a spatio-temporal filtering module 32 which accentuates image locations which change with time. Spatio-temporal filtering module 32 identifies the locations of motion by performing a differencing operation on successive frames of the sequence of image frames. In the output of spatio-temporal filtering module 32, a moving person "lights up", whereas the other areas of the image, containing no motion, appear black.
The spatio-temporal filtered image passes to a thresholding module 34 which produces a binary motion image identifying the locations of the image for which the motion exceeds a preselected threshold. That is, it locates the areas of the image containing the most motion. In all such areas, the presence of a person is postulated.
A motion analyzer module 36 analyzes the binary motion image to watch how "motion blobs" change over time to decide if the motion is caused by a person moving and to determine head position. A few simple rules are applied, such as "the head is the small upper blob above a larger blob (i.e., the body)", and "head motion must be reasonably slow and contiguous" (i.e., heads are not expected to jump around the image erratically).
The motion image also allows for an estimate of scale. The size of the blob that is assumed to be the moving head determines the size of the subimage to send to face recognition module 10 (see FIG. 1). This subimage is rescaled to fit the dimensions of the eigenfaces.
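A one-step sketch of that rescaling (SciPy; the target size n is whatever resolution the eigenfaces were trained at):

```python
from scipy.ndimage import zoom

def rescale_to_eigenface_size(subimage, n):
    """Resize a detected head subimage to the n-by-n eigenface resolution."""
    return zoom(subimage, (n / subimage.shape[0], n / subimage.shape[1]))
```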
Face space may also be used to locate faces in single images, either as an alternative to locating faces from motion (e.g. if there is too little motion or many moving objects) or as a method of achieving more precision than is possible by use of motion tracking alone.
Typically, images of faces do not change radically when projected into the face space, whereas the projections of non-face images appear quite different. This basic idea may be used to detect the presence of faces in a scene. To implement this approach, the distance $\epsilon$ between the local subimage and face space is calculated at every location in the image. This calculated distance from face space is then used as a measure of "faceness". The result of calculating the distance from face space at every point in the image is a "face map" $\epsilon(x,y)$ in which low values (i.e., the dark areas) indicate the presence of a face.
Direct application of Eq. 10, however, is rather expensive computationally. A simpler, more efficient method of calculating the face map ε(x,y) is as follows.
To calculate the face map at every pixel of an image $I(x,y)$, the subimage centered at that pixel is projected onto face space and the projection is then subtracted from the original subimage. To project a subimage $\Gamma$ onto face space, one first subtracts the mean image (i.e., $\Psi$), resulting in $\Phi = \Gamma - \Psi$. With $\Phi_f$ being the projection of $\Phi$ onto face space, the distance measure at a given image location is then

$$\epsilon^2 = \| \Phi - \Phi_f \|^2 = \Phi^T \Phi - \Phi_f^T \Phi_f, \qquad (11)$$

since $\Phi_f \perp (\Phi - \Phi_f)$. Because $\Phi_f$ is a linear combination of the eigenfaces ($\Phi_f = \sum_i \omega_i u_i$) and the eigenfaces are orthonormal vectors,

$$\Phi_f^T \Phi_f = \sum_{i=1}^{L} \omega_i^2, \qquad (12)$$

so that

$$\epsilon^2(x,y) = \Phi^T(x,y)\, \Phi(x,y) - \sum_{i=1}^{L} \omega_i^2(x,y), \qquad (13)$$

where $\epsilon(x,y)$ and $\omega_i(x,y)$ are scalar functions of image location, and $\Phi(x,y)$ is a vector function of image location.

The second term of Eq. 13 is calculated in practice by a correlation with the $L$ eigenfaces:

$$\omega_i(x,y) = u_i^T \left( \Gamma(x,y) - \Psi \right) = (u_i \otimes I)(x,y) - u_i^T \Psi, \qquad (14)$$

where $\otimes$ is the correlation operator. The first term of Eq. 13 becomes

$$\Phi^T(x,y)\, \Phi(x,y) = \Gamma^T(x,y)\, \Gamma(x,y) - 2\, \Psi^T \Gamma(x,y) + \Psi^T \Psi. \qquad (15)$$

Since the average face $\Psi$ and the eigenfaces $u_i$ are fixed, the terms $\Psi^T \Psi$ and $u_i^T \Psi$ may be computed ahead of time.
Thus, the computation of the face map involves only $L + 1$ correlations over the input image and the computation of the first term $\Gamma^T(x,y)\, \Gamma(x,y)$. This term is computed by squaring the input image $I(x,y)$ and, at each image location, summing the squared values of the local subimage.
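A direct sketch of this face-map computation (SciPy correlations; here the mean face and eigenfaces are handled as N-by-N images rather than vectors, and the local sums of squares are computed with a correlation against an all-ones kernel):

```python
import numpy as np
from scipy.signal import correlate2d

def face_map(image, psi_img, eigenface_imgs):
    """Sketch of Eqs. 13-15: epsilon^2(x,y) from L + 1 image correlations."""
    img = image.astype(float)
    ones = np.ones_like(psi_img)
    # Gamma^T Gamma at each location: local sum of squared pixel values.
    gamma_sq = correlate2d(img ** 2, ones, mode="same")
    # Correlation of the image with the mean face (the "+1" correlation).
    gamma_psi = correlate2d(img, psi_img, mode="same")
    psi_sq = float(np.sum(psi_img ** 2))           # Psi^T Psi, precomputable
    phi_sq = gamma_sq - 2.0 * gamma_psi + psi_sq   # Phi^T Phi, Eq. 15
    omega_sq = np.zeros_like(phi_sq)
    for u in eigenface_imgs:                       # omega_i via Eq. 14
        w = correlate2d(img, u, mode="same") - float(np.sum(u * psi_img))
        omega_sq += w ** 2
    return phi_sq - omega_sq                       # epsilon^2(x,y), Eq. 13
```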
Experiments reveal that recognition performance decreases quickly as the head size, or scale, is misjudged. It is therefore desirable that the head size in the input image be close to that of the eigenfaces. The motion analysis can give an estimate of head size, from which the face image is rescaled to the eigenface size.
Another approach to the scale problem, which may be separate from or in addition to the motion estimate, is to use multiscale eigenfaces, in which an input face image is compared with eigenfaces at a number of scales. In this case the image will appear to be near the face space of only the closest scale eigenfaces. Equivalently, the input image (i.e., the portion of the overall image selected for analysis) can be scaled to multiple sizes and the scale which results in the smallest distance measure to face space used.
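A sketch of the latter variant, rescaling candidate crops of one region to the eigenface resolution and keeping the scale that lands nearest to face space (illustrative names, reusing the helpers sketched earlier):

```python
import numpy as np

def pick_best_scale(region_crops, psi, U, n):
    """Return the candidate crop whose projection lies nearest face space."""
    best_crop, best_dist = None, np.inf
    for crop in region_crops:                  # same region, several scales
        g = rescale_to_eigenface_size(crop, n).reshape(-1)
        phi = g - psi
        omega = U @ phi                        # project onto face space
        dist = float(np.sum((phi - U.T @ omega) ** 2))   # Eq. 10
        if dist < best_dist:
            best_crop, best_dist = crop, dist
    return best_crop, best_dist
```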
Other embodiments are within the following claims. For example, although the eigenfaces approach to face recognition has been presented as an information processing model, it may also be implemented using simple parallel computing elements, as in a connectionist system or artificial neural network.
|Cited patent||Filing date||Publication date||Applicant||Title|
|US4636862 *||Feb 7, 1985||Jan 13, 1987||Kokusai Denshin Denwa Kabushiki Kaisha||System for detecting vector of motion of moving objects on picture|
|US4651289 *||Jan 24, 1983||Mar 17, 1987||Tokyo Shibaura Denki Kabushiki Kaisha||Pattern recognition apparatus and method for making same|
|US4752957 *||Sep 7, 1984||Jun 21, 1988||Kabushiki Kaisha Toshiba||Apparatus and method for recognizing unknown patterns|
|US4838644 *||Sep 15, 1987||Jun 13, 1989||The United States Of America As Represented By The United States Department Of Energy||Position, rotation, and intensity invariant recognizing method|
|US4858000 *||Sep 14, 1988||Aug 15, 1989||A. C. Nielsen Company||Image recognition audience measurement system and method|
|US4926491 *||Jun 6, 1988||May 15, 1990||Kabushiki Kaisha Toshiba||Pattern recognition device|
|US4930011 *||Aug 2, 1988||May 29, 1990||A. C. Nielsen Company||Method and apparatus for identifying individual members of a marketing and viewing audience|
|US4998286 *||Jan 20, 1988||Mar 5, 1991||Olympus Optical Co., Ltd.||Correlation operational apparatus for multi-dimensional images|
|US5031228 *||Sep 14, 1988||Jul 9, 1991||A. C. Nielsen Company||Image recognition system and method|
|1||L. Sirovich et al., "Low-dimensional procedure for the characterization of human faces", Optical Society of America, 1987, pp. 519-524.|
|U.S. Classification||382/118, 382/204, 382/201|
|International Classification||G07C9/00, A61B5/117, H04N7/28, H04H60/59, G06K9/62, H04N7/26, G06K9/00, H04H60/45, H04H1/00, H04H60/56|
|Cooperative Classification||H04N19/94, H04N19/20, H04N19/00, H04N21/42201, G06K9/00241, H04H60/45, H04H60/56, G06K9/6232, G06K9/6247, G06K9/00228, A61B5/1176, G07C9/00158, H04H60/59, G06K9/00275|
|European Classification||H04N21/422B, G07C9/00C2D, A61B5/117F, H04N7/26, G06K9/00F2H, H04N7/28, G06K9/00F1H, G06K9/62B4P, G06K9/62B4, H04N7/26J4, G06K9/00F1, H04H60/45, H04H60/56|
|Aug 24, 2000||FPAY||Fee payment|
Year of fee payment: 8
|Aug 24, 2000||SULP||Surcharge for late payment|
|Jun 30, 2005||FPAY||Fee payment|
Year of fee payment: 12
|Jun 30, 2005||SULP||Surcharge for late payment|
|Jul 25, 2005||PRDP||Patent reinstated due to the acceptance of a late maintenance fee|
Effective date: 19990112