SYSTEM AND METHOD FOR DETECTING A FACE
BACKGROUND OF THE INVENTION
This application claims the priority of Korean Patent Application No. 10-2002-0067974 filed on Nov. 4, 2002, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of Invention
The present invention relates to a system and method for detecting a face, and more particularly, to a system and method for detecting a face that is capable of quickly and correctly deciding, using an algorithm for determining whether a face is occluded, whether an input facial image is occluded.
2. Description of the Related Art
As the information society advances, automatic teller machines have rapidly come into wide use. Financial crimes in which money is illegally withdrawn using credit cards or passwords of other persons have also increased. To deter these financial crimes, CCTV cameras are installed in automatic teller machines to identify criminals. However, criminals often commit these crimes while wearing sunglasses or caps so as not to be photographed by the CCTV, making it difficult to identify the faces of the criminals.
Korean Patent No. 0306355 (entitled "User identification system and automatic teller machine using the same") discloses a user identification system for identifying the face of a user by acquiring a facial image of the user, obtaining a facial region through filtering of only a skin color region, extracting an eye position from the obtained region, setting a range of a mouth and nose on the basis of the eye position, and checking whether confirmable characteristic points exist. However, the mouth and nose positions are extracted on the basis of the previously extracted eye position. Thus, if the eye position is not extracted, it is difficult to extract the mouth and nose positions. Further, the image data of the user is not searched; rather, the presence of facial components (e.g., eyes, nose, mouth, and the like) is merely checked to identify the user. Thus, the face of a user cannot be correctly detected.
In addition, Korean Patent No. 0293897 (entitled "Method for recognizing face of user of bank transaction system") discloses a method for recognizing the face of a user, which comprises the steps of determining facial candidate entities matching an input user image using chain tracking, extracting contour points and comparing brightness values of the contour points to search for graphics corresponding to the eyes and mouth of the user, calculating a recognition index for the face, extracting only a single facial candidate entity, and comparing the recognition index for the face of the extracted facial candidate entity with a reference recognition index for the face. However, the presence of the eyes and mouth is determined according to whether the contour points of the eyes or mouth have been extracted. Thus, there is a possibility of misrecognizing sunglasses as eyes in a case where the sunglasses a user wears are similar to the eyes of the user in shape and size.
Furthermore, since conventional face detection technology uses color images, it is difficult to detect feature points for the nose and mouth of the user under varying illumination. Consequently, the feature points of the nose and mouth of the user may not be detected. Therefore, there is also a problem in that a legitimate user may be recognized as an illegal user.
SUMMARY OF THE INVENTION
The present invention is conceived to solve the aforementioned problems. It is an object of the present invention to provide a system and method for detecting a face of a user that is capable of quickly and correctly deciding whether an input facial image is occluded, even when a variety of facial images captured under different conditions are input.
According to an aspect of the present invention for achieving the object, there is provided a system and method for detecting a face of a user, wherein it can be determined whether a facial image is occluded by extracting eigenvectors and weights from an input facial image of the user using PCA (Principal Component Analysis) and applying SVM (Support Vector Machines) to an algorithm for determining whether the image is occluded.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other objects and features of the present invention will become apparent from the following description of an embodiment given in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic block diagram of a system for detecting the face of a user according to the present invention;
FIG. 2 is a flowchart illustrating the method for detecting the face of a user according to the present invention;
FIG. 3 is a flowchart illustrating a method for authenticating a facial image to which the method for detecting the face according to the present invention is applied;
FIG. 4 is a diagram illustrating an embodiment of the present invention in which the size of the facial image is normalized; and
FIGS. 5a and 5b are pictures illustrating examples of a plurality of training images.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
An embodiment of the present invention will now be described with reference to the accompanying drawings.
FIG. 1 is a block diagram of a system for detecting the face of a user according to the present invention. The system for detecting the face of a user comprises a memory unit 100, a facial image recognition unit 200, and a facial image decision unit 300. The memory unit 100 stores eigenvectors and weights extracted from a plurality of training images, which are classified into a normal facial image class and an occluded facial image class. The normal facial image class is obtained by extracting eigenvectors and weights from normal facial images to which illumination changes, countenance changes, beards, scaling, shift and rotation changes are applied, and storing the extracted eigenvectors and weights; the occluded facial image class is obtained by extracting eigenvectors and weights from partly occluded facial images to which illumination changes, countenance changes, scaling, shift and rotation changes are applied, and storing the extracted eigenvectors and weights. That is, the normal facial images are identifiable facial images, whereas the occluded facial images are unidentifiable because the faces are partly occluded with sunglasses, masks, mufflers and the like. Herein, the normal and occluded facial image classes are used to derive an occluding-decision algorithm, so as to decide whether a user's facial image has been occluded. The normal facial image class has a class value of 1, whereas the occluded facial image class has a class value of -1. A process of deriving the algorithm for deciding whether facial images are occluded will be explained in detail with reference to the following mathematical expressions.
The facial image recognition unit 200 functions to extract eigenvectors and weights from an input facial image using PCA so that normal and occluded faces can be classified using the Support Vector Machines, and comprises a monochrome part 210, a facial image detection part 220, a facial image normalization part 230, a facial image division part 240, and an eigenvector/weight extraction part 250.
The monochrome part 210 converts an input color image into a monochrome image. The reason for this is that, since color and brightness components are mixed together in a color image configured in the RGB (Red, Green, Blue) mode, errors due to brightness changes may be generated upon extraction of the eigenvectors.
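As a minimal sketch of such a conversion (assuming an 8-bit RGB input; the standard luminance weights used below are a common choice, not values given in the patent):

```python
import numpy as np

def to_monochrome(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to a single-channel brightness image."""
    # Standard ITU-R BT.601 luminance weights; the patent only requires that
    # the color components be collapsed into a brightness-only image.
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)
```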
The facial image detection part 220 uses a technique for dividing the input image into background and facial regions, and detects the facial region from the input image using Gabor filters. Here, the method of detecting a facial region using the Gabor filters is performed by applying sets of Gabor filters having various directionalities and frequencies to the input image and then detecting the facial region in accordance with the response values thereof.
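A sketch of this general idea using OpenCV's Gabor kernels is shown below; the particular orientations, wavelengths, kernel size and the thresholding of the response map are illustrative assumptions, since the patent does not give concrete filter parameters.

```python
import cv2
import numpy as np

def gabor_response_map(gray: np.ndarray) -> np.ndarray:
    """Sum of magnitudes of Gabor responses over several orientations/wavelengths."""
    responses = np.zeros(gray.shape, dtype=np.float64)
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        for lambd in (8.0, 16.0):                         # 2 wavelengths (pixels)
            kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0,
                                        theta=theta, lambd=lambd,
                                        gamma=0.5, psi=0.0)
            responses += np.abs(cv2.filter2D(gray.astype(np.float64),
                                             cv2.CV_64F, kernel))
    return responses

def face_bounding_box(gray: np.ndarray, rel_threshold: float = 0.6):
    """Crude facial-region estimate: bounding box of strong Gabor responses."""
    resp = gabor_response_map(gray)
    mask = resp > rel_threshold * resp.max()
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()          # x0, y0, x1, y1
```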
The facial image normalization part 230 performs corrections for image brightness due to illumination, facial image size due to the distance from the camera, inclination of the facial image, and the like, in order to normalize the facial region.
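A sketch of such normalization follows, assuming the eye coordinates are already available; the target size, the eye-line alignment and the use of histogram equalization for brightness correction are illustrative choices, not steps specified in the patent.

```python
import cv2
import numpy as np

def normalize_face(gray_face: np.ndarray,
                   left_eye: tuple, right_eye: tuple,
                   out_size: tuple = (60, 60)) -> np.ndarray:
    """Correct brightness, in-plane rotation and scale of a detected facial region."""
    # Brightness correction (illumination): histogram equalization.
    eq = cv2.equalizeHist(gray_face)

    # Inclination correction: rotate so that the eye line becomes horizontal.
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    aligned = cv2.warpAffine(eq, rot, (eq.shape[1], eq.shape[0]))

    # Size correction (distance from camera): rescale to a fixed resolution.
    return cv2.resize(aligned, out_size)
```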
The facial image division part 240 divides the normalized facial region into a higher region centered on the eyes and a lower region centered on the nose and mouth. Here, the reason for dividing the facial region into the higher and lower regions is to quickly and correctly extract the eigenvectors of the respective facial components by restricting the size of the search region for each component, since eigenvectors may be extracted from a wrong region if the search region is too wide. Further, since peripheral regions are eliminated, noise components can be reduced.
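A minimal sketch of such a division, splitting the normalized face at a fixed row, is given below; the 55/45 split ratio is a hypothetical value, since the patent only states that the higher region is centered on the eyes and the lower region on the nose and mouth.

```python
import numpy as np

def divide_face(face: np.ndarray, split_ratio: float = 0.55):
    """Split a normalized facial image into an eye (higher) and nose/mouth (lower) region."""
    split_row = int(face.shape[0] * split_ratio)
    higher = face[:split_row, :]   # search region for the eyes
    lower = face[split_row:, :]    # search region for the nose and mouth
    return higher, lower
```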
The eigenvector/weight extraction part 250 extracts eigenvectors and weights of the eyes, nose and mouth, which are major components of the face, using PCA according to the divided facial regions. Here, eigenvectors of the eyes, nose and mouth can be simultaneously extracted, because the facial region in which the eyes, nose and mouth are located is restricted upon extraction of the eigenvectors.
Hereinafter, the mathematical expressions used for extracting the eigenvectors and weights using PCA will be described.
Formula 1 is used to obtain an average facial image of the normal facial images and an average facial image of the occluded facial images, respectively. Here, N is the total number of normal facial images, and M is the total number of occluded facial images.
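Formula 1 itself does not survive in this text. A standard eigenface-style formulation consistent with the surrounding description, and assumed here rather than quoted from the patent, would be:

\[
\Psi = \frac{1}{N}\sum_{i=1}^{N}\Gamma_i,
\qquad
\Psi' = \frac{1}{M}\sum_{i=1}^{M}\Gamma'_i
\qquad \text{(Formula 1)}
\]

where $\Gamma_i$ are the normal training facial images, $\Gamma'_i$ are the partly occluded training facial images, and $\Psi$, $\Psi'$ are the corresponding average facial images.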
First, a method of extracting eigenvectors and weights based on the normal facial images will be explained.
Each normal facial image $\Gamma_i$ is substituted into Formula 1 so as to obtain the average facial image $\Psi$, and a vector $\Phi_i$ is then calculated by subtracting the average facial image $\Psi$ from each facial image $\Gamma_i$. That is, $\Phi_i = \Gamma_i - \Psi$. Using the vectors $\Phi_i$ calculated as such, a covariance matrix is produced in accordance with Formula 2 below.
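Formula 2 is likewise missing from this text; the usual covariance matrix of the difference vectors, which matches the description above and is assumed here, would be:

\[
C = \frac{1}{N}\sum_{i=1}^{N}\Phi_i\Phi_i^{T} = AA^{T},
\qquad
A = [\,\Phi_1\ \Phi_2\ \cdots\ \Phi_N\,]
\qquad \text{(Formula 2)}
\]

The eigenvectors $u_k$ of $C$ are the eigenvectors referred to in the text.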
Using Formula 3, the weights $w_k$ are calculated.
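Formula 3 also does not survive; in the standard eigenface formulation, the weights are the projections of a difference vector onto the eigenvectors, which is the reconstruction assumed here:

\[
w_k = u_k^{T}\,\Phi = u_k^{T}(\Gamma - \Psi), \qquad k = 1, \ldots, K
\qquad \text{(Formula 3)}
\]

where $K$ is the number of eigenvectors retained, so that each facial region is represented by the weight vector $(w_1, \ldots, w_K)$.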
Although only a method of extracting eigenvectors and weights from the normal facial images has been described above, a method of extracting eigenvectors and weights from the partly occluded facial images is performed in the same manner as the method of extracting eigenvectors and weights from the normal facial images. Further, eigenvectors and weights are extracted from the higher and lower regions in the facial region, respectively.
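A compact sketch of Formulas 1 through 3 in Python is given below, using the common trick of taking the eigenvectors of the small $N \times N$ matrix $A^{T}A$ rather than $AA^{T}$ for efficiency; the function names and the number of retained eigenvectors are illustrative assumptions, not details from the patent.

```python
import numpy as np

def pca_train(images: np.ndarray, num_components: int = 20):
    """images: (N, D) matrix, one flattened facial region per row.
    Returns the mean image, eigenvectors (D, K) and training weights (N, K)."""
    mean = images.mean(axis=0)                       # Formula 1: average facial image
    phi = images - mean                              # Phi_i = Gamma_i - Psi
    # Eigenvectors of the small N x N matrix phi @ phi.T, mapped back to image
    # space (equivalent to the leading eigenvectors of the covariance matrix,
    # Formula 2).
    small_cov = phi @ phi.T
    eigvals, eigvecs_small = np.linalg.eigh(small_cov)
    order = np.argsort(eigvals)[::-1][:num_components]
    eigvecs = phi.T @ eigvecs_small[:, order]
    eigvecs /= np.linalg.norm(eigvecs, axis=0)       # normalize each eigenvector
    weights = phi @ eigvecs                          # Formula 3: w_k = u_k^T Phi
    return mean, eigvecs, weights

def pca_project(image: np.ndarray, mean: np.ndarray, eigvecs: np.ndarray):
    """Weights of a single flattened facial region (Formula 3)."""
    return (image - mean) @ eigvecs
```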
The facial image decision unit 300 decides whether the input facial image is occluded, through the occluding-decision algorithm that has been obtained from the training images stored in the memory unit 100. The occluding-decision algorithm, using the Support Vector Machines, is expressed as the following Formula 4.
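Formula 4 is not reproduced in this text either; the standard SVM decision function, which matches the references to $y_i$, $x_i$ and $b$ below and is assumed here, has the form:

\[
f(x) = \operatorname{sign}\!\left(\sum_{i=1}^{l} \alpha_i\, y_i\, K(x, x_i) + b\right)
\qquad \text{(Formula 4)}
\]

where $x$ is the weight vector extracted from the input facial image, $x_i$ and $y_i \in \{+1, -1\}$ are the stored training weight vectors and their class values, $\alpha_i$ are the coefficients obtained by training, $b$ is the bias, and $K(\cdot,\cdot)$ is the kernel function.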
If the value obtained from the occluding-decision algorithm in Formula 4 is 1, the facial image is decided to be a normal one. On the other hand, if the value from the algorithm is -1, the facial image is decided to be a partly occluded one. This is because the occluding-decision algorithm has been trained after the normal and partly occluded facial images stored in the memory unit 100 were assigned class values of 1 and -1, respectively.
In Formula 4 above, $y_i$, $x_i$, and $b$ are set by substituting into Formula 4 a set consisting of the class value of the normal training images (1, in the present invention) together with their eigenvectors and weights, which are stored in the memory unit 100, and another set consisting of the class value of the partly occluded facial images (-1, in the present invention) together with their eigenvectors and weights, respectively. These values may vary as the training images stored in the memory unit 100 are updated.
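As an illustrative sketch (not the patent's implementation), training and applying such a classifier with scikit-learn could look as follows; the kernel choice and the way the higher- and lower-region weights are concatenated into one feature vector per image are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_occlusion_classifier(normal_weights: np.ndarray,
                               occluded_weights: np.ndarray) -> SVC:
    """normal_weights, occluded_weights: (n_samples, n_features) PCA weight
    vectors (higher and lower regions concatenated) per training image."""
    X = np.vstack([normal_weights, occluded_weights])
    y = np.hstack([np.ones(len(normal_weights)),        # class value  1: normal
                   -np.ones(len(occluded_weights))])    # class value -1: occluded
    clf = SVC(kernel="rbf")       # kernel choice is an assumption
    clf.fit(X, y)
    return clf

def is_occluded(clf: SVC, input_weights: np.ndarray) -> bool:
    """Return True if the input facial image is decided to be partly occluded."""
    return clf.predict(input_weights.reshape(1, -1))[0] == -1
```

Updating the training images stored in the memory unit 100 would then correspond to refitting the classifier on the new sets of weights, which is what causes the values in Formula 4 to vary.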