WO1998003966A2 - System for object verification and identification - Google Patents

System for object verification and identification

Info

Publication number
WO1998003966A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
person
data
verification
data sets
Prior art date
Application number
PCT/US1997/012716
Other languages
French (fr)
Other versions
WO1998003966A3 (en)
Inventor
Kedu Han
David B. Hertz
Lex Van Gelder
Original Assignee
Identification Technologies International, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Identification Technologies International, Inc.
Priority to AU38064/97A (AU3806497A)
Publication of WO1998003966A2
Publication of WO1998003966A3

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/20 Individual registration on entry or exit involving the use of a pass
    • G07C9/22 Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C9/25 Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C9/253 Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition, visually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris

Definitions

  • Figs. 3A and 3B are overview flow chart diagrams showing the process used by the enrollment station 100 in order to encode enrollment data onto the portable storage medium 70 or central database 72.
  • Each of the steps of Figs. 3A-3B is detailed in the flow charts of Figs. 4-13. Each overview step will be elaborated on below by reference to the later figures.
  • Cameras 20 provide input for the computer 40 which executes the preprocessing function 302.
  • This preprocessing function 302 is described in detail in Fig. 4.
  • the preprocessing function 302 is performed by the computer 40 in combination with a frame grabber digitizer 42.
  • the frame grabber digitizer 42 first transforms the analog signal at step 3002, then filters the digitized data at step 3004, and enhances the image at step 3006.
  • the filtering step 3004 filters out image noise using conventionally known techniques.
  • the enhancement step 3006 enhances image contrast according to the lighting conditions, using known techniques.
  • the output of these functions is the complete image 305 which is composed of a standardized noise-free, digital image matrix of 512 x 480 pixels.
  • the complete image 305 is used as an input to various other subroutines in the enrollment process, which will be described below.
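The text names the filtering and enhancement steps but not their algorithms; the Python sketch below stands in with conventional choices (a median filter and a linear contrast stretch). Only the 512 x 480 output size comes from the text, and the function and variable names are illustrative, not the patent's.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Stand-in for steps 3004-3006 on an already digitized frame.

    `frame` is assumed to be the 8-bit grayscale output of the frame
    grabber (step 3002), shaped 480 rows x 512 columns.
    """
    # Step 3004: filter out image noise (technique unspecified in the
    # text; a 3x3 median filter is one conventional choice).
    filtered = median_filter(frame, size=3)

    # Step 3006: enhance contrast (also unspecified; a linear stretch
    # to the full 0-255 range is one conventional technique).
    lo, hi = int(filtered.min()), int(filtered.max())
    stretched = (filtered.astype(float) - lo) * (255.0 / max(hi - lo, 1))

    # The result plays the role of the "complete image" 305.
    return stretched.astype(np.uint8)
```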
  • step 310 receives as its input matrix the complete image 305.
  • at step 3102, a center image is taken from the complete image.
  • the coordinates of the upper left hand corner of the center matrix are defined as (128, 120), and the coordinates of the bottom right hand corner of the center matrix are defined as (384, 360).
  • this central image is then binarized. This process results in an image where each pixel is either black or white, depending on whether the pixel exceeds or falls below a preset threshold.
  • the coordinates are chosen in the preferred embodiment to focus on the center of the image. However, if the object does not have distinguishing features in its central area, different coordinates can be used.
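A minimal sketch of this crop-and-binarize routine (Fig. 5), assuming an 8-bit grayscale complete image. The corner coordinates are from the text; the threshold value here is only a placeholder, since the text derives it from the gray level typical of irises.

```python
import numpy as np

def binarize_center(complete_image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Step 3102: crop the central region and binarize it."""
    # Corners (128, 120) and (384, 360) are given as (x, y);
    # numpy indexes rows (y) first.
    center = complete_image[120:360, 128:384]
    # White (1) above the threshold, black (0) at or below it.
    return (center > threshold).astype(np.uint8)
```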
  • This output image is then made available to the targeting procedure 320 shown in Fig. 3A of the enrollment process.
  • the targeting procedure 320 is shown in more detail in Fig. 6.
  • the purpose of the targeting procedure is to find a distinguishing feature in the object in order to detect the presence of the object and determine the location of the object in the image matrix.
  • the distinguishing features looked at are the two irises of the eyes.
  • the input to the targeting function 320 is the binarized image 3104, which is then processed by the labeling function 3202.
  • the labeling function 3202 locates and labels all areas that exhibit characteristics similar to irises.
  • the threshold set for the binarization process 3102 is set to filter out gray scale levels that are not relevant.
  • the gray scale level typically associated with irises can be used as the indicator for the threshold.
  • the output of the labeling process 3202 comprises the object labels 3204.
  • each labeled object produced at step 3204 has the XY coordinates calculated for placement in the complete image matrix 305. This provides a geometric center of each object that was labeled in the previous step.
  • at step 3206, the irises are located; they are distinguished by their contrast with the surrounding area.
  • other contrasting areas may also be labeled. In this exemplified application, such contrasting areas are, for example, nostrils or lips.
  • Step 3208 involves looking at the XY coordinates of a pair of objects and then determining whether their absolute and relative locations are valid.
  • the validation step 3208 assumes that labeled objects, such as the irises, are appropriately positioned on a face. For example, the eyes cannot be on top of each other and must fall within acceptable distances from each other. Therefore, the validate coordinate step 3208 function determines those pairs of labeled objects that can possibly be irises.
  • the calculations for iris targeting consists of comparing the XY coordinates for each iris to determine if they are within a preset distance apart and on approximately the same horizontal axis.
  • the difference in the X coordinates is measured and compared to a prestored value to make sure that the irises are located at certain specific locations.
  • the coordinates Y1 and Y2 represent the horizontal coordinates, and X1 and X2 represent the vertical coordinates.
  • in the preferred embodiment, Y2 and X2 represent the left iris coordinate, and Y1 and X1 the right iris.
  • a first calculation determines if Y2 is greater than Y1. In the second calculation, the result of Y2 minus Y1 should be greater than 40 pixels. The third calculation determines the absolute value of X1 minus X2; in the preferred embodiment, that value should be less than 16 pixels. If all three conditions are met, then at step 3208 the object's pair of irises is confirmed, and processing passes to step 3216.
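The three calculations translate directly into code. The 40- and 16-pixel limits are the stated preferred values, and the coordinate convention (Y horizontal, X vertical) follows the bullets above.

```python
def valid_iris_pair(x1: int, y1: int, x2: int, y2: int) -> bool:
    """Step 3208: geometric validation of a candidate iris pair.

    (X1, Y1) is the right iris and (X2, Y2) the left; Y runs along the
    horizontal axis and X along the vertical axis, as in the text.
    """
    return (y2 > y1                  # left iris lies past the right one
            and (y2 - y1) > 40       # irises more than 40 pixels apart
            and abs(x1 - x2) < 16)   # roughly on the same horizontal axis
```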
  • an output message is sent at step 3212 to monitor 50 (Fig. 1) stating that the process has been unable to target the eyes.
  • a new image is acquired again and reprocessed, beginning at step 302.
  • the next step 3216 is to validate the object. This step compares the candidate spots with the eye template to determine whether the cross-correlation coefficient fits. If so, it confirms that the system successfully targeted the eyes.
  • one input to the validate object step 3216 is determined at step 315.
  • This input is an average eye template value, which is an average of the iris position on the face across a wide population.
  • the other input, determined at step 305, which was discussed previously, is the complete image.
  • the complete image is a reference standardized noise-free image matrix of 512 x 480 pixels, 8-bit gray scale.
  • the validate object step 3216 performs a gray scale correlation using the complete image 305 and the average eye template 315 and the valid object XY coordinates. This complete image is compared to the average eye template at the valid XY coordinates. If the maximum correlation is above a preset threshold, the object is identified as an eye.
  • the correlation coefficient r of two areas {a_ij} and {b_ij} is calculated as r = Σ_ij (a_ij − ā)(b_ij − b̄) / sqrt( Σ_ij (a_ij − ā)² · Σ_ij (b_ij − b̄)² ), where a_ij and b_ij are the pixels of the two areas and ā and b̄ are their mean values.
  • the threshold of correlation is 0.9.
  • the outputs of this comparison are two "valid" iris values with the associated XY coordinates in the complete image matrix 305.
  • the outputted values are provided at 3218.
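A sketch of the gray-scale correlation test, using the formula and the 0.9 threshold above. How the comparison window is extracted around the candidate coordinates is an assumption; the text leaves it implicit.

```python
import numpy as np

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Correlation coefficient of two equally sized areas."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def validate_object(complete_image, template, row, col) -> bool:
    """Step 3216: compare the average eye template 315 against the
    complete image 305 at the candidate coordinates (bounds checks
    omitted for brevity)."""
    h, w = template.shape
    window = complete_image[row - h // 2:row - h // 2 + h,
                            col - w // 2:col - w // 2 + w]
    return correlation(window, template) > 0.9   # stated threshold
```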
  • the system retrieves the calculated unit distance/center point by initiating the process set forth at step 325.
  • a detailed flow chart of this process is shown in Fig. 7.
  • the calculate unit distance and center point routine 325 establishes a unit distance and a center point in the image based on the iris coordinates provided by the targeting step 320.
  • the unit distance (UD) equals ((X1 − X2)² + (Y1 − Y2)²)^(1/2), i.e., the Euclidean distance between the two iris coordinates.
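In code this is a short helper. The text does not spell out the center-point formula, so taking (CX, CY) as the midpoint between the two iris coordinates is an assumption.

```python
import math

def unit_distance_center(x1, y1, x2, y2):
    """Step 325: unit distance (UD) and center point (CX, CY)."""
    ud = math.hypot(x1 - x2, y1 - y2)      # Euclidean iris separation
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # midpoint (assumed)
    return ud, cx, cy
```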
  • the next step of the enrollment process shown in Fig. 3A is to define an area of interest at step 330.
  • the area of interest procedure 330 is shown in detail in the flow chart diagram of Figure 8.
  • the function of step 3301 is to define the areas of interest on the object in relation to the unit distance (UD) and center point values (CX and CY).
  • the areas of interest are predetermined depending on the object to be identified. In the preferred embodiment, eight areas of interest have been selected: a one-dimensional horizontal bar on the forehead, a one-dimensional vertical bar over the center of the face, two-dimensional right and left eye sections, two-dimensional right and left eyebrow sections, and two-dimensional right and left cheek sections.
  • the areas of interest for a face in the preferred embodiment are thus dissected into two one-dimensional areas and six two-dimensional areas of interest (see Fig. 20).
  • Step 335 resizes the area of interest to a standard pixel size.
  • the standard pixel size for the one-dimensional pixel area of interest is 8 x 64 pixels.
  • for the two-dimensional areas of interest, the standard pixel size is 64 x 64 pixels.
  • the purpose of this normalization procedure step is to standardize the input to the transform procedures.
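A sketch of this resize step. The patent does not name an interpolation method, so simple nearest-neighbor resampling is assumed here.

```python
import numpy as np

def resize_area(area: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Step 335: resize an area of interest to its standard size
    (8 x 64 for one-dimensional bars, 64 x 64 for two-dimensional
    areas) using nearest-neighbor resampling."""
    h, w = area.shape
    rows = np.arange(out_h) * h // out_h   # source row per output row
    cols = np.arange(out_w) * w // out_w   # source column per output column
    return area[rows[:, None], cols]

# e.g., eye_std = resize_area(eye_area, 64, 64)
```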
  • Step 340 shown in Fig. 3A then performs several transforms, each of which is applicable to particular areas of interest.
  • One of these transform processes is step 342.
  • step 342 applies transforms to the one-dimensional pixel arrays (representing the eight areas of interest) outputted from the resized-areas-of-interest step.
  • the transforms employed include fast Fourier transforms (FFTs) and the discrete cosine transform (DCT), described next.
  • each 64 x 64 pixel array is divided into 64 separate pixel arrays of 8 x 8 pixels at step 3440. Then each 8 x 8 pixel array is compressed using the DCT at step 3442. The output of the DCT for each 8 x 8 pixel array is a transformed array with the most significant cell in the upper left hand corner. Using all the 8 x 8 transformed arrays, ten 1 x 64 vector arrays of the most significant cells are then created at step 3444. Other techniques can be employed, such as edge detection, Kohonen's, and/or geometrical analysis. Step 346 in Fig. 3A depicts these other alternative transforms, which can be used to compress and analyze identified areas of interest.
  • the most significant cells of each of the 64 transformed arrays comprise the first 1 x 64 vector array, the second most significant cells comprise the second 1 x 64 array, and so on. The result is that each 64 x 64 pixel area of interest is transformed into ten 1 x 64 vector arrays of the most significant transformed cells. These arrays are then sent to the coding routine 350.
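A sketch of steps 3440-3444 using SciPy's DCT. The text says only that the most significant cell lands in the upper left corner of each transformed block, so ordering cells along anti-diagonals from that corner (a zigzag-like order) is an assumption.

```python
import numpy as np
from scipy.fft import dctn

def significance_order(n: int = 8):
    """(row, col) pairs of an n x n block ordered from the upper-left
    corner outward; a stand-in for 'most significant' DCT cells."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda ij: (ij[0] + ij[1], ij[0]))

def dct_vectors(area64: np.ndarray, n_vectors: int = 10) -> np.ndarray:
    """Split a 64 x 64 area into 64 blocks of 8 x 8 (step 3440),
    DCT each block (step 3442), and build ten 1 x 64 vectors where
    vector k holds the k-th most significant cell of every block
    (step 3444)."""
    order = significance_order(8)[:n_vectors]
    blocks = [dctn(area64[r:r + 8, c:c + 8], norm='ortho')
              for r in range(0, 64, 8) for c in range(0, 64, 8)]
    return np.array([[blk[i, j] for blk in blocks] for i, j in order])
```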
  • each layer can be binarized, so that if a cell's coefficient is greater than zero, then the value for that cell is equal to one. If the cell's value is less than zero, then its binarized value is equal to zero.
  • relatively few bytes are necessary for multiple layers. For example, if each layer is 8 x 16 bytes, then the binarization will create an 8 x 16 bit layer, i.e., 16 bytes. For a 6-layer image, for example, 96 8-bit bytes (6 x 16) will be created for the captured image.
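A sketch of the layer binarization and packing; numpy's packbits reproduces the byte count given above (an 8 x 16 bit layer packs into 16 bytes, so six layers occupy 96 bytes).

```python
import numpy as np

def binarize_layer(layer: np.ndarray) -> bytes:
    """Binarize one coefficient layer (cell > 0 -> 1, else 0) and
    pack the bits into bytes."""
    bits = (layer > 0).astype(np.uint8)
    return np.packbits(bits).tobytes()   # 8 x 16 bits -> 16 bytes
```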
  • Fig. 12 sets forth routine 350 in more detail.
  • the inputs to the coding routine are the sixty-two 1 x 64 vector arrays produced by the transform routine at step 340 (Fig. 3A). For each one-dimensional area of interest, one 1 x 64 vector array is inputted; for each two-dimensional area of interest, ten 1 x 64 vector arrays are inputted. Therefore, in the preferred embodiment, 62 1 x 64 vector arrays are inputted to routine 350.
  • the other input 355 is the eigenspace.
  • the use of eigenspaces is well known in the art as a method for relating the characteristics of an individual observation to a sample of the general population.
  • the first coding step 3502 calculates residuals of the vectors.
  • the residuals are the differences between the sixty-two vectors and the mean vectors estimated for a general population. Next, these residuals are projected into their sixty-two separate eigenspaces, one per parameter. The result of this process provides the two most significant coordinates, per parameter, in their respective eigenspaces. In total, 124 coordinates are calculated.
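A sketch of the residual-and-projection step for a single parameter, assuming the population mean vector and eigenvectors (largest eigenvalue first) are available from an offline eigenspace estimation.

```python
import numpy as np

def eigen_coordinates(vector: np.ndarray, mean_vector: np.ndarray,
                      eigenvectors: np.ndarray) -> np.ndarray:
    """Step 3502: form the residual against the population mean and
    keep the two most significant eigenspace coordinates. Applied to
    all 62 vectors, this yields the 124 coordinates."""
    residual = vector - mean_vector          # 1 x 64 residual
    return eigenvectors[:, :2].T @ residual  # two coordinates
```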
  • Process step 3504, which is repeated several times to ensure a statistically appropriate sampling of the enrollment images, calculates the mean and standard deviation of the 124 parameter coordinates generated at step 3502.
  • Step 3508 evaluates the coordinates with the smallest standard deviation and highest coefficient with the average of the population. Based on those criteria, the coordinates and their respective weights are then passed to the encryption process 370.
  • the encryption routine 370 of Fig. 3B is shown in detail in the flow chart of Fig. 13.
  • Such a routine is well known in the art.
  • the encryption algorithm shown at step 3702 determines usable parameters according to encryption criteria 3704 which are related to the mean and the standard deviation of the parameter coordinates.
  • the result is the encryption key and verification data which are written at step 3706 onto the portable storage 70.
  • a code or any other technique well known in the art of recording information can be used. Therefore, the card 70 contains the coded information pertaining to the object that was enrolled. The enrollment process is now complete.
  • Fig. 14A & Fig. 14B show an overview of the verification process using the verification station hardware shown in Fig. 2. Most of the procedures in the verification process are similar to the procedures previously discussed regarding the enrollment process. Thus, a detailed explanation is reserved for those processes that differ. A detailed description of the verification steps is set forth in Figs. 15-19.
  • a prerequisite to the verification process 400 is for the enrollment process to be repeated, up to a certain point.
  • the person that needs to be verified would go through steps 302 (preprocessing) through 350.
  • the output of step 350 in the verification process provides parameter values corresponding to the images of the person or object to be verified.
  • card 70, which contains the data from the enrollment process, is inserted into a reader device which then decrypts the magnetic data, yielding process control data (Fig. 16, step 410) and parameter values that correspond to each area of interest (Fig. 17, process 420).
  • the process control data instructs the machine on how to accomplish verification.
  • the parameter values determine what is to be verified.
  • the parameter values are compared to the output of the coding step 350 in the verification process (Fig. 18).
  • one verification methodology, for example, can rely on the Hamming distance between the enrolled image and the image to be verified.
  • the image vector stored in the card, or other non-volatile storage media, is lined up bit-by-bit with the generated image vector.
  • the bits for each vector are compared. For different bits, a value of "1" is generated, for identical bits, a "0".
  • a Hamming distance is then generated as: HD = (1/N) Σ_{i=1..N} (a_i ⊕ b_i), where a_i and b_i are the corresponding bits of the stored and generated vectors, ⊕ denotes the bit comparison described above, and N is the total length of the vector.
  • the HD value can be used as a threshold value, from which system sensitivity can be varied. If the accept value is set at around, for example, .23, other sensitivities for retest or for reject can also be set. If, for example, accept is an HD of .23, retest is .24 to .74, and reject is .75 or greater, then it is possible that over time the retests will migrate in either direction (i.e., accept or reject).
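A sketch of the bit-level comparison and the example accept/retest/reject bands; the byte-level storage format of the vectors is an assumption.

```python
import numpy as np

def hamming_decision(stored: bytes, generated: bytes):
    """Align the two image vectors bit by bit, compute the normalized
    Hamming distance, and classify with the example bands from the
    text (accept <= .23, retest .24-.74, reject >= .75)."""
    a = np.unpackbits(np.frombuffer(stored, dtype=np.uint8))
    b = np.unpackbits(np.frombuffer(generated, dtype=np.uint8))
    hd = float(np.count_nonzero(a ^ b)) / a.size   # differing bits / N
    if hd <= 0.23:
        return 'accept', hd
    return ('retest', hd) if hd < 0.75 else ('reject', hd)
```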
  • Figs. 21-27 respectively illustrate a micro-camera assembly and LED lighting apparatus which provide numerous operational advantages both to the various embodiments of the invention, as well as to any other known image enrollment/recognition systems.
  • Fig. 21 illustrates a front-view of an array of light emitting diodes ("LEDs") 2102 located along the same plane of a plate 2104.
  • the arrangement of the LED's has a specific size and intensity to optimize the lighting of the target and the capture of the image.
  • a configuration is shown for maximizing iris capture.
  • the LED's are designed to light the iris at a lower visible infra-red spectrum, rather than at the "heat detecting" spectrum.
  • the spectrum tends to have a wavelength of approximately 880 nm, although other low visible spectra are considered applicable. Infra-red spectrum light has been found to be optimal, since flash lighting is distracting, if not painful.
  • infra-red at low levels standardizes facial fill so that all features of the face are equally exposed.
  • a further advantage to low level infra-red is that, when combined with an appropriate filter, ambient light is cut out altogether. Consequently, image wash-out is avoided.
  • the higher-spectrum heat-detecting level has been found to be less accurate in measuring biometric characteristics.
  • an array of nine LED's 2106 is arranged in a square that is angled at 45° relative to the horizontal axis 2108 of plate 2104.
  • Fig. 22 is a transparent perspective view of the microcamera device 2200 incorporating the aforedescribed LED array 2100.
  • the device includes four basic elements: a micro-camera lens 2202, a microcamera circuit board 2204, the aforedescribed IR LED array 2100 and the LED circuit board 2206. These four elements are contained in a housing 2208.
  • the housing 2208 is designed so that the lens and LED array are held in a potting material in order that the microcamera unit may be contained and sealed. As a consequence, the microcamera can be used in underwater applications without substantial water leakage to circuit boards 2204 and 2206.
  • the potting has a sufficient circumferential clearance around the lens element 2202 in order to allow the lens to freely rotate.
  • the top surface of the housing 2208 contains a recess 2210, the top surface of which is co-planar with the top surfaces of the lens 2202 and LEDs 2102. Further, a pair of flanges are arranged parallel to each other and to the longitudinal axis of housing 2208, so that a flat filter element (not shown), sized to fit between the flanges, can slide across the top surface 2210 and be held in place by the flanges.
  • the filter comprises a sheet of mirrored glass or plastic that passes near infra-red light. The filter is thus able to cut off the visible light spectrum.
  • the mini camera housing includes a communications port 2220 which provides the image output to an external frame grabber.
  • the port 2220 also connects to an external power supply for the mini-camera.
  • the port 2220 may use any optimal wiring configuration; in this embodiment it is a 9-pin DIN connector that can connect to the I/O port of a PC.
  • the camera device 2200 has no potting.
  • a wall 2222 would be placed between the lens 2202 and LED's 2102 to avoid direct reflection on the filter by the LED's.
  • the camera lens 2202 and camera circuit 2204 are manufactured by Sony Corp. as a 1/3 CCD, 12 VDC Board Camera, Model No. UL-BC460/T8.
  • a schematic circuit board layout 2300 is shown for the LED array 2100 (elements D1-D8) .
  • the lighting for diodes D1-D8 is continuous but has a quick "on-off" cycle to cover a video view. This on-off cycle is approximately 1/30th of a second.
  • the flash component of the video view period is 1/7000th of a second. Since the period of lighting is so brief, the flash and the lighting exposure render sensitivity to movement of the subject practically irrelevant. Flash nonetheless is essential since, in security applications, movement of the subject occurs frequently. However, the flash can be changed to a continuous lighting mode, if desired.
  • Each of the IR LED's is a focused beam diode, which improves efficiency and also reduces power consumption.
  • pin connections 2301 are adapted to connect directly into a personal computer I/O port.
  • Fig. 24 is an illustration of the circuitry supporting the camera electronics 2400.
  • a constant power source of about 100 milliamps is provided.
  • a 12 volt power supply is used along with a 5V control power supply.
  • as shown in FIGS. 21-24, a micro-camera arrangement for image capture is created whereby the lighting is located below the camera. Moreover, the position at which the lighting sits below the camera is critical, since a subject farther than 3 feet away from the lens will not be captured. Placement of the camera is also sensitive, since direct sunlight, incandescent, or halogen light will wash out features. Thus any direct light into the camera is problematic.
  • Figures 25a-c are different views (front, side, and perspective) of a housing designed to contain the camera-LED unit.
  • a recess 2502 is shown in the unit, through which the entire housing 2200 can be inserted.
  • the modular plug 2220 (Fig. 22) would also be connected through cable 2504 (Fig. 25c) to the PC I/O port (not shown) .
  • the housing 2504 includes a stand 2506 which pivots about axle 2508 in the direction of arrow 2510.
  • the camera can be supported in a substantially upright position (Fig. 25(c)) when placed with the stand in an extended position on a horizontal surface.
  • Figs. 26a-c show a second embodiment of the mini-camera housing 2600.
  • the housing includes a stand 2602 which, in a closed position (as shown in Figs. 26a, 26b), completely covers the camera lens and LED's. When fully opened, however, which is accomplished by rotating the stand 2602 about axis 2604, the camera and LED light unit are fully exposed, and the unit is also supported upright by stand 2602 (Fig. 26c).
  • Figs. 27a-c are views of a third embodiment of the mini-camera housing 2700.
  • a stand 2702 is partially cut away, to expose the camera lens only.
  • the LED array and the camera 2202 are both exposed for use.
  • Fig. 28 illustrates a second or alternative embodiment of the targeting process set forth in Fig. 6 of this invention.
  • the advantage of the alternative technique is that it allows targeting without reference to fixed areas, by dynamically finding the image centers.
  • the process 2800 begins at step 2802, where a desired area is isolated and captured by the camera. A histogram for this captured area is then computed by the computer 40. The computer then dynamically determines thresholds by calculating desired threshold barriers that are preprogrammed into the computer 40. For example, high and low rejects can be set above the lowest 5% and below the highest 5%, and high and low thresholds above the bottom 45% and below the top 45%. As a consequence, when the thresholds are compared to the histogram at step 2808, a 10% middle portion of the histogram can be defined, reflecting particular gray-scale characteristics.
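A sketch of the dynamic threshold computation using the example percentages; percentile cutoffs are one direct way to realize the described histogram barriers.

```python
import numpy as np

def dynamic_thresholds(area: np.ndarray):
    """Steps 2802-2808: derive reject and threshold barriers from the
    histogram of the captured area -- rejects at the lowest and highest
    5%, thresholds 45% in from each end, isolating a 10% middle band
    of gray-scale values."""
    low_rej, low_thr, high_thr, high_rej = np.percentile(area, [5, 45, 55, 95])
    return low_rej, low_thr, high_thr, high_rej
```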
  • the below, between, and above threshold values are then binarized at binarization step 2810, as shown in Fig. 28(c).
  • the first binarization step is the threshold comparison itself, which sets values as follows:
  • Fig. 28(d) represents the true binarized area of the targeted object 2812.
  • the targeted area is then geometrically tested at step 2814 on two candidate points based on preset values which define appropriate quadrants.
  • the points (x1, y1) and (x2, y2) can be isolated based on preset template values. For example, if iris targeting is desired, eye templates can be set so that
  • an iteration loop can take three (3) images, binarize those values, average those binarized values, and store the averaged value in the portable memory. As a result of this iteration process, shown at steps 2822 and 2824, a high percentage of accuracy is achieved dynamically.

Abstract

A method and apparatus (2200) for identifying, or verifying the identity of, objects such as faces. The present system identifies an object, such as a face, from unique signals and values emitted by the attributes of components on the object, such as the eyes, eyebrows, hairline, mouth, cheeks, ears, and chin on a face (Fig. 20). The object image is analyzed to determine the different components. In turn, these components are then transformed through one of several transformations, such as geometric, cosine, or Kohonen's, to yield a unique description for each component. The data obtained from the object is then transferred either to a portable memory (Fig. 1, Numeral 70), such as a magnetic stripe card, smart card, or a one- or two-dimensional bar-code card, or to a data base (Fig. 1, Numeral 72). A decision process of comparing the two results, from the object image and from the data encoded in the ID card or data base, is subsequently performed (Fig. 2).

Description

SYSTEM FOR OBJECT VERIFICATION AND IDENTIFICATION
FIELD OF INVENTION
This invention relates generally to information storage and retrieval computer and camera systems for obtaining and storing certain data relating to specific images and providing rapid access to that data for purposes of object targeting, enrollment, identification, verification, and classification.
BACKGROUND OF THE INVENTION
Many methods have been developed for commercial, military, and medical visual inspection applications. One particular area of interest for recent development efforts has been the area of image recognition. The basic idea behind image recognition is to provide a system that captures data relating to an image area of an object or person and then compares that captured image data to information stored in a computer memory.
However, many image recognition systems have drawbacks. First, they are characteristically slow, and subject to serious recognition and/or identification errors. This is, in part, due to the background of the art. The fact that prior methods have, for example, processed the entire human face, and have used the quantitative data obtained to effect the process of identification, has resulted in very complex processor-bound systems. As a consequence, these systems are prone to error, and are cumbersome and costly. For example, the predominant facial recognition systems, which are Image Recognition, Pattern Recognition, and Retinal Recognition, all rely on a central data storage, in the form of databases, which contain either a computerized copy of the entire facial image of the persons who are to be identified, or some abstracted form of the entire face or portions of the face to be reconnected later. The data for such systems is substantial for each face. Retrieval of data for these systems is time consuming, particularly as the number of enrolled persons increases.
An example of a system exhibiting these drawbacks is Kodak's card-based facial security system. As understood, the Kodak system classifies fifty areas of the card holder's face, identifying each area with a 2-byte code. That data is then stored on the stripe of a magnetic card. The card user must then have his/her face compared with the stored facial code in order for a match to be made. A drawback to this technique is that it requires the local computer to recognize multiple areas of the face, then classify each of these areas, and then compare the instant classification to the code stored on the card. Hence, a significant processing burden is placed on the recognition station. Moreover, the Kodak system is relatively inflexible in the sense that it is limited only to those things that have been classified. Thus, for the Kodak system to operate on other objects, a whole new classification scheme needs to be developed. The classification effort is a significant, labor-intensive process.
Other known verification processes involve the comparison of large image structures stored in the system's database. The techniques used are various, but essentially work with the same data sets. The costs involved are high due to the complexity of the databases, the large size requirements for physical storage devices to contain these databases, and the necessity for fast computing speeds to process the information for real-time use.
Object recognition against a known data set is a relatively easy art for humans to master, but presents difficult problems for computers. For a technique that permits a computer to carry out a successful recognition process, the process must yield quantifiable numerical descriptors. This process of checking against the known data set would, of course, also involve a quantifiable registration and enrollment of the object. The data set generated during the enrollment process is used for checking against the generated data from the recognition and verification systems. Yet, the numerical data produced by digitized object images, such as human facial images, can be large. Once about a thousand complete images are stored, the retrieval and comparison of the data sets can become uneconomical and time-consuming. To avoid these drawbacks, certain systems, such as those exemplified by U.S. Patent No. 5,161,204 or U.S. Patent No. 5,164,992, have resorted to complex classification schemes in order to extract key features and to perform discrete calculations for identification and verification. However, such schemes increase the processor overhead that is necessary to extract components from the image field. They do not necessarily reduce the storage requirements necessary to retain a database of captured faces. Thus, there is a need in the art for a verification and identification system that allows for high reliability, high accuracy, and low cost, but at the same time is efficient.
SUMMARY OF THE INVENTION
In view of the foregoing, there exists a need in the art for a system that does not attempt to develop identification and/or verification using entire object feature comparisons, but instead utilizes selected data sets from components of the object for comparing the stored data sets with input images. Such combinations of data sets are to be used to uniquely identify an object. The independence of the attribute data sets leads to a normative analysis that can compare the data sets generated from input with data sets from known objects or persons.
It is another object of this invention to provide a comparison process that involves polling various attribute data against a known data set. It is yet a further object of the invention to provide a system and method for obtaining, digitizing, and recording selected data sets of separate and independent component element image signals, sensed by photographic or electronic means, from a person or object, for subsequent comparison with similarly obtained component elements image signals of a person or object whereby the degree of similarity between the original and the subsequent data sets may be determined.
It is yet a further object of the invention to provide a method whereby data sets of sensed images may be recorded on portable media, such as magnetic stripe cards, magnetic discs, printed bar codes, semi-conductor devices such as smart cards, or in data bases.
It is an additional object of the invention to provide a method of selecting separate and independent component elements of persons or objects whereby the selected components provide image signals for recording quantitative structures or arrays of numbers that permit statistical identification of the individual person or object from among data sets previously obtained from the same person or similar objects.
It is yet a further object of the invention to provide a method whereby the image signal data sets of persons or objects are measured so as to permit the determination of a quantitative degree to which the objects or persons are similar to those originally measured. It is an additional object of the invention to provide a method whereby it may be determined whether component image signals obtained from a person or object, at any time subsequent to the recording of a set of such image signals from the person or object, differ, or do not differ, from the recorded component images by any, or by more than or less than, preselected quantitative values, permitting the system to indicate that the signals may be quantitatively, or qualitatively, verified, or not verified, as being from the recorded signals of the person or object.
It is a further object of this invention to provide a camera and infra-red LED array which is designed to optimize the lighting and imaging of a particular area of the object.
It is another object of the invention to provide a system whereby quantitative measures may be selected for inclusion in the digital data system for allowing the determination of chosen levels of difference between subsequently obtained component image signals and verification of such signals as being, or not being, from the recorded signals of a specific person or object.
It is another object to provide an identification and/or image verification system that uses a method of transforming various independent attributes of an object, such as, for example, a nose on a human face, into a data set for a component. For the whole object, the various components are independent of each other. The data sets are generated during registration or enrollment. These data sets are recorded on a card capable of carrying data, such as a magnetic strip or a 2d-barcode, that is issued to a holder. In effect, the advantage of the system is that instead of maintaining a central database, the identification data is now decentralized and held on the small cards. Also, in the process of transforming the object component attributes into data sets, a dramatic reduction in the data that is required to be identified is achieved. The reduction of the data required to uniquely identify complex objects, such as human faces, also achieves a faster response in the identification process.
It is a further object of the invention to obtain the data sets through various transformations that are specific to the object component attribute chosen. The comparison process involves the verification of data present on the ID card against the data sets generated from the video or other image of the object that has been registered through an input device such as an electronic camera. The comparison process utilizes a neural network that has been trained to recognize or identify a particular data set (such as human facial image component attributes). The training of the neural network is based on a process of polling the various attributes that are obtained at the identification station by the computer against the component attribute data sets that are present on the ID card. The polling assumes that certain distinctive features, if in agreement with the data sets on the ID card, can override other less distinctive attributes. However, as the security needs of the application increase, the polling process increases the required precision of the comparisons.
In one embodiment, a program is provided for reducing the characteristics of an object image, for example, a human face, to a set of characteristic numbers; later recalling that set of characteristics for comparison with an input from an external source. Keys to this set of numbers are encoded as indices and are stored on local source objects, such as 3-3/8" x 2" computer input cards. The indices, having been electronically posted to a central computer program, point to a second set of data retrieved from the computer program. A comparison of that set then occurs with a second set of similar stored data retrieved from a local source, such as the card.
These and other advantages and features of the invention, the various embodiments, and other aspects of the invention, should become more apparent from the following description, drawings and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of the enrollment station forming the present invention;
FIG. 2 is a block diagram of the verification station forming the present invention;
FIGS. 3A and 3B are flow-chart diagrams showing the functions of the enrollment process;
FIG. 4 is a flow chart diagram illustrating the one-on-one preprocessing steps of the enrollment process shown in FIGS. 3A-3B; FIG. 5 is a flow chart diagram of the binarization routine of the enrollment process shown in Figs. 3A-3B;
FIG. 6 is a flow chart diagram of a first embodiment of the targeting process for the enrollment process shown in Figs. 3A-3B;
FIG. 7 is a flow chart diagram of the UD and CP coordinate estimation functions of the enrollment process shown in FIGS. 3A-3B;
FIG. 8 is a flow chart diagram illustrating the area of interest defining function of the enrollment process shown in FIGS. 3A-3B;
FIG. 9 is a flow chart diagram illustrating the normalization procedure of the enrollment process shown in Figs. 3A-3B;
FIG. 10 is a flow chart diagram showing the transform step of the enrollment process shown in Figs. 3A-3B;
FIG. 11 is a flow chart of the second transformation process for the enrollment process of Figs. 3A-3B;
FIG. 12 is a flow chart of the output coding function for the enrollment process of Figs. 3A-3B;
FIG. 13 is a flow chart illustrating the encrypt function for determining useful parameter vectors for the enrollment process of Figs. 3A-3B; FIGS. 14A-14B are flow charts of the process for image verification of the present invention;
FIG. 15 is a flow chart of the image verification pre-processing function;
FIG. 16 is a flow chart of the image verification setup control function;
FIG. 17 is a flow chart of the image verification data decryption function;
FIG. 18 is a flow chart of the image verification parameter value comparison function; and
FIG. 19 is a flow chart of the image verification identity decision function;
FIG. 20 is a diagram showing the dimensional breakdown of the face;
FIG. 21 is a top view of the array of infra-red light emitting diodes used to light a mini-camera apparatus;
FIG. 22 is a perspective transparent diagram of the mini-camera and infra-red lighting device and components thereof of the invention;
FIG. 23 is a circuit schematic diagram for the light array of FIG. 21;
FIG. 24 is a circuit schematic diagram for the mini-camera of FIG. 21; FIGS. 25 (a) -(c) are respectively front, side and perspective views of a first embodiment of a housing for the mini-camera arrangement shown in FIG. 22;
FIGS. 26 (a) -(c) are respectively front, side and perspective views of a second embodiment of a housing for the mini-camera arrangement shown in FIG. 22;
FIGS. 27 (a)-(c) are respectively front, side and perspective views of a third embodiment of the housing for the mini-camera arrangement of FIG. 22; and
FIG. 28 is a flow chart illustrating a second embodiment of the targeting process for the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Referring now to the drawings, wherein like reference numbers refer to like parts, the first embodiment of the present system is composed of two processes and two hardware systems. The first process of this first embodiment is the enrollment process and is shown in Figs. 3A-13. The purpose of the enrollment process is to code the image of the person or object and to reduce that image to a portable form and format, such as a card with a magnetic strip, or to a database. The second process of the first embodiment is shown in Figs. 14A-15. This process is the verification process. The verification process performs the tasks of taking a picture or image of the person or object and comparing the captured image to the coded image obtained from the enrollment process. If the image of the person or object obtained during verification and the coded information obtained from the enrollment process match, identity is verified by the verification process. The enrollment process and the verification process have elements in common, which will be described further below.
The two hardware systems used in the present invention are the enrollment station, shown in Fig. 1, and the verification station, shown in Fig. 2.
Fig. 1 is a block diagram of the enrollment station 100. The object 10 represents the object that will be coded in the enrollment process for later verification during the verification process. In a preferred embodiment, the object 10 is a face of a person. However, any object with distinguishing features can be used with this invention. For example, the object under consideration may constitute a machine part under inspection, or a house, or a car key, or an automobile, or a hand. As another example, in warehouse operations, it is essential that objects are correctly labeled, stored, and shipped. The label on the object or its container carries data related to features of the object. These data permit an automatic verification that the object is correctly labeled, and therefore will be properly stored, packaged, and shipped. Object types can vary, as long as an identifiable characteristic can be extracted and stored in the enrollment process. In order to enroll the person or object, one or more video cameras 20, or other cameras as set forth in this application, equipped with lighting devices 30, are used to record an image of the object 10. The video camera or cameras 20 and lighting device 30 are ordinary devices that are readily available. In the first embodiment, a Panasonic camera, CCD Model No. GP-F602, or similar devices are used with either flash or continuous light sources. The lighting devices can in this first embodiment comprise a ring lamp such as the MICROFLASH 5000, manufactured by VIVITAR Corp., located at Chatsworth, California, or a standard photography lighting fixture. Other camera devices and lighting devices, however, can be substituted. For example, a flash device can be employed with a Panasonic camera. An example of a flash unit is the SUNPAK Softlite 1400M, manufactured by TOCAD Company, Tokyo, Japan. Alternatively, a continuous incandescent light source can be employed. This type of lighting device is particularly useful in conjunction with object identification/verification for quality control applications. Finally, as will be described in Figs. 21-27 of this application, an LED lighting device can be employed in conjunction with a mini infra-red camera.
The output of the video camera 20 is connected via a port to computer 40. The computer 40 can be a personal computer that includes a digitizer card 42 in an expansion port. The computer also includes other standard components such as a CPU 44, a random access memory 46, permanent storage in the form of a magnetic disk storage 48, a monitor 50, and a keyboard 52. In the preferred embodiment, a personal computer is used having an Intel® Pentium® microprocessor with a minimum processor clock speed of 90 MHz. The computer in this example has 32 MBytes of random access memory and at least 1 Gigabyte of static storage. However, any combination of clock speed and memory size can be used in this system. A conventional hard drive can be used, although other static storage units (e.g., writable optical drives, EPROM) are also acceptable. A Microsoft Windows® operating system is used. It should be noted that the present invention is designed so that any computer having an adequate processor clock speed (i.e., preferably a clock speed of at least 60 MHz) and a sufficient RAM size (i.e., 16 megabytes of RAM) can be employed.
The digitizer 42 used is a frame grabber card for transforming camera signals from analog to digital form. This card is inserted in the computer 40. In the preferred embodiment, the card used is a CX100 Imagination Board, manufactured by Image Nation Corporation, located in Beaverton, Oregon. However, any digitizer device for video input including direct input from digital video cameras can be used with the invention.
The output device 60 receives the data from the computer 40, which is a coded representation of the object 10. Device 60 transforms the coded data into an appropriate signal and code for placement on a central storage 72 or portable memory device 70. The output device 60 in the preferred embodiment is a card reader/writer or a printer. Examples of output devices include magnetic or optical reader/writers, smart card reader/writers, barcode printers, etc.
In the preferred embodiment, that memory device 70 is a magnetic stripe card. The output device 60 and card 70 are well known in the art. However, other portable memory devices, such as optical cards, S-RAM storage devices, or carriers containing EPROM or UVPROM memories, can also be used. In addition, known barcoding schemes can be employed to bar-code the information, where appropriate.
Fig. 2 is a block diagram of the verification station hardware 200. The data of an enrolled person or object 210, appropriately lighted by lighting 30, will be compared to the coded representation of the object on the portable memory 70. The object 210 represents the same object as the enrolled object 10 in Fig. 1. The purpose of the verification station 200 is to output a signal to an appropriate security system via external control 230. For example, the external control 230 can be an electronic access device, such as an electronic door lock, or a motor-driven gate, or any other mechanism. The components in the verification station 200 are the same as the components of the enrollment station 100, with the exception of the input device 220 and the external control unit 230.
The card 70 is inserted in the input device 220, such as a magnetic or optical card reader or a barcode reader, while the object 210 has its image recorded and processed by the computer 40. Image recordation and processing are done in an identical way as in the enrollment station discussed above. A program in the computer 40 for the verification station compares the image data of the object 210 with the coded image data on the card 70. A verification signal 230 is then outputted, indicating either a satisfactory match or a failure to match.
Figs. 3A and 3B are overview flow chart diagrams showing the process used by the enrollment station 100 in order to encode enrollment data onto the portable storage medium 70 or central database 72. Each of the steps of Figs. 3A-3B is detailed in the flow charts of Figs. 4-13. Each overview step will be elaborated on below by reference to the later figures.
Cameras 20 provide input for the computer 40 which executes the preprocessing function 302. This preprocessing function 302 is described in detail in Fig. 4.
Specifically, in Fig. 4, the preprocessing function 302 is performed by the computer 40 in combination with the frame grabber digitizer 42. The frame grabber digitizer 42 first transforms the analog signal at step 3002, then filters the digitized data at step 3004, and enhances the image at step 3006. The filtering step 3004 filters out image noise using conventionally known techniques. The enhancement step 3006 enhances image contrast according to the lighting conditions, using known techniques. The output of these functions is the complete image 305, which is composed of a standardized, noise-free digital image matrix of 512 x 480 pixels. The complete image 305 is used as an input to various other subroutines in the enrollment process, which will be described below.
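A minimal Python sketch of the preprocessing chain follows. The patent identifies steps 3004 and 3006 only as "conventionally known techniques," so the median filter and linear contrast stretch below are stand-ins chosen for illustration, not the patented implementation:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Steps 3002-3006 on an already-digitized 8-bit gray frame."""
    # Step 3004: suppress impulse noise (a 3x3 median is a stand-in
    # for the 'conventionally known' filtering technique).
    filtered = median_filter(frame, size=3).astype(np.float64)
    # Step 3006: linear contrast stretch to the full 0-255 range.
    lo, hi = filtered.min(), filtered.max()
    stretched = (filtered - lo) * 255.0 / max(hi - lo, 1.0)
    # The result plays the role of the 512 x 480 'complete image' 305.
    return stretched.astype(np.uint8)
```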
Returning to Fig. 3A, the next step in the enrollment process is the binarization process 310. This process is illustrated in detail in Fig. 5. The function of the binarization process 310 is to convert the 8-bit, 256-gray-scale pixels of the input matrix into a two-level color output. As shown in Fig. 5, step 310 receives as the input matrix the complete image 305. At step 3102, a center image is taken from the complete image. The coordinates of the upper left hand corner of the center matrix are defined as coordinates 128 x 120, and the coordinates of the bottom right hand corner of the center matrix are defined as coordinates 384 x 360. This central image is then binarized. This process results in an image where each pixel is either black or white, depending on whether the pixel exceeds or falls below a preset threshold.
As noted, the coordinates are chosen for the preferred embodiment to focus on the center of the image. However, if the object does not have distinguishing features in its central area, different coordinates can be used. This output image is then made available to the targeting procedure 320 shown in Fig. 3A of the enrollment process.
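As a sketch of the center-crop-and-threshold operation of step 310, assuming the fixed window coordinates given above and an illustrative threshold value (the patent presets the threshold to pass iris-like gray levels but does not state the number):

```python
import numpy as np

def binarize_center(complete_image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Step 310: crop the (128,120)-(384,360) center window, then binarize."""
    center = complete_image[120:360, 128:384]      # rows = Y, cols = X
    return (center > threshold).astype(np.uint8)   # 1 = white, 0 = black
```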
An alternative embodiment for the targeting process is also shown in Fig. 28. The targeting procedure 320 is shown in more detail in Fig. 6. The purpose of the targeting procedure is to find a distinguishing feature in the object in order to detect the presence of the object and determine the location of the object in the image matrix. In the preferred embodiment, the distinguishing features looked at are the two irises of the eyes. As shown in Fig. 6, the input to the targeting function 320 is the binarized image 3104, which is then processed by the labeling function 3202. In the preferred embodiment, the labeling function 3202 locates and labels all areas that exhibit characteristics similar to irises. In other words, the threshold set for the binarization process 3102 is set to filter out gray scale levels that are not relevant. Thus, the gray scale color typically associated with irises can be used as the indicator for the threshold. The output of the labeling process 3202 comprises the object labels 3204.
Next, the coordinate calculation process 3206 is activated. In this step, each labeled object produced at step 3204 has its XY coordinates calculated for placement in the complete image matrix 305. This provides a geometric center for each object that was labeled in the previous step.
Thus, in the first embodiment, in step 3206 the irises are located and are distinguished by their contrast with the surrounding area. In addition, other contrasting areas may also be labeled. These contrasting areas are, in this exemplified application, for example, nostrils or lips.
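A compact sketch of the labeling step 3202 and coordinate calculation step 3206, assuming connected-component labeling (a common choice; the patent does not name the labeling algorithm) over the dark blobs of the binarized image:

```python
import numpy as np
from scipy import ndimage

def label_and_locate(binary: np.ndarray):
    """Steps 3202-3206: label dark, iris-like blobs and return their centers.

    Irises binarize to 0 (dark), so the inverted image is labeled; the
    center of mass of each blob serves as its geometric center.
    """
    dark = binary == 0
    labels, count = ndimage.label(dark)
    centers = ndimage.center_of_mass(dark, labels, range(1, count + 1))
    # center_of_mass returns (row, col) = (Y, X); swap to (X, Y).
    return [(x, y) for (y, x) in centers]
```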
The next step in the targeting process 320 of Fig. 6 is the coordinate validation step 3208. Step 3208 involves looking at the XY coordinates of a pair of objects and then determining whether their absolute and relative locations are valid. The validation step 3208 assumes that labeled objects, such as the irises, are appropriately positioned on a face. For example, the eyes cannot be on top of each other and must fall within acceptable distances from each other. Therefore, the validate coordinate function at step 3208 determines those pairs of labeled objects that can possibly be irises.
Specifically, the calculations for iris targeting consist of comparing the XY coordinates of each iris to determine if they are within a preset distance apart and on approximately the same horizontal axis. In addition, the difference in the X coordinates is measured and compared to a prestored value to make sure that the irises are located at certain specific locations. In the preferred embodiment, the coordinates Y1 and Y2 represent the horizontal coordinates, and X1 and X2 represent the vertical coordinates. Thus, Y2 and X2 in the preferred embodiment represent the left iris coordinate, and Y1 and X1 the right iris.
A first calculation determines if Y2 is greater than Y1. In the second calculation, the result of Y2 minus Y1 should be greater than a value of 40 pixels. The third calculation determines the absolute value of X1 minus X2; in the preferred embodiment, that value should be less than 16 pixels. If all three conditions are met, then at step 3208 the object's pair of irises is confirmed, and processing passes to step 3216.
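The three checks translate directly into code. A minimal sketch, using the axis convention stated above (Y horizontal, X vertical):

```python
def valid_iris_pair(x1: float, y1: float, x2: float, y2: float) -> bool:
    """Step 3208: the three coordinate-validation checks."""
    return (y2 > y1                 # left iris lies to the right of the right iris
            and (y2 - y1) > 40      # at least 40 pixels apart horizontally
            and abs(x1 - x2) < 16)  # on roughly the same horizontal axis
```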
However, if no valid targets are found (failure to pass at least one of the above three conditions), an output message is sent at step 3212 to monitor 50 (Fig. 1) stating that the process has been unable to target the eyes. A new image is then acquired and reprocessed, beginning at step 302.
The next step, 3216, is to validate the object. This step compares the candidate spots with the eye template to determine whether the cross-correlation coefficient is satisfactory. If so, it confirms that the system has successfully targeted the eyes.
One input to the validate object step 3216 is determined at step 315. This input is an average eye template value, which is an average of the iris position on the face across a wide population. The other input, determined at step 305, which was discussed previously, is the complete image. As noted, the complete image is a reference standardized, noise-free image matrix of 512 x 480 pixels, 8-bit gray scale. The validate object step 3216 performs a gray scale correlation using the complete image 305, the average eye template 315, and the valid object XY coordinates. The complete image is compared to the average eye template at the valid XY coordinates. If the maximum correlation is above a preset threshold, the object is identified as an eye. The correlation coefficient of two areas $\{A_{ij}\}$ and $\{b_{ij}\}$ is calculated as:

$$\rho = \frac{\sum_{ij} A_{ij}\, b_{ij}}{\sqrt{\sum_{ij} A_{ij}^{2} \cdot \sum_{ij} b_{ij}^{2}}}$$

where $A_{ij}$ and $b_{ij}$ are the pixels of the two areas.
The threshold of correlation is 0.9.
The outputs of this comparison are two "valid" iris values with the associated XY coordinates in the complete image matrix 305. The outputted values are provided at step 3218.
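The correlation measure above is a direct computation. A minimal sketch for a single candidate patch against the average eye template:

```python
import numpy as np

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Step 3216: normalized correlation of a gray-scale patch `a`
    against the average eye template `b` (equal-sized 8-bit arrays)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# An eye is confirmed when the maximum correlation taken over the
# valid XY coordinates exceeds the 0.9 threshold.
```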
Returning to Fig. 3A, following the targeting process 320, the system then retrieves the calculated unit distance/center point by initiating the process set forth at step 325. A detailed flow chart of this process is shown in Fig. 7.
Referring to Fig. 7, the calculate unit distance and center point routine 325 establishes a unit distance and a center point in the image based on the coordinates of the irises provided from the targeting step 320. The unit distance is

$$UD = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2}$$

and the center point is given, for the X coordinate, by $C_X = (X_1 + X_2)/2$ and, for the Y coordinate, by $C_Y = (Y_1 + Y_2)/2$.
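In code, routine 325 reduces to a few lines:

```python
import math

def unit_distance_center(x1, y1, x2, y2):
    """Step 325: unit distance and center point from the iris coordinates."""
    ud = math.hypot(x1 - x2, y1 - y2)        # Euclidean inter-iris distance
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2    # midpoint between the irises
    return ud, (cx, cy)
```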
The next step of the enrollment process shown in Fig. 3A is to define an area of interest at step 330. The area of interest procedure 330 is shown in detail in the flow chart diagram of Figure 8.
Referring to Fig. 8, the function of step 3301 is to define the areas of interest on the object in relation to the unit distance (UD) and center point values (CX and CY). The areas of interest are predetermined depending on the object to be identified. In the preferred embodiment, eight areas of interest have been selected. These areas of interest are a one-dimensional horizontal bar on the forehead, a one-dimensional vertical bar over the center of the face, two-dimensional right and left eye sections, two-dimensional right and left eyebrow sections, and two-dimensional right and left cheek sections. Essentially, the areas of interest for a face in the preferred embodiment are dissected into two one-dimensional areas and six two-dimensional areas of interest (see Fig. 20).
Returning to Fig. 3A, once the areas of interest are determined, the next step is to normalize the areas of interest at step 335. Step 335, shown in detail in Figure 9, resizes each area of interest to a standard pixel size. In the preferred embodiment, the standard pixel size for the one-dimensional areas of interest is 8 x 64 pixels. For the two-dimensional areas of interest, the standard pixel size is 64 x 64 pixels. The purpose of this normalization step is to standardize the input to the transform procedures.
Step 340, shown in Fig. 3A, then performs several transforms, each of which is applicable to particular areas of interest. One of these transform processes is step 342. Specifically, step 342 applies transforms to the one-dimensional pixel arrays (representing the areas of interest) outputted from the resize step.
As seen in Fig. 10, eight fast Fourier transforms (FFTs), as known in the art, are performed at step 3420 on the one-dimensional pixel arrays. The results of these transforms are averaged at step 3422 into a 1 x 64 vector array representing the spatial distribution of that area of interest. Processing then returns to Fig. 3A.
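A sketch of step 342 for one 8 x 64 bar follows. It assumes one FFT per row (eight rows, hence eight FFTs) and that the averaged quantity is the magnitude spectrum; the patent says only that the FFT results are averaged:

```python
import numpy as np

def bar_spectrum(bar: np.ndarray) -> np.ndarray:
    """Step 342: FFT each of the 8 rows of an 8 x 64 bar, then average."""
    spectra = np.abs(np.fft.fft(bar.astype(np.float64), axis=1))  # 8 x 64
    return spectra.mean(axis=0)                                   # 1 x 64
```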
Another transform is performed at step 344, specifically applied to the two-dimensional areas of interest. The transform used is a discrete cosine transform (DCT), as known in the art (see Fig. 11).
Referring to Fig. 11, each 64 x 64 pixel array is divided into 64 separate pixel arrays of 8 x 8 pixels at step 3440. Then each 8 x 8 pixel array is compressed using the DCT at step 3442. The output of the DCT for each 8 x 8 pixel array is a transformed array with the most significant cell in the upper left hand corner. Using all the 8 x 8 transformed arrays, ten 1 x 64 vector arrays of the most significant cells are then created at step 3444. For example, the most significant cell of each of the 64 transformed arrays comprises the first 1 x 64 vector array, the second most significant cells comprise the second 1 x 64 array, and so on. The result is that each 64 x 64 pixel area of interest is transformed into ten 1 x 64 vector arrays of the most significant transformed cells. These arrays are then sent to the coding routine 350. Other techniques can also be employed, such as edge detection, Kohonen networks, and/or geometrical analysis; step 346 in Fig. 3A depicts these alternative transforms, which can be used to compress and analyze the identified areas of interest.
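A sketch of step 344 follows. "Most significant" is taken here to mean the first ten cells in zig-zag order from the upper-left corner, which is an assumption: the patent states only that significance starts at the upper-left cell:

```python
import numpy as np
from scipy.fft import dctn

# First ten cells of an 8x8 DCT block in zig-zag order (an assumed
# ranking of 'most significant').
ZIGZAG10 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1),
            (0, 2), (0, 3), (1, 2), (2, 1), (3, 0)]

def dct_vectors(area: np.ndarray) -> np.ndarray:
    """Step 344: one 64 x 64 area -> ten 1 x 64 vectors of DCT cells."""
    vectors = np.empty((10, 64))
    for i in range(8):
        for j in range(8):
            block = dctn(area[8*i:8*i+8, 8*j:8*j+8].astype(np.float64),
                         norm='ortho')
            for k, (r, c) in enumerate(ZIGZAG10):
                vectors[k, 8*i + j] = block[r, c]   # k-th vector, block #
    return vectors
```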
As an alternative to the above, each layer can be binarized, so that if a cell's coefficient is greater than zero, the value for that cell is set to one; if the cell's value is less than zero, its binarized value is set to zero. As a consequence, relatively few bytes are necessary for multiple layers. For example, if each layer is 8 x 16 bytes, then the binarization will create an 8 x 16 bit layer. For a 6-layer image, for example, 96 8-bit bytes (6 x 16) will be created for the captured image. Thus, under this alternative binarization step, a very small amount of memory is necessary for the UD/CP values.
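A minimal sketch of this sign binarization, assuming numpy's bit packing as the storage step (the packing mechanism is not specified in the text):

```python
import numpy as np

def binarize_layers(layers: np.ndarray) -> bytes:
    """Alternative coding: keep only the sign of each coefficient
    (1 if > 0, else 0), then pack eight bits per byte."""
    bits = (layers > 0).astype(np.uint8)
    return np.packbits(bits.ravel()).tobytes()  # six 8x16 layers -> 96 bytes
```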
Referring now to Fig. 3B, the coding routine 350 is shown; Fig. 12 sets forth routine 350 in more detail. Referring to Fig. 12, the inputs to the coding routine are the sixty-two 1 x 64 vector arrays produced by the transform routine at step 340 (Fig. 3A). For each one-dimensional area of interest, one 1 x 64 vector array is inputted. For each two-dimensional area of interest, ten 1 x 64 vector arrays are inputted. Therefore, in the preferred embodiment, sixty-two 1 x 64 vector arrays are inputted to routine 350. The other input 355 is the eigenspace. The use of eigenspaces is well known in the art as a method for characterizing an individual observation relative to a sample of the general population. See, for example, Kirby, M. and Sirovich, L., "Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces," IEEE Trans. Patt. Anal. Machine Intell., Vol. 12, pp. 103-108, 1990; Turk, M. and Pentland, A., "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, pp. 71-86, 1991; Gonzalez, R.C. and Woods, R.E., Digital Image Processing, Addison-Wesley, 1992.
The first coding step 3502 calculates the residuals of the vectors. The residuals are the differences between the sixty-two vectors and the mean vectors estimated for a general population. Next, these residuals are projected into their sixty-two separate eigenspaces, one eigenspace per parameter. The result of this process provides the two most significant coordinates, per parameter, in their respective eigenspaces. In total, 124 coordinates are calculated. Process step 3504 is repeated several times to insure a statistically appropriate sampling of the enrollment images, and the mean and standard deviation of the 124 parameter coordinates so generated are calculated. Step 3508 then evaluates the coordinates with the smallest standard deviation and the highest coefficient relative to the average of the population. Based on those criteria, the coordinates and their respective weights are then passed to the encryption process 370.

The encryption routine 370 of Fig. 3B is shown in detail in the flow chart of Fig. 13. Such a routine is well known in the art. For example, the encryption algorithm shown at step 3702 determines usable parameters according to encryption criteria 3704, which are related to the mean and the standard deviation of the parameter coordinates. The result is the encryption key and verification data, which are written at step 3706 onto the portable storage 70. However, a code or any other technique well known in the art of recording information can be used. The card 70 therefore contains the coded information pertaining to the object that was enrolled. The enrollment process is now complete.
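As an illustration of the residual-and-projection coding at step 3502, a minimal sketch follows; the eigenbasis layout (basis vectors as rows of `eigvecs`) and the ordering of coordinates by significance are assumptions made for the example:

```python
import numpy as np

def code_vector(v: np.ndarray, mean: np.ndarray, eigvecs: np.ndarray) -> np.ndarray:
    """Step 3502 for one parameter: residual against the population
    mean, projected into that parameter's eigenspace; keep the two
    most significant coordinates."""
    residual = v - mean            # difference from the population mean
    coords = eigvecs @ residual    # projection onto the eigenbasis
    return coords[:2]              # two most significant coordinates

# Applied to all 62 vectors, this yields the 124 coordinates whose
# mean and standard deviation are accumulated over repeated captures.
```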
Fig. 14A and Fig. 14B show an overview of the verification process using the verification station hardware shown in Fig. 2. Most of the procedures in the verification process are similar to the procedures previously discussed regarding the enrollment process. Thus, a detailed explanation is reserved for those processes that differ. A detailed description of the verification steps is set forth in Figs. 15-19.
A prerequisite to the verification process 400 is for the enrollment process to be repeated, up to a certain point. Specifically, in the preferred embodiment, the person that needs to be verified would go through step 302 (preprocessing) through step 350. The output of step 350 in the verification process provides parameter values corresponding to the images of the person or object to be verified. At the same time, card 70, which contains the data from the enrollment process, is inserted into a reader device, which then decrypts the magnetic data, yielding process control data (Fig. 16, step 410) and parameter values that correspond to each area of interest (Fig. 17, process 420). The process control data instructs the machine on how to accomplish verification. The parameter values determine what is to be verified. The parameter values are compared to the output of the coding step 350 in the verification process (Fig. 18, process step 430). Statistical analysis, as is generally known in the art, is used to accomplish verification. Other methods, however, such as rule-based identification decision processes or fuzzy logic systems, may be used in addition to the straightforward statistical analysis. Furthermore, the degree of correlation required between the two values can be varied depending on the degree of sophistication of the verification technique desired. If at the decision making step (Fig. 19, process 440) the parameters of the card match the parameters of the photographic image, verification has been achieved and an output is made to the external control unit at process 450.
One verification methodology, for example, can rely on the Hamming distance between the enrolled image and the image to be verified. In this instance, the image vector stored in the card, or other non-volatile storage media, is lined up, bit-by-bit, with the generated image vector. The bits of the two vectors are compared: for differing bits, a value of "1" is generated; for identical bits, a "0". A normalized Hamming distance is then generated as follows:

$$HD = \frac{n}{N}$$

where $n$ is the total number of differing bits and $N$ is the total length of the vector.
The HD value can be used as a threshold from which system sensitivity can be varied. If the accept value is set at around, for example, .23, other sensitivities, for example for retest or for reject, can also be set. If, for example, accept is .23 (HD), retest is .24-.74, and reject is .75 or greater, then it is possible that over time the retests will migrate in either direction (i.e., to accept or to reject).
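A minimal sketch of this comparison, assuming both vectors are stored as packed bytes and using the example thresholds quoted above:

```python
import numpy as np

def hamming_distance(enrolled: np.ndarray, captured: np.ndarray) -> float:
    """Normalized Hamming distance HD = n / N between two packed
    uint8 bit vectors of equal length."""
    diff = np.unpackbits(enrolled ^ captured)   # 1 where bits differ
    return float(diff.sum()) / diff.size

def decide(hd: float) -> str:
    """Example sensitivities from the text: accept <= .23,
    retest .24-.74, reject >= .75 (all tunable)."""
    if hd <= 0.23:
        return "accept"
    if hd < 0.75:
        return "retest"
    return "reject"
```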
Figs. 21-27 respectively illustrate a micro-camera assembly and LED lighting apparatus which provide numerous operational advantages, both to the various embodiments of the invention and to any other known image enrollment/recognition systems.
In particular, Fig. 21 illustrates a front view of an array of light emitting diodes ("LEDs") 2102 located along the same plane as a plate 2104. The arrangement of the LEDs has a specific size and intensity chosen to optimize the lighting intensity of the target and the capture of the image. In this diagram, a configuration is shown for maximizing iris capture. The LEDs are designed to light the iris at the low, near-visible end of the infra-red spectrum, rather than at the "heat detecting" spectrum. The spectrum tends to have a wavelength of approximately 880 nm, although other low visible spectra are considered applicable. Infra-red spectrum light has been found to be optimal, since flash lighting is distracting, if not painful. Moreover, infra-red, at low levels, standardizes facial fill so that all features of the face are equally exposed. A further advantage of low-level infra-red is that, when combined with an appropriate filter, ambient light is cut out altogether. Consequently, image wash-out is avoided. Finally, the higher-spectrum heat detecting level has been found to be less accurate in measuring biometric characteristics.
As shown in Fig. 21, an array of nine LEDs 2106 is arranged in a square that is angled at 45° relative to the horizontal axis 2108 of plate 2104.
Fig. 22 is a transparent perspective view of the microcamera device 2200 incorporating the aforedescribed LED array 2100. The device includes four basic elements: a micro-camera lens 2202, a microcamera circuit board 2204, the aforedescribed IR LED array 2100, and the LED circuit board 2206. These four elements are contained in a housing 2208.
The housing 2208 is designed so that the lens and LED array are held in a potting material in order that the microcamera unit may be contained and sealed. As a consequence, the microcamera can be used in underwater applications without substantial water leakage to circuit boards 2204 and 2206. The potting has a sufficient circumferential clearance around the lens element 2202 to allow the lens to rotate freely.
The top surface of the housing 2208 contains a recess 2210, the top surface of which is co-planar with the top surfaces of the lens 2202 and LEDs 2102. Further, a pair of flanges are arranged parallel to each other and to the longitudinal axis of housing 2208, so that a flat filter element (not shown), sized to fit between the flanges, can slide across the top surface 2210 and be held in place by the flanges. The filter comprises a sheet of mirrored glass or plastic that passes near infra-red light; the filter is thus able to cut off the visible light spectrum.
The mini-camera housing includes a communications port 2220, which provides the image output to an external frame grabber. The port 2220 also connects to an external power supply for the mini-camera. The port 2220 may use any optimal wiring configuration, which in this embodiment is a 9-pin DIN connector that can connect to the I/O port of a PC.
In a second embodiment, the camera device 2200 has no potting. In this embodiment, however, a wall 2222 would be placed between the lens 2202 and the LEDs 2102 to avoid direct reflection on the filter by the LEDs.
The camera lens 2202 and camera circuit 2204 are manufactured by Sony Corp. as a 1/3" CCD, 12 VDC board camera, Model No. UL-BC460/T8. Referring now to Fig. 23, a schematic circuit board layout 2300 is shown for the LED array 2100 (elements D1-D8). In this arrangement, the lighting for diodes D1-D8 is continuous but has a quick "on-off" cycle to cover a video view. This on-off cycle is approximately 1/30th of a second. Moreover, the flash component of the video view period is 1/7000th of a second. Since the period of lighting is so brief, the flash and the lighting exposure render sensitivity to movement of the subject practically irrelevant. Flash is nonetheless essential, since in security applications movement of the subject occurs frequently. However, the flash can be changed to a continuous lighting mode, if desired.
Each of the IR LEDs is a focused-beam diode, which improves efficiency and also reduces power consumption. As noted previously, pin connections 2301 are adapted to connect directly into a personal computer I/O port.
Fig. 24 is an illustration of the circuitry supporting the camera electronics 2400. A constant power source of about 100 milliamps is provided. A 12 volt power supply is used along with a 5V control power supply.
As a consequence of the arrangement of Figs. 21-24, a micro-camera arrangement for image capture is created whereby the lighting is located below the camera. Moreover, the position of the lighting below the camera is critical, since a subject farther than 3 feet away from the lens will not be captured. Placement of the camera is also sensitive, since direct sunlight, incandescent, or halogen light will wash out features. Thus any direct light into the camera is problematic.
Figures 25a-c are different views (front, side, and perspective) of a housing designed to contain the camera-LED unit. A recess 2502 is shown in the unit, through which the entire housing 2200 can be inserted. The modular plug 2220 (Fig. 22) would also be connected through cable 2504 (Fig. 25c) to the PC I/O port (not shown). In the side view of Fig. 25(b), the housing includes a stand 2506 which pivots about axle 2508 in the direction of arrow 2510. As a consequence, the camera can be supported in a substantially upright position (Fig. 25(c)) when placed, with the stand in an extended position, on a horizontal surface.
Figs. 26a-c show a second embodiment of the mini-camera housing 2600. As shown, the housing includes a stand 2602 which, in a closed position (as shown in Figs. 26a and 26b), completely covers the camera lens and LEDs. When fully opened, however, which is accomplished by rotating the stand 2602 about axis 2604, the camera and LED light unit are fully exposed, and the unit is also supported upright by stand 2602 (Fig. 26c).
Finally, Figs. 27a-c are views of a third embodiment of the mini-camera housing 2700. In this embodiment, a stand 2702 is partially cut away to expose the camera lens only. However, when the stand is opened (Fig. 27(c)), the LED array and the camera 2202 are both exposed for use. Fig. 28 illustrates a second, or alternative, embodiment of the targeting process set forth in Fig. 6 of this invention. The advantage of the alternative technique is that it allows targeting without reference to fixed areas, by dynamically finding the image centers.
The process 2800 begins at step 2802, where a desired area is isolated and captured by the camera. A histogram for this captured area is then computed by the computer 40. The computer then dynamically determines thresholds by calculating desired threshold barriers that are preprogrammed into the computer 40. For example, high and low rejects can be set above the lowest 5% and below the highest 5% of the histogram, and high and low thresholds above the bottom 45% and below the top 45%. As a consequence, when the thresholds are compared to the histogram at step 2808, a 10% middle portion of the histogram can be defined, reflecting particular gray-scale characteristics.
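A minimal sketch of the dynamic threshold computation of steps 2802-2808, using the 5%/45%/55%/95% percentile barriers from the example above (the exact barrier values are preprogrammed and application-dependent):

```python
import numpy as np

def dynamic_thresholds(area: np.ndarray):
    """Steps 2802-2808: histogram the captured area and place the
    reject barriers and thresholds at preset percentiles, isolating
    the 10% middle portion of the gray-scale distribution."""
    low_reject, low_t, high_t, high_reject = np.percentile(area, [5, 45, 55, 95])
    return low_reject, low_t, high_t, high_reject
```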
The below, between, and above threshold values are then binarized at binarization step 2810, as shown in Fig. 28(c). The first binarization step is the threshold comparison itself, which, where HT is the high threshold and LT is the low threshold, sets values as follows:

any value > HT = 1; any value < HT = 0
any value < LT = 1; any value > LT = 0
The binarized values are then compared, and a majority fill, as shown in Fig. 28(d), occurs, which represents the true binarized area of the targeted object 2812.
The targeted area is then geometrically tested at step 2814 on two candidate points based on preset values which define appropriate quadrants. The points $(x_1, y_1)$ and $(x_2, y_2)$ can be isolated based on preset template values. For example, if iris targeting is desired, eye templates can be set so that $|X_2 - X_1| > 40$ and $|Y_2 - Y_1| < 10$. Once correlation of the templates has occurred at step 2816, the image is divided, and the lower portions of the image are dropped out, leaving only the side-by-side quadrants.
In the event that targeting cannot locate the desired targets with the appropriate threshold characteristics, the system can keep expanding the image area in order to locate the desired points. Specifically, an iteration loop can take three (3) images, binarize their values, average the binarized values, and store the averaged value in the portable memory. As a result of this iteration process, shown at steps 2822 and 2824, a high percentage of accuracy is achieved dynamically.
The above description and drawings are only illustrative of preferred embodiments which achieve the objects, features, and advantages of the present invention, and it is not intended that the present invention be limited thereto. Any modifications of the present invention which come within the spirit and scope of the following claims are considered part of the present invention.

Claims

We claim:
1. A system and method for obtaining, digitizing, and recording selected data sets of separate and independent component element image signals, sensed by photographic or electronic means, from a person or object, for subsequent comparison with similarly obtained component element image signals of a person or object, whereby the degree of similarity between the original and the subsequent data sets may be determined.
2. A method, as in claim 1, whereby data sets of sensed images may be recorded on portable media, such as magnetic stripe cards, magnetic discs, printed bar codes on paper or similar materials, semi-conductor devices such as smart cards, or in data bases.
3. A method, as in claim 1, of selecting separate and independent component elements of persons or objects whereby the selected components provide image signals for recording a quantitative structure or array of numbers that permits statistical identification of the individual person or object from among data sets previously obtained from the same person or similar objects.
4. A method, as in claim 3, whereby the image signal data sets of persons or objects are measured so as to permit the determination of a quantitative degree to which the objects or persons are similar to those originally measured.
5. A method, as in claim 3, whereby it may be determined whether component image signals obtained from a person or object, at any time subsequent to the recording of a set of such image signals from the person or object, differ, or do not differ, from the recorded component images by any, or by more than or less than, preselected quantitative values, permitting the system to indicate that the signals may be quantitatively, or qualitatively, verified, or not verified, as being from the recorded signals of the person or object.
6. A system, as in claim 1, whereby quantitative measures may be selected for inclusion in the digital data system for allowing the determination of chosen levels of difference between subsequently obtained component image signals and verification of such signals as being, or not being, from the recorded signals of a specific person or object.
PCT/US1997/012716 1996-07-19 1997-07-18 System for object verification and identification WO1998003966A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU38064/97A AU3806497A (en) 1996-07-19 1997-07-18 System for object verification and identification

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68470796A 1996-07-19 1996-07-19
US08/684,707 1996-07-19

Publications (2)

Publication Number Publication Date
WO1998003966A2 true WO1998003966A2 (en) 1998-01-29
WO1998003966A3 WO1998003966A3 (en) 1998-04-30

Family

ID=24749226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/012716 WO1998003966A2 (en) 1996-07-19 1997-07-18 System for object verification and identification

Country Status (2)

Country Link
AU (1) AU3806497A (en)
WO (1) WO1998003966A2 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4712103A (en) * 1985-12-03 1987-12-08 Motohiro Gotanda Door lock control system
US4754487A (en) * 1986-05-27 1988-06-28 Image Recall Systems, Inc. Picture storage and retrieval system for various limited storage mediums
US4975969A (en) * 1987-10-22 1990-12-04 Peter Tal Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same
US5410609A (en) * 1991-08-09 1995-04-25 Matsushita Electric Industrial Co., Ltd. Apparatus for identification of individuals
US5432864A (en) * 1992-10-05 1995-07-11 Daozheng Lu Identification card verification system
US5466918A (en) * 1993-10-29 1995-11-14 Eastman Kodak Company Method and apparatus for image compression, storage, and retrieval on magnetic transaction cards

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038333A (en) * 1998-03-16 2000-03-14 Hewlett-Packard Company Person identifier and management system
WO2001078021A3 (en) * 2000-04-07 2002-02-28 Micro Dot Security Systems Inc Biometric authentication card, system and method
WO2009035377A2 (en) * 2007-09-13 2009-03-19 Institute Of Applied Physics Ras Method and device for facial identification of a person
WO2009035377A3 (en) * 2007-09-13 2009-05-07 Inst Of Applied Physics Ras Method and device for facial identification of a person

Also Published As

Publication number Publication date
AU3806497A (en) 1998-02-10
WO1998003966A3 (en) 1998-04-30

Similar Documents

Publication Publication Date Title
Beymer Face recognition under varying pose
Hamouz et al. Feature-based affine-invariant localization of faces
Beymer Face recognition under varying pose
Datta et al. Face detection and recognition: theory and practice
US7715596B2 (en) Method for controlling photographs of people
JP5955133B2 (en) Face image authentication device
JP2000512047A (en) Biometric recognition using neural network classification
KR100756047B1 (en) Apparatus for recognizing a biological face and method therefor
Akarun et al. 3D face recognition for biometric applications
Beymer Pose-Invariant face recognition using real and virtual views
Bagherian et al. Facial feature extraction for face recognition: a review
Tsai et al. Face detection using eigenface and neural network
Hamouz et al. Affine-invariant face detection and localization using gmm-based feature detector and enhanced appearance model
Fang et al. A colour histogram based approach to human face detection
WO1998003966A2 (en) System for object verification and identification
WO1997005566A1 (en) System for object verification and identification
WO1997005566A9 (en) System for object verification and identification
Mekami et al. Towards a new approach for real time face detection and normalization
Popoola et al. Comparative analysis of selected facial recognition algorithms
EP1615160A2 (en) Apparatus for and method of feature extraction for image recognition
Sun et al. Dual camera based feature for face spoofing detection
Jain et al. Face recognition
JP4606955B2 (en) Video recognition system, video recognition method, video correction system, and video correction method
AYDIN et al. FACE RECOGNITION APPROACH BY USING DLIB AND K-NN
Soltanpour 3D Face Recognition Using Local Feature Based Methods

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN ZW AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH KE LS MW SD SZ UG ZW AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 98507154

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA