WO2005109321A1 - System for fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor

System for fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor

Info

Publication number
WO2005109321A1
Authority
WO
WIPO (PCT)
Prior art keywords
fingerprint
image
sensor
swipe
template
Prior art date
Application number
PCT/US2005/009161
Other languages
French (fr)
Inventor
Robert Weixiu Du
Chinping Yang
Chon In Kou
Original Assignee
Sony Corporation
Sony Electronics, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 10/927,599 (US7212658B2)
Priority claimed from US 10/927,178 (US7194116B2)
Application filed by Sony Corporation and Sony Electronics, Inc.
Publication of WO2005109321A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1335Combining adjacent partial images (e.g. slices) to create a composite input or reference pattern; Tracking a sweeping finger movement

Definitions

  • the present invention relates to personal identification using biometrics and, more specifically, to a method for reconstructing a fingerprint image from a plurality of image frames captured using a swipe fingerprint sensor.
  • Biometric authentication, which is the science of verifying a person's identity based on personal characteristics (e.g., voice patterns, facial characteristics, fingerprints), has become an important tool for identifying individuals. Fingerprint authentication is often used for identification because of the relative ease, non-intrusiveness, and general public acceptance in acquiring fingerprints.
  • One way of reducing sensor size and cost is to rapidly sample data from a small area capacitive sensor element as a finger is moved (or “swiped") over the sensor element.
  • the small area sensor element is generally wider but shorter than the fingerprint being imaged. Sampling generates a number of image frames as a finger is swiped over the sensor element, each frame being an image of a fraction of the fingerprint. The swipe sensor system then reconstructs the image frames into a complete fingerprint image.
  • While swipe fingerprint sensors are relatively inexpensive and are readily installed on most portable electronic devices, the amount of computation required to reconstruct the fingerprint image is much greater than the computation required to process a fingerprint captured as a single image.
  • Swipe sensor computation requirements increase system costs and result in poor identification response time. Computational requirements are further increased because of variations in a digit's swipe speed and the need to accommodate various finger positions during the swipe as the system reconstructs a complete fingerprint image from the frames generated during the swipe.
  • The sensor system must determine the finger's swipe speed so as to extract only the new portion of each succeeding frame as the system reconstructs the fingerprint image.
  • Thus, for effective use for identification, current swipe sensors must be coupled to a robust computing system that is able to reconstruct the fingerprint image from the image frames in real or near-real time.
  • Another major drawback to the use of fingerprints for identification purposes arises from the difficulty in associating the captured image with a particular individual.
  • The output of the sensor is typically compared to a library of known fingerprints using pattern recognition techniques. It is generally recognized that the core area of a fingerprint is the most reliable for identification purposes. With the image acquired by a large area sensor, the core area is consistently located in the general center of the image. With a swipe sensor, however, it is difficult to locate this core area.
  • The core location of the image reconstructed by a swipe sensor cannot be guaranteed to be located in the neighborhood of the image center due to the way a digit may be positioned as it is swiped over the sensor element.
  • For this reason, the use of fingerprint verification has been limited to stationary applications requiring a high degree of security, and widespread adoption of fingerprint identification has been limited.
  • What is needed is a fingerprint identification system that is inexpensive, that efficiently assembles swipe sensor frames into a fingerprint image, that locates the core area of the reconstructed fingerprint image, that authenticates the captured fingerprint image in real or near-real time, and that performs these tasks using the limited computing resources of portable electronic devices.
  • a low cost fingerprint identification system and method is provided. More specifically, the present invention relates to reconstructing a fingerprint image from a plurality of frames of image data obtained from a swipe sensor fingerprint identification system. Each frame comprises a plurality of lines, with each line comprising a plurality of pixels.
  • The frames are transferred to a host where the fingerprint image is reconstructed. After a first frame F is stored in a reconstructed image matrix (denoted I), the new data portion of each subsequent frame is stored in the image matrix until a complete image of the fingerprint is obtained.
  • The fingerprint image reconstruction process of the present invention determines how many lines in each frame are new data. This determination is based on a motion estimate that is obtained for each frame after the first frame. To reduce computational overhead, the motion estimate process initially decimates each frame by reducing the number of pixels in each row. Decimation reduces subsequent computational requirements without reducing resolution in motion estimation and enables real-time processing even if system resources are limited. The decimated frame is then normalized, and a correlation process determines the amount of overlap between consecutive frames. The correlation process generates a delay factor that indicates how many new lines have moved into each frame relative to the immediately preceding frame. The correlation process continues until there are no further frames to add to the reconstructed matrix, as sketched below.
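The following minimal sketch (Python with numpy; all names are illustrative, not from the patent) shows the shape of this loop. It omits the extracted-frame refinement and the normalization details, which are described with Figures 3 and 4 below.

```python
import numpy as np

def reconstruct(frames, decimate, estimate_delay):
    """Assemble a fingerprint image from an iterable of M x N frames."""
    frames = iter(frames)
    image = next(frames)              # the first frame seeds image matrix I
    prior = image.copy()
    for nxt in frames:
        # delay = how many new lines moved into this frame vs. the prior one
        delay = int(estimate_delay(decimate(prior), decimate(nxt)))
        if delay > 0:
            image = np.vstack([nxt[:delay, :], image])   # newest rows on top
        prior = nxt
    return image
```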
  • the present invention further provides an enrollment mode and an identification mode.
  • the enrollment mode is used to build a database of templates that represent authorized or known individuals.
  • In the identification mode, a fingerprint image is processed and compared to templates in the database. If a match is found, the user is authenticated. If no match is found, that condition is noted and the user is provided the opportunity to enroll.
  • a user interface is used to assist the user in use of the system.
  • The present invention further crops the reconstructed frame, removes noise components, and then extracts a small, core portion of the fingerprint.
  • The core portion of the fingerprint image is used to generate a template for the database when the system is operating in the enrollment mode.
  • When the system is operating in the identification mode, the core portion is compared to the template to determine if there is a match.
  • Advantageously, the reconstruction method efficiently converts the frames of a finger scan into an accurate image of the fingerprint. Then, once the image is obtained, the enrollment and identification modes are well suited for implementation in portable electronic devices such as cellular telephones, PDAs, portable computers, or other electronic devices.
  • Figure 1 is a simplified block diagram illustrating one exemplary embodiment of a fingerprint identification system in accordance with an embodiment of the present invention.
  • Figure 2 illustrates an exemplary swipe fingerprint sensor in accordance with an embodiment of the present invention.
  • Figure 3 shows one method for reconstructing a fingerprint image from a plurality of frames acquired from the swipe fingerprint sensor in accordance with an embodiment of the present invention.
  • Figure 4 illustrates the formation of the extracted arrays used to calculate the delay factor between frames in accordance with an embodiment of the present invention.
  • Figure 5 shows an intermediate fingerprint image buffer and a current frame acquired from the swipe fingerprint sensor in accordance with an embodiment of the present invention.
  • Figure 6 shows an updated fingerprint image buffer in accordance with an embodiment of the present invention.
  • Figure 7 illustrates an exemplary memory map showing the components for enrolling and identifying the fingerprints of a user in accordance with an embodiment of the present invention.
  • Figures 8 and 8A-8C show the enrollment mode of operation in accordance with an embodiment of the present invention.
  • Figure 9 shows the identification mode of operation in accordance with an embodiment of the present invention.
  • Figure 10 is a diagrammatic perspective view of an illustrative electronic device that includes a fingerprint identification system in accordance with an embodiment of the present invention.
  • Figure 11 is a diagrammatic view of an illustrative system that includes an electronic device having a swipe sensor and a computing platform remote from the electronic device for storing and identifying fingerprints in accordance with an embodiment of the present invention.
  • System 100 includes a microprocessor module 102 and a fingerprint sensor module 104 that operates under the control of microprocessor 102.
  • Fingerprint sensor module 104 includes a swipe sensor 106, which includes a swipe sensor stripe 200 (Figure 2), over which a finger is moved, and associated electronic circuits.
  • The sensor stripe area of swipe sensor 106 is much smaller than the surface area of a typical fingerprint so that, as the finger is moved ("swiped") across the sensor stripe, partial fingerprint images are acquired sequentially in time.
  • Fingerprint sensor module 104 also includes finger motion detector 108, analog to digital converter (ADC) 110, and data buffer 112 that receives data from swipe sensor 106 and motion detector 108.
  • Data buffer 112 is illustrative of various single and multiple buffer (e.g., "double-buffering") configurations.
  • Microprocessor module 102 includes an execution unit 114 and memory 116.
  • In one embodiment, memory 116 is a random access memory (RAM). In another embodiment, memory 116 is a combination of volatile (e.g., static or dynamic RAM) and non-volatile (e.g., ROM or Flash EEPROM) memory.
  • Communication module 118 provides the interface between sensor module 104 and microprocessor module 102.
  • Communication module 118 is a peripheral interface module such as a universal serial bus (USB), an RS-232 serial port, or any other bus, whether serial or parallel, that accepts data from a peripheral.
  • UI module 122 enables system 100 to communicate with a user in various ways known in the electronic arts.
  • UI module 122 includes an output, such as a video display or light emitting diodes, and/or an input, such as a keypad, a keyboard, or a mouse.
  • system 100 prompts a user to place a finger on swipe sensor 106's sensor stripe and to swipe the finger in a specified direction. If the swipe results in an error, system 100 instructs the user to repeat the swipe. In some instances, system 100 instructs the user how to move the finger across the sensor by displaying, e.g., a video clip.
  • If motion detector 108 detects the presence of a finger about to be swiped, motion detector 108 transmits an interrupt signal to microprocessor module 102 (e.g., an interrupt signal that execution unit 114 detects). In response to the received interrupt signal, execution unit 114 accesses executable code stored in memory 116, and two-way communication between sensor module 104 and microprocessor module 102 is established.
  • As a finger is swiped over the sensor stripe, swipe sensor 106 generates an analog signal that carries the partial fingerprint image data frames for a fingerprint image.
  • ADC 110 receives and converts the analog signal from swipe sensor 106 into a digital signal that is directed to data buffer 112.
  • Data buffer 112 stores data associated with one or more of the captured fingerprint image data frames received from ADC 110.
  • Image data from data buffer 112 is then transferred to microprocessor module 102, which performs signal processing functions in real or near-real time to reconstruct and identify the complete fingerprint image.
  • The image frame data in data buffer 112 is transferred to microprocessor module 102 in small chunks (e.g., one pixel row at a time, as described in more detail below) to reduce the amount of memory required in sensor module 104.
  • Execution unit 114 initiates the transfer of data from buffer 112 to memory 116. Alternately, data from swipe sensor 106 begins to be transferred to execution unit 114 if the beginning of a finger swipe is detected. Execution unit 114 stops receiving data generated by swipe sensor 106 if the swipe is completed, if no finger is present, if finger motion over swipe sensor 106 stops, or if the swipe duration exceeds a maximum time allowed by the system (i.e., a system timeout feature).
  • sensor module 104 remains in a quiescent state until motion detector 108 detects motion.
  • When motion detector 108 detects a finger being moved, it triggers sensor module 104 into full power operation.
  • the motion detection activates a communication link with microprocessor module 102.
  • Once activated, partial fingerprint image frame data in data buffer 112 is transferred to microprocessor module 102, which performs signal-processing functions to reconstruct the fingerprint image, as described in detail below.
  • the first operating mode is the enrollment mode, in which an individual is enrolled in identification system 100.
  • When the enrollment mode is selected, several of the individual's fingerprint images are captured, together with other identifying information such as the individual's name, physical description, address, and photograph.
  • Each fingerprint image captured by sensor module 104 is verified and processed by microprocessor module 102 to generate a template of the individual's fingerprint.
  • The template is stored (e.g., under control of database module 120) for later use during identification when system 100 is operating in the second mode.
  • The second operating mode of system 100 is the identification mode, in which system 100 determines if an individual is identified.
  • In the identification mode, sensor module 104 acquires a fingerprint image that is processed by microprocessor module 102. If the acquired image meets one or more predetermined criteria, it is compared to the library of stored fingerprint image templates. If there is a match between the acquired image and a stored image template, then the individual has been successfully identified. The results of the comparison may then be further acted upon by microprocessor module 102 or by a second electronic device or system.
  • For example, if system 100 controls an electronic door lock and a match is found, microprocessor module 102 controls the electronic door lock to unlock the door.
  • Swipe fingerprint sensor stripe 200 comprises an array of picture element ("pixel") capacitive sensors, such as pixel 202, that are arranged in a plurality of rows, illustrated in Figure 2 as rows r_1 through r_M, and a plurality of columns, illustrated in Figure 2 as columns c_1 through c_N. The intersection of each row and column defines the location of a pixel capacitive sensor.
  • Sensor stripe 200 may have any number of rows of pixels.
  • Figure 2 shows sensor 200 having 12 rows to illustrate the invention, but it is common for sensor element 200 to have between 12 and 36 pixel rows. Some embodiments use either 16 or 24 pixel rows.
  • Sensor stripe 200 may have various numbers of pixel columns. In one illustrative embodiment, each row r_i of sensor stripe 200 has 192 pixels. In other embodiments, each row of sensor stripe 200 has 128 pixels when sensor stripe 200 has 16 or 24 rows. The number of columns is generally such that sensor stripe 200 is wider than finger 204, shown by phantom line in Figure 2, to be swiped across it.
  • The number of columns can be lessened such that sensor stripe 200 is somewhat narrower than a finger to be swiped across it, as long as sufficient fingerprint image data is captured for effective identification.
  • The fingerprint image core area is discussed in more detail below.
  • the number of pixels in each row and column will typically depend on various design parameters such as the desired resolution, data processing capability available to reassemble the finge ⁇ rint image frames, anticipated maximum digit swipe speed, and production cost constraints.
  • Arrow 206 shown in Figure 2 illustrates a direction of finger 204 movement, as sensed by motion detector 108.
  • While Figure 2 shows sensor stripe 200 oriented such that the pixel columns are generally parallel to the finger (e.g., the fingerprint image is reconstructed from bottom-to-top), in other embodiments sensor stripe 200 may be oriented such that the pixel columns are generally perpendicular to the finger (e.g., the fingerprint image is reconstructed from right-to-left).
  • Fingerprint identification system 100 acquires at least two fingerprint image data frames as the finger is swiped across sensor stripe 200.
  • Each fingerprint image frame represents a fraction of finger 204's fingerprint topology.
  • In one embodiment, sensor element 200 is made of 24 rows, each row having 128 pixels.
  • Each fingerprint image frame is therefore made of an array of 3,072 pixels.
  • A plurality of fingerprint image frames is acquired as the finger is swiped across sensor stripe 200.
  • Data buffer 112 has sufficient depth to store fingerprint image data frames between each data transfer to microprocessor module 102.
  • In one embodiment, data buffer 112 includes a first buffer portion large enough to store only one row of 128 pixels. So that the next line of scanned image does not overwrite the currently stored image, a second buffer portion is needed to store the next line of image data while the first line is transferred to microprocessor module 102. This scheme, often referred to as double buffering, is sketched below. By rapidly sampling the analog signal generated by swipe sensor 106, two or more fingerprint image frames are captured.
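A minimal sketch of this double-buffering handoff (Python with threading; the class and method names are hypothetical, not from the patent):

```python
import threading

class DoubleLineBuffer:
    """Two one-line buffers: the sensor fills one while the host drains the other."""

    def __init__(self, line_width=128):
        self.buffers = [bytearray(line_width), bytearray(line_width)]
        self.fill_index = 0          # buffer currently being written by the sensor
        self.lock = threading.Lock()

    def sensor_write(self, line):
        # Sensor side: store the newly sampled line, then swap roles so the
        # host can read it while the next line is being sampled.
        with self.lock:
            self.buffers[self.fill_index][:] = line
            self.fill_index ^= 1

    def host_read(self):
        # Host side: read the most recently completed line.
        with self.lock:
            return bytes(self.buffers[self.fill_index ^ 1])
```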
  • swipe sensor 106 is positioned to accept digit swipes from right to left, left to right, bottom to top, or in various other directional orientations, depending on how swipe sensor 106 is physically positioned or on a desired design feature. In some instances, a combination of swipe directions may be used during enrollment or identifications.
  • In operation, finger 204 is swiped across sensor stripe 200 in the direction indicated by arrow 206.
  • The capacitive pixels of sensor 200 are rapidly sampled, thereby generating fingerprint image frames of the complete fingerprint.
  • More than one hundred frames are captured from the time finger 204 first contacts sensor stripe 200 until finger 204 is no longer in contact with sensor stripe 200.
  • System 100 accepts finger movement speeds as fast as 20 centimeters per second. It will be appreciated that the number of generated fingerprint image frames will vary depending on how fast the finger is swiped and the length of the finger. It will also be appreciated that the swipe rate may vary during the swipe. For example, if the swipe is paused for a fraction of a second, many frames may contain identical or nearly identical data.
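As a rough worked example (the patent does not state the pixel pitch; 500 dpi, about 50.8 µm per pixel, is assumed here because it is typical of fingerprint sensors), a 24-row frame spans about 1.22 mm, so at the 20 cm/s maximum swipe speed consecutive frames overlap only if sampling is faster than roughly 165 frames per second:

$$t_{\text{frame}} = \frac{24 \times 50.8\ \mu\text{m}}{200\ \text{mm/s}} \approx 6.1\ \text{ms} \quad\Rightarrow\quad f_{\text{sample}} > \frac{1}{6.1\ \text{ms}} \approx 165\ \text{frames/s},$$

with a rate several times higher needed for the over-sampling that the reconstruction below relies on.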
  • The acquired fingerprint image frames are assembled to form a complete fingerprint image.
  • The sample rate is fast enough to ensure that the fingerprint image is over-sampled during a swipe. Such over-sampling ensures that a portion of one fingerprint image frame contains information identical to that in a portion of the next subsequent fingerprint image frame.
  • This matching data is used to align and reassemble the fingerprint image frames into a complete fingerprint image.
  • In one embodiment, microprocessor module 102 assembles the fingerprint image frames in real time, such that only the most recent two sampled fingerprint image frames are required to be stored in host memory 116. In this embodiment, slow finger swipe speed will not tax system memory resources.
  • In another embodiment, microprocessor module 102 receives and stores all captured fingerprint image data frames before assembling them into a complete fingerprint image.
  • Fingerprint image reconstruction is done in some embodiments by using a process based on three image data frames represented as matrices.
  • swipe sensor stripe 200 illustratively has M rows and N columns of pixels 202.
  • Each of the three image data frames is associated with sensor stripe 200's pixel matrix dimensions of M rows and N columns.
  • The following description is based on a fingerprint image data frame of 12 rows and 192 columns (i.e., 2,304 pixels), a matrix size that is illustrative of various matrix sizes within the scope of the invention.
  • The reconstructed fingerprint image will have the same width (e.g., 192 pixels) as the fingerprint image data frame.
  • The first fingerprint image frame that is used for fingerprint image reconstruction is the most recent fingerprint image frame F_k (the "prior frame") from which rows have been added to the reconstructed fingerprint image.
  • The second fingerprint image frame that is used is the next fingerprint image frame F_{k+1} (the "next frame") from which rows will be added to the reconstructed fingerprint image.
  • Next frame F_{k+1} is received at microprocessor module 102 from sensor module 104 just in time for processing.
  • A copy of prior frame F_k is held in memory (e.g., memory 116) until next frame F_{k+1} is processed and becomes the new prior frame.
  • The third fingerprint image frame is an extracted frame F̄_k (the "extracted frame") that is extracted from the reconstructed fingerprint image.
  • The extracted frame F̄_k is made of the most recent rows added to the reconstructed fingerprint image.
  • Microprocessor module 102 determines the number of new image data rows from F_{k+1} to be added to the reconstructed fingerprint image buffer. The process continues until the final new image data rows are added from the last fingerprint image data frame received at microprocessor module 102 to the reconstructed fingerprint image.
  • Each line l of a fingerprint image frame F_k may be represented in matrix notation as

$$F_k = \begin{bmatrix} l_k(1) \\ l_k(2) \\ \vdots \\ l_k(M) \end{bmatrix}, \qquad l_k(i) = \begin{bmatrix} p_{i,1} & p_{i,2} & \cdots & p_{i,N} \end{bmatrix}$$

where p_{i,j} is the pixel in row i and column j of the frame.
  • The frame extracted from the reconstructed fingerprint image matrix I may be represented as the M most recently added (topmost) rows of I:

$$\bar{F}_k = \begin{bmatrix} I(1) \\ I(2) \\ \vdots \\ I(M) \end{bmatrix}$$

where I(j) denotes the j-th row of I.
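In code form (a Python/numpy sketch; the names are illustrative), the extracted frame is simply a view of the newest rows of the reconstructed image, since new rows are stacked on top of I:

```python
import numpy as np

M, N = 12, 192                  # one illustrative frame size from the text

def extract_frame(I, M=M):
    """Extracted frame: the M most recently added (topmost) rows of I."""
    return I[:M, :]

# Example: after several frames have been merged, I has more than M rows.
I = np.random.rand(40, N)       # stand-in for a partially reconstructed image
F_bar = extract_frame(I)        # shape (12, 192); plays the third-frame role
```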
  • Figures 3-6, considered together, illustrate embodiments of fingerprint image reassembly from the sampled fingerprint image frames.
  • One portion of memory 116 acts as a received fingerprint image frame buffer that holds one or more sampled fingerprint image frame data sets received from sensor module 104.
  • Another portion of memory 116 acts as a reconstructed image buffer that holds the complete fingerprint image data I as it is assembled by microprocessor module 102.
  • The fingerprint image reconstruction begins at 300 as the first sampled fingerprint image frame data F_1 is received into the fingerprint image frame buffer of memory 116.
  • Because this is the first fingerprint image frame data, it can be transferred directly into the reconstructed image frame buffer as shown at 302. In other instances, the process described below can be used with values initialized to form a "prior" frame in the reconstructed image frame buffer. At the conclusion of 302, at least M rows exist in the reconstructed fingerprint image buffer.
  • Extracted frame F̄_k is created from data in the reconstructed fingerprint image buffer. Then, prior frame F_k and extracted frame F̄_k are decimated to form two smaller matrices, represented as F_k^d and F̄_k^d, an operation that speeds up the calculations described below.
  • This operation is diagrammatically illustrated in Figure 4, which shows prior frame 402 and extracted frame 404 each decimated to form associated decimated prior frame 406 and decimated extracted frame 408.
  • The next frame F_{k+1} is received and decimated in like manner, and is represented as F_{k+1}^d.
  • The decimated arrays each comprise an M × D matrix, where M equals the number of rows of sensor stripe 200 and D equals the decimated number of columns (pixels per line).
  • Decimating the matrices into D columns reduces the computation load on microprocessor module 102 by reducing the number of columns carried forward. For example, if the matrices each have 192 columns, the associated decimated matrices may each have, e.g., only 16 columns. Decimation should occur in real time to facilitate sensor use.
  • For example, the frame may be decimated by looking at only the central 16 columns or by selecting the first column and every 10th or 12th column thereafter.
  • Alternatively, an average of a selected number of columns (e.g., ten) may be used: the sum of every N/D pixels is taken using a sliding window.
  • The averaging operation functions as a low-pass filter in the horizontal direction, such that the low-pass smoothing alleviates any change in signal characteristic due to horizontal shift created by a non-linear swipe. Decimation is not required, however, and in some instances, when design considerations and data processing capability allow, the three matrices are processed in accordance with the invention without such decimation.
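One plausible realization of the averaging variant (a Python/numpy sketch that assumes non-overlapping windows; a true sliding window would differ only in the stride):

```python
import numpy as np

def decimate_frame(frame, D=16):
    """Reduce an M x N frame to M x D by averaging groups of adjacent pixels.

    Averaging N/D neighboring pixels per output column acts as a horizontal
    low-pass filter, smoothing lateral shifts caused by a non-linear swipe."""
    M, N = frame.shape
    w = N // D                                  # pixels averaged per column
    return frame[:, :w * D].reshape(M, D, w).mean(axis=2)

# Example: a 12 x 192 frame becomes a 12 x 16 decimated matrix.
F_d = decimate_frame(np.random.rand(12, 192))
```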
  • Matrices F_k^d, F̄_k^d, and F_{k+1}^d are normalized. Then, one correlation coefficient matrix is calculated using normalized F_k^d and F_{k+1}^d, and a second correlation coefficient matrix is calculated using normalized F̄_k^d and F_{k+1}^d. Next, two sets of correlation functions are computed by averaging the δ-th diagonal of the correlation coefficient matrices. These two sets of correlation functions correspond to the correlation between the new frame F_{k+1} and the prior frame F_k, and the correlation between the new frame F_{k+1} and the extracted frame F̄_k.
  • Figure 4 shows illustrative correlation engine 416 calculating a correlation coefficient matrix and correlation function ρ_1(δ) from F_k^d and F_{k+1}^d, and illustrative correlation engine 418 calculating a correlation coefficient matrix and correlation function ρ_2(δ) from F̄_k^d and F_{k+1}^d.
  • Correlation engines 416 and 418 then compute the delay (i.e., estimated finger motion between frames) between frame F_k 402 and frame F_{k+1} 414 to determine the number of rows from frame F_{k+1} 414 that should be appended to the fingerprint image data stored in the reconstructed fingerprint image buffer.
  • Each correlation coefficient matrix may be written as

$$C = \left(P_a F_a^d\right)\left(P_b F_b^d\right)^T$$

where F_a^d and F_b^d are the two decimated frames being compared, T denotes the matrix transpose, and P ∈ ℝ^{M×M} is a diagonal matrix with the i-th diagonal element defined as

$$P_{ii} = \frac{1}{\left\lVert f(i) \right\rVert}$$

with f(i) the i-th row of the corresponding decimated frame. For a 16-row sensor, the P matrix is a 16 by 16 matrix that is used to normalize each row to uniform energy.
  • The motion or delay across the swipe sensor is then calculated by

$$\delta_{\max} = f(\delta_{\max,1},\ \delta_{\max,2}) \qquad \text{(Eq. 9)}$$

where δ_max,1 and δ_max,2 are the delays at which the correlation functions ρ_1(δ) and ρ_2(δ) peak, and the function f(·) can be a weighted average or the plain average of its arguments.
  • The purpose of averaging the two delay estimates is to improve the overall estimation quality.
  • Variable δ_max indicates how many new lines have moved into the new frame. Accordingly, the top δ_max lines from the new frame F_{k+1} are moved into the reconstructed image matrix I:

$$I \leftarrow \begin{bmatrix} F_{k+1}\left(1{:}\operatorname{Int}(\delta_{\max}),\ :\right) \\ I \end{bmatrix}$$
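A compact sketch of this estimator (Python/numpy; the diagonal orientation assumes, as in Figures 5 and 6 below, that row i of the older frame reappears as row i + δ of the new frame):

```python
import numpy as np

def normalize_rows(F):
    """Apply the effect of the diagonal P matrix: unit energy per row."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    return F / np.where(norms == 0.0, 1.0, norms)

def delay_estimate(F_old_d, F_new_d):
    """Delay at which the diagonal-averaged correlation function peaks.

    C[i, j] correlates row i of the old frame with row j of the new frame;
    averaging the diagonal at offset d gives the correlation for delay d."""
    C = normalize_rows(F_old_d) @ normalize_rows(F_new_d).T
    M = C.shape[0]
    rho = [np.mean(np.diagonal(C, offset=d)) for d in range(M)]
    return int(np.argmax(rho))

def delta_max(F_k_d, F_bar_d, F_next_d):
    """Eq. 9 with f() taken as the plain average of the two estimates."""
    return 0.5 * (delay_estimate(F_k_d, F_next_d)
                  + delay_estimate(F_bar_d, F_next_d))
```

The top Int(δ_max) rows of the new frame would then be prepended to the reconstructed image, e.g. `I = np.vstack([F_next[:int(dmax), :], I])`.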
  • Figures 5 and 6 are diagrammatic views that further illustrate an embodiment of fingerprint image reconstruction as described above with reference to Figures 3 and 4.
  • M rows of the fingerprint image data most recently added to reconstructed fingerprint image buffer 500 are defined as extracted frame F̄_k.
  • Fingerprint image buffer 500 will eventually hold the complete reconstructed fingerprint image I.
  • The next fingerprint image frame F_{k+1} also has M rows of image data.
  • The top Int(δ_max) rows of next fingerprint image frame F_{k+1} form new data portion 502.
  • The remaining rows 504 of next fingerprint image frame F_{k+1} generally match the top M − Int(δ_max) rows of extracted frame F̄_k.
  • Figure 6 shows that the top Int(δ_max) rows of next fingerprint image frame F_{k+1}, that is, new portion 502, have been added to reconstructed fingerprint image buffer 500 as described above.
  • New data portion 502 and data overlap portion 602 are then defined as the extracted frame F̄_k to be used during the next iteration of the fingerprint image reconstruction process described above.
  • Microprocessor module 102 then proceeds to use the image I to form a fingerprint image template, if operating in the enrollment mode, or to compare the image I to the library of existing fingerprint image templates, if operating in the identification mode.
  • Figure 7 illustrates a memory map of memory 116 in one embodiment of the present invention.
  • Fingerprint image frame data received from sensor module 104 is held in image data buffer 702.
  • The reconstructed fingerprint image is stored in reconstructed fingerprint image buffer 500.
  • Execution unit 114 uses executable code in memory space 704 to enroll a fingerprint for later identification use, and uses executable code in memory space 706 to determine if an acquired fingerprint image matches an enrolled image.
  • Fingerprint image templates that are built during the enrollment process and that are used during the identification process are stored in fingerprint image template buffer 708.
  • Database management system code required for database module 120, accessible by execution unit 114, is stored in memory space 710.
  • the database management code is adapted for use as embedded code in the portable electronic device (e.g., cellular telephones, personal digital assistants, etc.) that hosts memory 116.
  • Template buffer 708 memory space and/or memory space 710 may reside in, for example, host flash memory, an external memory card, or other high capacity data storage device or devices.
  • memory 116 may also contain operating system code in memory space 712 and one or more application programs in memory space 714 to control other peripheral devices (not shown) such as, by way of example, an access control system that restricts access to use of an electronic device or entry to a physical location.
  • Application programs 714 may also include software commands that, when executed, control the user interface module 122 to inform and instruct the user. Once a user is enrolled, they may invoke an application program by merely swiping their finger over sensor 106.
  • Communication module 118 code may reside in memory space 716 as an application program interface or in memory space 712 as part of operating system code.
  • the memory map depicted in Figure 7 is illustrative of various memory configurations distributed within or among various memory types.
  • Figure 8, assembled from Figures 8A-8C, is a flow diagram illustrating one embodiment of a method for acquiring a fingerprint in the enrollment mode for subsequent use in the identification mode.
  • Each user to be identified is enrolled by acquiring one or more known fingerprint images and generating a template that will be managed by the database management system, together with other identifying information associated with the enrolled user, and stored in fingerprint template buffer 708.
  • Multiple images centered around the fingerprint image core area are acquired and used to construct the fingerprint image template used in the identification mode.
  • The number of acquired fingerprint images will vary depending upon the degree of accuracy required for a particular application.
  • At least three fingerprint images are required to generate a fingerprint image template (in one instance, four images are used).
  • the enrollment process begins at 802 in Figure 8A as user interface module 122 outputs an instruction to a user to swipe a finger across sensor stripe 200.
  • The fingerprint image frames acquired during the user's finger swipe are transferred to memory 116 as described above.
  • the enrollment process initiates the execution of executable code 704.
  • Only a few of the most recently acquired fingerprint image frames are saved in image data buffer 702.
  • In one embodiment, only the two most recently acquired fingerprint image frames are saved. Once fingerprint image frames have been used to add data to the reconstructed fingerprint image, they may be discarded.
  • Executable code 704 begins to reconstruct the fingerprint image from the plurality of fingerprint image frames being received into memory 116.
  • The fingerprint image reconstruction begins in real time and is primarily directed to detecting overlapping fingerprint image frame portions and adding non-overlapping portions as new data to the reconstructed fingerprint image, as described above.
  • initial quality verification is performed at 808.
  • the quality verification process applies a set of statistical rules to determine if the reconstructed image contains sufficient data and is capable of being further processed.
  • The image quality verification process uses two-stage statistical pattern recognition. Pattern recognition is well known in the art, and the choice of technique is an engineering selection that will depend on whether the application requires high accuracy or fast analysis.
  • a statistical database is generated from a collection of known good and bad images. The statistical features of the good and bad images are extracted and a statistical model is created for both good and bad populations.
  • In some embodiments the statistical database is independently generated by each identification system; in other embodiments, it may be preloaded into the identification system from an existing database structure.
  • During verification, the same statistical features are extracted from the newly reconstructed fingerprint image and are compared to the good and bad statistical models. If the reconstructed fingerprint image has characteristics similar to those of a good image, enrollment continues. If the reconstructed fingerprint image has characteristics similar to those of a bad image, the image is considered to have unacceptable quality, the image is discarded, and the user is instructed to repeat the finger swipe, as shown at 810.
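The patent does not disclose which statistics are used; the following Python/numpy sketch assumes simple per-feature Gaussian models for the good and bad populations, with purely illustrative features:

```python
import numpy as np

def features(img):
    # Hypothetical statistics; the patent does not list the actual features.
    return np.array([img.mean(), img.std(),
                     np.mean(np.abs(np.diff(img, axis=0)))])

class QualityModel:
    def fit(self, good_images, bad_images):
        g = np.array([features(i) for i in good_images])
        b = np.array([features(i) for i in bad_images])
        self.mu_g, self.sd_g = g.mean(axis=0), g.std(axis=0) + 1e-9
        self.mu_b, self.sd_b = b.mean(axis=0), b.std(axis=0) + 1e-9
        return self

    def is_good(self, img):
        # Accept if the image is more likely under the "good" model.
        f = features(img)
        ll_g = -0.5 * np.sum(((f - self.mu_g) / self.sd_g) ** 2) \
               - np.sum(np.log(self.sd_g))
        ll_b = -0.5 * np.sum(((f - self.mu_b) / self.sd_b) ** 2) \
               - np.sum(np.log(self.sd_b))
        return ll_g > ll_b
```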
  • The verified, reconstructed image is then cropped.
  • Image cropping accounts for, e.g., very long images with only a portion containing fingerprint data. It will be appreciated that passing a very large image to subsequent processing will consume system resources and result in decreased performance. Cropping strips off and discards non-core fingerprint and finger data. The cropped image will primarily contain data obtained from the core portion of the finger.
  • The cropped image is pre-processed, e.g., to remove noise components or to enhance image quality. For example, a 2-D low-pass filter can be used to remove high-frequency noise, and a 2-D median filter can remove spike-like interference.
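A minimal pre-processing sketch (Python with scipy.ndimage; the filter types follow the text, but the parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def preprocess(img):
    """Remove noise from a cropped fingerprint image.

    A Gaussian kernel serves as the 2-D low-pass filter for high-frequency
    noise; a small median filter then suppresses spike-like interference."""
    smoothed = gaussian_filter(img.astype(float), sigma=1.0)
    return median_filter(smoothed, size=3)
```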
  • The core area of the cropped and pre-processed fingerprint image is identified because this area is generally accepted to be the most reliable for identification. Unlike the image generated by an area fingerprint sensor, the core area of the reconstructed fingerprint image cannot be guaranteed to be located in the neighborhood of the image center. Thus, the executable code scans the cropped fingerprint image to identify the core area.
  • the core area typically exhibits one or more characteristic patterns that can be identified using methods such as orientation field analysis.
  • The fingerprint image may be further cropped to eliminate non-essential portions of the image.
  • The final cropped fingerprint image core area can be as small as a 64x64 pixel image.
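The text names orientation field analysis but no specific algorithm; one common gradient-based sketch (Python/numpy with scipy.ndimage, all parameters illustrative) locates the core as the point of lowest orientation coherence, where ridge direction changes fastest:

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def coherence_map(img, block=8):
    """Orientation-field coherence: near 1 where ridges run parallel and
    low near the core, where ridge orientation turns rapidly."""
    g = img.astype(float)
    gx, gy = sobel(g, axis=1), sobel(g, axis=0)
    gxx = uniform_filter(gx * gx, size=block)
    gyy = uniform_filter(gy * gy, size=block)
    gxy = uniform_filter(gx * gy, size=block)
    return np.hypot(gxx - gyy, 2.0 * gxy) / (gxx + gyy + 1e-9)

def find_core(img, block=8):
    """Return (row, col) of the minimum-coherence point as a core estimate."""
    c = coherence_map(img, block)
    return np.unravel_index(np.argmin(c), c.shape)
```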
  • A second quality verification is performed to ensure that the cropped image of the core area is of sufficient size to enable identification. If the cropped image of the core area is too small, the image is discarded and another fingerprint image is acquired, as indicated at 812. Small images may occur due to very slow finger movement, during which only a small portion of the finger is scanned before scanning time-out. Small images may also occur if the swiped finger is off-center, so that the cropped image contains only a small amount of useful data.
  • One exemplary criterion for small image rejection states that if more than 20 percent of the desired region around the core area is not captured, the image is rejected.
  • an optional second order pre-processing is performed at 822.
  • This second pre-processing performs any signal processing functions that may be required to generate a fingerprint image template. Since the cropped image of the fingerprint image core area is relatively small compared to the data captured by sensor 106, system resource requirements are significantly reduced.
  • When pre-processing at 822 is completed, the image of the core region is stored in template buffer 708 as indicated at 824.
  • For each fingerprint image to be acquired, user interface module 122 outputs the appropriate instruction to the user. For example, the user may be instructed to swipe their right index finger (or, alternatively, any finger the user may choose) across the sensor, to repeat the swipe as necessary to obtain multiple high quality fingerprint images, and to be told that fingerprint image capture and enrollment has been successful.
  • A fingerprint image template is then generated as indicated at 828.
  • A correlation filter technique is employed to form a composite of the multiple cropped fingerprint images. Multiple correlation filters may be used to construct a single fingerprint image template.
  • An advantage of the correlation filter technique is that it requires a relatively small image size to get reliable identification performance. Use of the correlation filter technique on relatively small fingerprint image sizes during the enrollment and identification modes reduces system resource requirements over, for instance, area sensor requirements, in which captured fingerprint images tend to be relatively larger.
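The patent does not specify the filter design; as a minimal stand-in, the sketch below builds a matched-filter-style composite by averaging the conjugate spectra of the enrollment images (practical systems often use more elaborate designs such as MACE filters):

```python
import numpy as np

def build_template(core_images):
    """Composite correlation filter from several equal-size core images."""
    spectra = [np.fft.fft2(img - img.mean()) for img in core_images]
    return np.mean([np.conj(s) for s in spectra], axis=0)

def correlation_plane(template, probe):
    """2-D correlation of a probe core image against the stored template."""
    P = np.fft.fft2(probe - probe.mean())
    return np.real(np.fft.ifft2(P * template))
```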
  • Figure 9 is a flow diagram illustrating an embodiment of a process for the identification mode. If, for instance, motion detector 108 detects a finger, then the identification process begins if the enrollment mode has not been previously activated. The identification process begins at 902, with the acquisition of fingerprint image frames at 904 and reconstruction of the fingerprint image to be used for identification at 906, in accordance with the invention as described above.
  • The fingerprint image quality is then verified. If the fingerprint image is poor, the image data is dumped and processing stops at 910. Consequently, the user remains unidentified, and an application program continues to, e.g., deny access to one or more device functions.
  • The reconstructed fingerprint image is cropped to strip out peripheral data that does not include fingerprint image data to be used for identification.
  • Preprocessing at 914 removes, e.g., noise components or other introduced artifacts and non-linearities.
  • The core area of interest of the acquired fingerprint image is then extracted.
  • The extracted core area's image size is verified. If the image size has degraded, the process moves to 910 and further processing is stopped. If, however, the image size is verified as adequate, at 920 a second image pre-processing is undertaken, and the necessary signal processing functions are performed to condition the extracted, cropped fingerprint image in the same manner as that used to generate fingerprint image templates, as described above.
  • A pattern matching algorithm is used to compare the extracted, cropped, and pre-processed fingerprint image with one or more stored fingerprint image templates.
  • If there is a match (i.e., the core area of the fingerprint image acquired for identification is substantially similar to a stored fingerprint image template), the user who swiped his or her finger is identified as being the one whose identification data is associated with the matching fingerprint image template. Consequently, an application program may, e.g., allow the identified user to access one or more device features, or to access an area.
  • A two-dimensional cross-correlation function between the extracted, cropped, pre-processed fingerprint image and the fingerprint image template is performed. If the comparison exceeds a pre-determined threshold, the two images are deemed to match. If a match is not found, the user who swiped his or her finger remains unidentified. Consequently, e.g., an application program continues to deny access to the unidentified user.
  • A correlation filter may be used, with the peak-to-side lobe ratio (PSR) of a 2-D correlation function compared to a pre-specified threshold. If the PSR value is larger than the threshold, a match is declared. If the PSR value is smaller than the threshold, a mismatch is declared. A sketch of the PSR computation follows.
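A PSR sketch (Python/numpy; the sidelobe exclusion radius and the threshold are illustrative, since the patent leaves them pre-specified):

```python
import numpy as np

def psr(corr_plane, exclude=5):
    """Peak-to-sidelobe ratio: peak height relative to the statistics of the
    correlation plane outside a small window around the peak."""
    r, c = np.unravel_index(np.argmax(corr_plane), corr_plane.shape)
    peak = corr_plane[r, c]
    mask = np.ones(corr_plane.shape, dtype=bool)
    mask[max(r - exclude, 0):r + exclude + 1,
         max(c - exclude, 0):c + exclude + 1] = False
    sidelobes = corr_plane[mask]
    return (peak - sidelobes.mean()) / (sidelobes.std() + 1e-9)

def is_match(corr_plane, threshold=10.0):
    return psr(corr_plane) > threshold
```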
  • Fingerprint image processing and identification in accordance with the present invention allows sensor system 100 to be used in many applications, since such processing and identification are accurate, reliable, efficient, and inexpensive. Fingerprints are accepted as a reliable biometric way of identifying people.
  • The present invention provides accurate fingerprint identification, as illustrated by the various cropping, pre-processing, template generation, and image comparison processes described above. Further, system resource requirements (e.g., memory, microprocessor cycles) of the various embodiments of the present invention are relatively small. As a result, user enrollment and subsequent identification tasks are executed in real time and with small power consumption.
  • Figure 10 is a diagrammatic perspective view of an illustrative electronic device 1002 in which fingerprint identification system 100 is installed. As illustrated in Figure 10, electronic device 1002 is portable, and fingerprint identification system 100 operates as a self-contained unit within electronic device 1002. In other illustrative embodiments discussed below, electronic device 1002 and fingerprint identification system 100 are communicatively linked to one or more remote stations.
  • Examples of portable electronic devices 1002 include cellular telephone handsets, personal digital assistants (hand-held computers that enable personal information to be organized), laptop computers (e.g., VAIO manufactured by Sony Corporation), portable music players (e.g., WALKMAN devices manufactured by Sony Corporation), digital cameras, camcorders, and portable gaming consoles (e.g., PSP manufactured by Sony Corporation). Examples of fixed electronic devices 1002 are given below in the text associated with Figure 11.
  • sensor stripe 202 may be located in various positions on electronic device 1002. In some instances sensor stripe 202 is positioned in a shallow channel 1004 to assist the user in properly moving the finger over sensor stripe 202.
  • Figure 10 shows the channel 1004 and sensor stripe 202 combination variously positioned on top 1006, side 1008, or end 1010 of electronic device 1002.
  • The channel 1004 and sensor stripe 202 combination is ergonomically positioned so as to allow the user to easily swipe his or her finger without interfering with device functions such as illustrative output display 1012 or illustrative keypad 1014.
  • more than one sensor stripe 202 may be positioned on a single electronic device 1002 (e.g., to allow for convenient left- or right-hand operation, or to allow for simultaneous swipe of multiple fingers by one or more users).
  • Figure 11 is a diagrammatic view of an illustrative system 1100 that includes electronic device 1002, one or more devices or computing platforms remote from electronic device 1002 (collectively termed a "remote station"), and fingerprint identification system 100.
  • In some embodiments, fingerprint identification system 100 is contained within electronic device 1002.
  • In other embodiments, the fingerprint identification system is distributed among two or more remote devices.
  • electronic device 1002 communicates via link 1102 (e.g., wired, wireless) with remote station 1104.
  • Remote station 1104 may include or perform one or more of the functions described above for microprocessor module 102.
  • A large number of fingerprint image templates may be stored and managed by database 1106 in a nation-wide identification system (e.g., one in which multiple electronic devices 1002 access station 1104 to perform fingerprint identifications).
  • Figure 11 further illustrates embodiments in which electronic device 1002 communicates via communications link 1108 (e.g., wired, wireless) with a second electronic device 1110.
  • remote station 1104 and the second electronic device 1110 may communicate directly via communications link 1112 (e.g., wired, wireless).
  • remote station 1104 is a computing platform in second electronic device 1110.
  • In one illustrative application, electronic device 1002 is fixed on a wall.
  • A user swipes a finger over sensor unit 106 in electronic device 1002, and the fingerprint swipe information is sent via communications link 1102 to remote station 1104.
  • Remote station 1104 receives the sampled fingerprint image frames, reconstructs and processes the fingerprint image, and then compares the user's fingerprint with fingerprint image templates stored in database 1106. If a match is found, remote station 1104 communicates with second electronic device 1110, either directly via communications link 1112 or indirectly via communications link 1102, electronic device 1002, and communications link 1108, so as to authorize second electronic device 1110 to open a door adjacent the wall on which electronic device 1002 is fixed.
  • In another illustrative application, a similar user identification function matches a user with a credit or other transactional card (e.g., FELICA manufactured by Sony Corporation) to facilitate a commercial transaction.
  • Remote station 1104 compares card information input at second electronic device 1110 and a user fingerprint image input at electronic device 1002 to determine if the transaction is authorized.
  • Electronic device 1002 may be a peripheral device communicatively coupled with a personal computer (e.g., stand-alone, or incorporated into a pointing device such as a mouse).
  • The methods described herein may be implemented in any suitable programming language, including C, C++, Java, assembly language, etc. Different programming techniques can be employed, such as procedural or object-oriented.
  • the routines can execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different embodiments. In some embodiments, multiple steps shown as sequential in this specification can be performed at the same time.
  • The sequence of operations described herein can be interrupted, suspended, or otherwise controlled by another process, such as an operating system, kernel, etc.
  • the routines can operate in an operating system environment or as stand-alone routines occupying all, or a substantial part, of the system processing.
  • Memory, for purposes of embodiments of the present invention, may be any medium that can contain, store, communicate, propagate, or transport a program or data for use by or in connection with the instruction execution system, apparatus, or device.
  • the memory can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory.
  • Embodiments of the invention may be implemented by using a programmed general purpose digital computer, application specific integrated circuits, programmable logic devices, or field programmable gate arrays; optical, chemical, biological, quantum, or nanoengineered systems, components, and mechanisms may also be used.
  • the functions of the present invention can be achieved by any means as is known in the art.
  • Distributed, or networked systems, components and circuits can be used.
  • Communication, or transfer, of data may be wired, wireless, or by any other means.
  • any signal arrows in the figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted.
  • The term "or" as used herein is generally intended to mean "and/or" unless otherwise indicated. Combinations of components or steps will also be considered as being noted, where terminology is foreseen as rendering the ability to separate or combine unclear.

Abstract

In accordance with an embodiment of the present invention, an efficient and accurate system and method are described for detecting a fingerprint from a swipe sensor system. A swipe sensor module (104) is coupled to a microprocessor module (102). The swipe sensor module (104) collects fingerprint image data as a plurality of frames and passes the frames of data to the microprocessor module (102). The microprocessor module (102) assembles the frames of data into a complete image of the fingerprint, which is cropped and processed to remove noise and artifacts. The cropped image is used to generate a template in a first instance and to compare an extracted portion of the cropped image to existing templates in a second instance.

Description

SYSTEM FOR FINGERPRINT IMAGE RECONSTRUCTION BASED ON MOTION ESTIMATE ACROSS A NARROW FINGERPRINT SENSOR
Cross Reference to Related Applications
[01] This application claims priority from provisional U.S. Patent Application No. 60/565,256, entitled "FINGERPRINT IDENTIFICATION SYSTEM USING A SWIPE FINGERPRINT SENSOR," filed 23 April 2004 (Attorney Docket No. 020699-100900US), and from provisional U.S. Patent Application No. 60/564,875, entitled "FINGERPRINT IMAGE RECONSTRUCTION BASED ON MOTION ESTIMATE ACROSS A NARROW FINGERPRINT SENSOR," filed 23 April 2004 (Attorney Docket No. 020699-101000US), both of which are incorporated by reference.
[02] This application also claims priority to the following patent applications, which are incorporated by reference herein: U.S. Patent Application Ser. No. 10/927,599, entitled "SYSTEM FOR FINGERPRINT IMAGE RECONSTRUCTION BASED ON MOTION ESTIMATE ACROSS A NARROW FINGERPRINT SENSOR," filed on August 25, 2004 (Attorney Docket No. 020699-100910US), and U.S. Patent Application Ser. No. 10/927,178, entitled "FINGERPRINT IMAGE RECONSTRUCTION BASED ON MOTION ESTIMATE ACROSS A NARROW FINGERPRINT SENSOR," filed August 25, 2004 (Attorney Docket No. 020699-101010US).
Background Of The Invention
1. Field of the invention.
[03] The present invention relates to personal identification using biometrics and, more specifically, to a method for reconstructing a fingerprint image from a plurality of image frames captured using a swipe fingerprint sensor.
2. Related art. [04] Identification of individuals is an important issue for, e.g., law enforcement and security purposes, for area or device access control, and for identity fraud prevention. Biometric authentication, which is the science of verifying a person's identity based on personal characteristics (e.g., voice patterns, facial characteristics, fingerprints), has become an important tool for identifying individuals. Fingerprint authentication is often used for identification because of the relative ease, non-intrusiveness, and general public acceptance in acquiring fingerprints.
[05] To address the need for rapid identification, fingerprint recognition systems have been developed that use electronic sensors to measure fingerprint ridges with a capacitive image capture system. One type of system captures the fingerprint as a single image. To use such sensors, an individual places a finger (any of the five manual digits) on the sensor element and holds the finger motionless until the system captures a good quality fingerprint image. But the cost of the capacitive fingerprint sensor is proportional to the sensor element area, so there is a compelling need to minimize the sensor element area while at the same time ensuring that no relevant portion of the fingerprint is omitted during image capture. Further, large sensors require substantial area to install and are impracticable for many portable applications, such as verifying the owner of portable electronic devices such as personal digital assistants or cellular telephones.
[06] One way of reducing sensor size and cost is to rapidly sample data from a small area capacitive sensor element as a finger is moved (or "swiped") over the sensor element. In these "swipe" sensors, the small area sensor element is generally wider but shorter than the fingerprint being imaged. Sampling generates a number of image frames as a finger is swiped over the sensor element, each frame being an image of a fraction of the fingerprint. The swipe sensor system then reconstructs the image frames into a complete fingerprint image.
[07] While swipe fingerprint sensors are relatively inexpensive and are readily installed on most portable electronic devices, the amount of computation required to reconstruct the fingerprint image is much greater than the computation required to process a fingerprint captured as a single image. Swipe sensor computation requirements increase system costs and result in poor identification response time. Computational requirements are further increased because of variations in a digit's swipe speed and the need to accommodate various finger positions during the swipe as the system reconstructs a complete fingerprint image from the frames generated during the swipe. The sensor system must determine the finger's swipe speed so as to extract only the new portion of each succeeding frame as the system reconstructs the fingerprint image. Thus, for effective use for identification, current swipe sensors must be coupled to a robust computing system that is able to reconstruct the fingerprint image from the image frames in real or near-real time.
[08] Another major drawback to the use of fingerprints for identification purposes arises from the difficulty in associating the captured image with a particular individual, especially in portable applications. The output of the sensor is typically compared to a library of known fingerprints using pattern recognition techniques. It is generally recognized that the core area of a fingerprint is the most reliable for identification purposes. With the image acquired by a large area sensor, the core area is consistently located in the general center of the image. With a swipe sensor, however, it is difficult to locate this core area. Unlike the image generated by a large area fingerprint sensor, the core location of the image reconstructed by a swipe sensor cannot be guaranteed to be located in the neighborhood of the image center due to the way a digit may be positioned as it is swiped over the sensor element.
[09] For this reason, the use of fingerprint verification has been limited to stationary applications requiring a high degree of security, and widespread adoption of fingerprint identification has been limited. What is needed is a fingerprint identification system that is inexpensive, that efficiently assembles swipe sensor frames into a fingerprint image, that locates the core area of the reconstructed fingerprint image, that authenticates the captured fingerprint image in real or near-real time, and that performs these tasks using the limited computing resources of portable electronic devices.
Summary of Embodiments of the Invention

[10] In accordance with an embodiment of the present invention, a low cost fingerprint identification system and method is provided. More specifically, the present invention relates to reconstructing a fingerprint image from a plurality of frames of image data obtained from a swipe sensor fingerprint identification system. Each frame comprises a plurality of lines, with each line comprising a plurality of pixels.
[11] The frames are transferred to a host where the fingerprint image is reconstructed. After a first frame F_1 is stored in a reconstructed image matrix (denoted I), the new data portion of each subsequent frame is stored in the image matrix until a complete image of the fingerprint is obtained.
[12] The fingerprint image reconstruction process of the present invention determines how many lines in each frame are new data. This determination is based on a motion estimate that is obtained for each frame after the first frame. To reduce computational overhead, the motion estimate process initially decimates each frame by reducing the number of pixels in each row. Decimation reduces subsequent computational requirements without reducing resolution in motion estimation, and enables real-time processing even if system resources are limited. The decimated frame is then normalized, and a correlation process determines the amount of overlap between consecutive frames. The correlation process generates a delay factor that indicates how many new lines have moved into each frame relative to the immediately preceding frame. The correlation process continues until there are no further frames to add to the reconstructed matrix.
[13] The present invention further provides an enrollment mode and an identification mode. The enrollment mode is used to build a database of templates that represent authorized or known individuals. In the identification mode, a fingerprint image is processed and compared to templates in the database. If a match is found, the user is authenticated. If no match is found, that condition is noted and the user is provided the opportunity to enroll. A user interface is used to assist the user in use of the system.

[14] Because the fingerprint image is reconstructed from a plurality of frames of data, and because of the potential for poor image quality arising from improper orientation, improper swipe speed, or other reasons, the present invention includes an image quality check to make sure that the image contains adequate information to arrive at a user identification. Further, to minimize computational resources, the present invention crops the reconstructed frame, removes noise components, and then extracts a small, core portion of the fingerprint. The core portion of the fingerprint image is used to generate a template for the database when the system is operating in the enrollment mode. When the system is operating in the identification mode, the core portion is compared to the template to determine if there is a match.
[15] Advantageously, the reconstruction method efficiently converts the frames of a finger scan into an accurate image of the fingerprint. Then, once the image is obtained, the enrollment and identification modes are well suited for implementation in portable electronic devices such as cellular telephones, PDAs, portable computers, or other electronic devices. These and other features, as well as advantages that characterize the present invention, will be apparent from a reading of the following detailed description and review of the associated drawings.
Brief Description of the Drawings
[16] Figure 1 is a simplified block diagram illustrating one exemplary embodiment of a fingerprint identification system in accordance with an embodiment of the present invention.
[17] Figure 2 illustrates an exemplary swipe fingerprint sensor in accordance with an embodiment of the present invention.
[18] Figure 3 shows one method for reconstructing a fingerprint image from a plurality of frames acquired from the swipe fingerprint sensor in accordance with an embodiment of the present invention.

[19] Figure 4 illustrates the formation of the extracted arrays used to calculate the delay factor between frames in accordance with an embodiment of the present invention.
[20] Figure 5 shows an intermediate fingerprint image buffer and a current frame acquired from the swipe fingerprint sensor in accordance with an embodiment of the present invention.
[21] Figure 6 shows an updated fingerprint image buffer in accordance with an embodiment of the present invention.
[22] Figure 7 illustrates an exemplary memory map showing the components for enrolling and identifying the fingerprints of a user in accordance with an embodiment of the present invention.
[23] Figures 8 and 8A-8C show the enrollment mode of operation in accordance with an embodiment of the present invention.
[24] Figure 9 shows the identification mode of operation in accordance with an embodiment of the present invention.
[25] Figure 10 is a diagrammatic perspective view of an illustrative electronic device that includes a fingerprint identification system in accordance with an embodiment of the present invention.
[26] Figure 11 is a diagrammatic view of an illustrative system that includes an electronic device having a swipe sensor and a computing platform remote from the electronic device for storing and identifying fingerprints in accordance with an embodiment of the present invention.
Detailed Description of Embodiments of the Invention

[27] Referring now to the drawings, more particularly by reference numbers, an exemplary embodiment of a fingerprint identification system 100 is illustrated in Figure 1. System 100 includes a microprocessor module 102 and a fingerprint sensor module 104 that operates under the control of microprocessor 102.
[28] Fingerprint sensor module 104 includes a swipe sensor 106, which includes a swipe sensor stripe 200 (Figure 2), over which a finger is moved, and associated electronic circuits. In some embodiments, the sensor stripe area of swipe sensor 106 is much smaller than the surface area of a typical fingerprint so that, as the finger is moved ("swiped") across the sensor stripe, partial fingerprint images are acquired sequentially in time. Fingerprint sensor module 104 also includes finger motion detector 108, analog to digital converter (ADC) 110, and data buffer 112 that receives data from swipe sensor 106 and motion detector 108. Data buffer 112 is illustrative of various single and multiple buffer (e.g., "double-buffering") configurations.
[29] Microprocessor module 102 includes an execution unit 114, such as a PENTIUM brand microprocessor commercially available from INTEL CORPORATION. Microprocessor module 102 also includes memory 116. In some embodiments memory 116 is a random access memory (RAM). In some embodiments, memory 116 is a combination of volatile (e.g., static or dynamic RAM) and non-volatile (e.g., ROM or Flash EEPROM) memory. Communication module 118 provides the interface between sensor module 104 and microprocessor module 102. In some instances, communication module 118 is a peripheral interface module such as a universal serial bus (USB), an RS-232 serial port, or any other bus, whether serial or parallel, that accepts data from a peripheral. In instances in which a dedicated microprocessor module 102 is implemented on a single semiconductor substrate together with sensor module 104, communication module 118 functions as, e.g., a bus arbiter. Database module 120 manages one or more fingerprint image templates stored (e.g., in memory 116) for use during identification of a particular individual. A user interface (UI) module 122 enables system 100 to communicate with a user in various ways known in the electronic arts. In some embodiments, UI module 122 includes an output, such as a video display or light emitting diodes, and/or an input, such as a keypad, a keyboard, or a mouse. Thus, in some instances system 100 prompts a user to place a finger on swipe sensor 106's sensor stripe and to swipe the finger in a specified direction. If the swipe results in an error, system 100 instructs the user to repeat the swipe. In some instances, system 100 instructs the user how to move the finger across the sensor by displaying, e.g., a video clip.
[30] If motion detector 108 detects the presence of a finger about to be swiped, motion detector 108 transmits an interrupt signal to microprocessor module 102 (e.g., an interrupt signal that execution unit 114 detects). In response to the received interrupt signal, execution unit 114 accesses executable code stored in memory 116 and the two-way communication between sensor unit 104 and microprocessor module 102 is established.
[31] As a finger is swiped over the sensor stripe, swipe sensor 106 generates an analog signal that carries the partial fingerprint image data frames for a fingerprint image. ADC 110 receives and converts the analog signal from swipe sensor 106 into a digital signal that is directed to data buffer 112. Data buffer 112 stores data associated with one or more of the captured fingerprint image data frames received from ADC 110. Image data from data buffer 112 is then transferred to microprocessor module 102, which performs signal processing functions in real or near-real time to reconstruct and identify the complete fingerprint image. In some embodiments, the image frame data in data buffer 112 is transferred to microprocessor module 102 in small chunks (e.g., one pixel row at a time, as described in more detail below) to reduce the amount of memory required in sensor module 104.
[32] If motion detector 108 indicates to microprocessor module 102 that the finger swipe is complete, execution unit 114 initiates the transfer of data from buffer 112 to memory 116. Alternately, data from swipe sensor 106 begins to be transferred to execution unit 114 if the beginning of a finger swipe is detected. Execution unit 114 stops receiving data generated by swipe sensor 106 if the swipe is completed, if no finger is present, if finger motion over swipe sensor 106 stops, or if the swipe duration exceeds a maximum time allowed by the system (i.e., a system timeout feature).
[33] As a power saving feature, in some embodiments sensor module 104 remains in a quiescent state until motion detector 108 detects motion. When motion detector 108 detects a finger being moved, it triggers sensor module 104 into full power operation. At substantially the same time, the motion detection activates a communication link with microprocessor module 102. Once activated, partial fingerprint image frame data in data buffer 112 is transferred to microprocessor module 102, which performs signal-processing functions to reconstruct the fingerprint image, as described in detail below.
[34] There are two system 100 operating modes. The first operating mode is the enrollment mode, in which an individual is enrolled in identification system 100. When the enrollment mode is selected, several of the individual's fingerprint images are captured, together with other identifying information such as the individual's name, physical description, address, and photograph. Each fingerprint image captured by sensor module 104 is verified and processed by microprocessor module 102 to generate a template of the individual's fingerprint. The template is stored (e.g., under control of database module 120) for later use during identification when system 100 is operating in the second mode.
[35] The second system 100 operating mode is the identification mode, in which system 100 determines if an individual is identified. When the identification mode is selected, sensor module 104 acquires a fingerprint image that is processed by microprocessor module 102. If the acquired image meets one or more predetermined criteria, it is compared to the library of stored fingerprint image templates. If there is a match between the acquired image and a stored image template, then the individual has been successfully identified. The results of the comparison may then be further acted upon by microprocessor module 102 or by a second electronic device or system. By way of example, if microprocessor module 102 is coupled to an electronic door lock, and an individual with previous authorization desires to open the door, microprocessor module 102 controls the electronic door lock to unlock the door.

[36] An exemplary swipe fingerprint sensor stripe 200, part of swipe sensor 106, is illustrated in Figure 2. Swipe fingerprint sensor stripe 200 comprises an array of picture element ("pixel") capacitive sensors, such as pixel 202, that are arranged in a plurality of rows, illustrated in Figure 2 as rows r_1 through r_M, and a plurality of columns, illustrated in Figure 2 as columns c_1 through c_N. The intersection of each row and column defines the location of a pixel capacitive sensor.
[37] Sensor stripe 200 may have any number of rows of pixels. Figure 2 shows sensor stripe 200 having 12 rows to illustrate the invention, but it is common for sensor stripe 200 to have between 12 and 36 pixel rows. Some embodiments use either 16 or 24 pixel rows. Likewise, sensor stripe 200 may have various numbers of pixel columns. In one illustrative embodiment, each row r_i of sensor stripe 200 has 192 pixels. In other embodiments, each row has 128 pixels when sensor stripe 200 has 16 or 24 rows. The number of columns is generally such that sensor stripe 200 is wider than finger 204, shown by the phantom line in Figure 2, to be swiped across it. In some embodiments, however, the number of columns can be lessened such that sensor stripe 200 is somewhat narrower than a finger to be swiped across it, as long as sufficient fingerprint image data is captured for effective identification. (The fingerprint image core area is discussed in more detail below.) The number of pixels in each row and column will typically depend on various design parameters such as the desired resolution, the data processing capability available to reassemble the fingerprint image frames, the anticipated maximum digit swipe speed, and production cost constraints.
[38] Arrow 206 shown in Figure 2 illustrates a direction of finger 204 movement, as sensed by motion detector 108. Although Figure 2 shows sensor stripe 200 oriented such that the pixel columns are generally parallel to the finger (e.g., the fingerprint image is reconstructed from bottom-to-top), in other embodiments sensor stripe 200 may be oriented such that the pixel columns are generally perpendicular to the finger (e.g., the fingerprint image is reconstructed from right-to-left).

[39] Because the area of swipe sensor 106's sensor stripe 200 is less than the fingerprint area from which data is generated, fingerprint identification system 100 acquires at least two fingerprint image data frames as the finger is swiped across sensor stripe 200. Each fingerprint image frame represents a fraction of finger 204's fingerprint topology. To illustrate, in one case sensor stripe 200 is made of 24 rows, each row having 128 pixels. Each fingerprint image frame is therefore made of an array of 3,072 pixels. A plurality of fingerprint image frames is acquired as the finger is swiped across sensor stripe 200. Data buffer 112 has sufficient depth to store fingerprint image data frames between each data transfer to microprocessor module 102. In one embodiment, data buffer 112 includes a first buffer portion large enough to store only one row of 128 pixels. So that the next line of the scanned image does not overwrite the currently stored line, a second buffer portion is needed to store the next line of image data while the first line is transferred to microprocessor module 102. This scheme is often referred to as double buffering. By rapidly sampling the analog signal generated by swipe sensor 106, two or more fingerprint image frames are captured.
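By way of illustration only, the double-buffering scheme can be sketched in a few lines of Python. This is a minimal sketch with hypothetical names, not the sensor module's actual firmware; a real device would coordinate the buffer swap with interrupts or DMA rather than method calls.

    # Minimal double-buffering sketch (hypothetical names). One line buffer
    # is filled by the sensor/ADC while the other is drained to the host;
    # after each completed line the two swap roles.
    class DoubleLineBuffer:
        def __init__(self, line_width=128):
            self.buffers = [bytearray(line_width), bytearray(line_width)]
            self.fill_index = 0  # buffer currently being written by the ADC

        def write_line(self, line):
            # Sensor/ADC side: store the newly sampled line, then swap.
            self.buffers[self.fill_index][:] = line
            self.fill_index ^= 1

        def read_line(self):
            # Host side: read the most recently completed line.
            return bytes(self.buffers[self.fill_index ^ 1])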
[40] As the finger is moved from top to bottom over sensor stripe 200, the initially captured fingerprint image frames represent a middle portion of the fingerprint and the last several frames represent the tip of the fingerprint. In other embodiments swipe sensor 106 is positioned to accept digit swipes from right to left, left to right, bottom to top, or in various other directional orientations, depending on how swipe sensor 106 is physically positioned or on a desired design feature. In some instances, a combination of swipe directions may be used during enrollment or identification.
[41] During operation, finger 204 is swiped across sensor stripe 200 in the direction of arrow 206. When finger motion detector 108 detects the presence of finger 204, the capacitive pixels of sensor stripe 200 are rapidly sampled, thereby generating fingerprint image frames of the complete fingerprint. In one illustrative embodiment, more than one hundred frames are captured from the time finger 204 first contacts sensor stripe 200 until finger 204 is no longer in contact with sensor stripe 200. In one illustrative embodiment, system 100 accepts finger movement speeds as fast as 20 centimeters per second. It will be appreciated that the number of generated fingerprint image frames will vary depending on how fast the finger is swiped and the length of the finger. It will also be appreciated that the swipe rate may vary during the swipe. For example, if the swipe is paused for a fraction of a second, many frames may contain identical or nearly identical data.
[42] The acquired fingerprint image frames are assembled to form a complete fingerprint image. The sample rate is fast enough to ensure that the fingerprint image is over-sampled during a swipe. Such over-sampling ensures that a portion of one fingerprint image frame contains information identical to that in a portion of the next subsequent fingerprint image frame. This matching data is used to align and reassemble the fingerprint image frames into a complete fingerprint image. In one illustrative embodiment, microprocessor module 102 assembles the fingerprint image frames in real time, such that only the two most recently sampled fingerprint image frames are required to be stored in host memory 116. In this embodiment, slow finger swipe speed will not tax system memory resources. In another illustrative embodiment, microprocessor module 102 receives and stores all captured fingerprint image data frames before assembling them into a complete fingerprint image.
[43] Fingerprint image reconstruction is done in some embodiments by using a process based on three image data frames represented as matrices. As described above, swipe sensor stripe 200 illustratively has M rows and N columns of pixels 202.
[44] Each of the three image data frames is associated with sensor stripe 200's pixel matrix dimensions of M rows and N columns. The following description is based on a fingerprint image data frame of 12 rows and 192 columns (i.e., 2,304 pixels), a matrix size that is illustrative of various matrix sizes within the scope of the invention. The reconstructed fingerprint image will have the same width (e.g., 192 pixels) as the fingerprint image data frame.
[45] The first fingerprint image frame that is used for fingerprint image reconstruction is the most recent fingerprint image frame F_k (the "prior frame") from which rows have been added to the reconstructed fingerprint image. The second fingerprint image frame that is used is the next fingerprint image frame F_{k+1} (the "next frame") from which rows will be added to the reconstructed fingerprint image. During real or near-real time processing, next frame F_{k+1} is just received at microprocessor module 102 from sensor module 104 in time for processing. A copy of prior frame F_k is held in memory (e.g., memory 116) until next frame F_{k+1} is processed and becomes the new prior frame. The third fingerprint image frame is an M x N fingerprint image frame \hat{F}_k (the "extracted frame") that is extracted from the reconstructed fingerprint image. The extracted frame \hat{F}_k is made of the most recent rows added to the reconstructed fingerprint image.
[46] As a finger passes over sensor stripe 200, the sampled fingerprint image data frames will have overlapping data. By computing correlations between F_k and F_{k+1}, and between \hat{F}_k and F_{k+1}, microprocessor module 102 determines the number of new image data rows from F_{k+1} to be added to the reconstructed fingerprint image buffer. The process continues until the final new image data rows, from the last fingerprint image data frame received at microprocessor module 102, are added to the reconstructed fingerprint image.
[47] In general, each line i of a fingerprint image frame F_k may be represented in matrix notation as:

l_i^k = \begin{bmatrix} p_{i,1}^k & p_{i,2}^k & \cdots & p_{i,N}^k \end{bmatrix}    Eq. 1

where p_{i,j}^k is the j-th pixel in the i-th line.
[48] The k-th frame may be represented in matrix notation in terms of line vectors as:

F_k = \begin{bmatrix} l_1^k \\ l_2^k \\ \vdots \\ l_M^k \end{bmatrix}    Eq. 2

where l_i^k represents the i-th line in the k-th data frame. The next frame F_{k+1} is similarly represented.
[49] The frame extracted from the reconstructed fingerprint image matrix I may be represented as:

\hat{F}_k = I(1 : M, :)    Eq. 3

where "1 : M" indicates that the M most recently added rows are extracted from the reconstructed fingerprint image matrix I, and the second ":" indicates that all of the N column elements are extracted.
[50] Ideally, the information in F_k and \hat{F}_k is identical, since both represent the same M rows of fingerprint image data. In practice, however, there are variations due to uneven finger swipe speeds, data transcription errors (noise), and other real world problems such as quantization error arising from non-integer movement and normalization error. A reassembly process in accordance with the present invention makes allowances for such real world difficulties.
[51] Figures 3-6, considered together, illustrate embodiments of fingerprint image reassembly from the sampled fingerprint image frames. One portion of memory 116 acts as a received fingerprint image frame buffer that holds one or more sampled fingerprint image frame data sets received from sensor module 104. Another portion of memory 116 acts as a reconstructed image buffer that holds the complete fingerprint image data I as it is assembled by microprocessor module 102.

[52] As shown in Figure 3, fingerprint image reconstruction begins at 300 as the first sampled fingerprint image frame data F_1 is received into the fingerprint image frame buffer of memory 116. Since this is the first fingerprint image frame data, it can be transferred directly into the reconstructed image frame buffer, as shown at 302. In other instances, the process described below can be used with values initialized to form a "prior" frame in the reconstructed image frame buffer. At the conclusion of 302, at least M rows exist in the reconstructed fingerprint image buffer.
[53] At 304, extracted frame \hat{F}_k is created from data in the reconstructed fingerprint image buffer. Then, prior frame F_k and extracted frame \hat{F}_k are decimated to form two smaller matrices, represented as F_k^d and \hat{F}_k^d, an operation that assists processing speed during the calculations described below. This operation is diagrammatically illustrated in Figure 4, which shows prior frame 402 and extracted frame 404 each decimated to form associated decimated prior frame 406 and decimated extracted frame 408. Referring again to Figure 3, at 306 the next frame F_{k+1} is received and decimated in like manner, represented as F_{k+1}^d. Figure 4 illustrates next frame 414 decimated and shown in two instantiations as decimated next frames 410 and 412. The decimated arrays each comprise an M x D matrix, where M equals the number of rows of sensor stripe 200 and D equals the decimated number of columns (or pixels per line). Decimating the matrices into D columns reduces the computation load on microprocessor module 102 by reducing the number of columns carried forward. For example, if the matrices each have 192 columns, the associated decimated matrices may each have, e.g., only 16 columns. Decimation should occur in real time to facilitate sensor use.
[54] There are many possible decimation methods. For example, the frame may be decimated by keeping only the central 16 columns, or by selecting the first column and every 10th or 12th column thereafter. In other illustrative embodiments, an average of a selected number of columns (e.g., ten) is used to form the decimated M x D matrices, or the sum of every N/16 pixels is taken using a sliding window. Although these averaging processes require slightly more computation than a non-averaging process, the averaging process is more robust and results in better compensation for non-linear or slanted finger swiping. The averaging operation functions as a low-pass filter in the horizontal direction, such that the low-pass smoothing alleviates any change in signal characteristic due to horizontal shift created by a non-linear swipe. Decimation is not required, however, and in some instances when design considerations and data processing capability allow, the three matrices are processed in accordance with the invention without such decimation.
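As an illustration of the averaging variant, the following sketch reduces an M x N frame to M x D by block-averaging groups of columns. It assumes numpy and a decimation width D that evenly divides the row length; neither assumption comes from the text, which leaves the decimation method open.

    import numpy as np

    def decimate_columns(frame, d=16):
        """Reduce an M x N frame to M x d by averaging groups of columns.

        Averaging, rather than simply dropping columns, acts as a
        horizontal low-pass filter, which helps tolerate slanted swipes.
        """
        m, n = frame.shape
        w = n // d  # columns averaged per output column (assumes d divides n)
        return frame[:, : w * d].reshape(m, d, w).mean(axis=2)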
[55] Referring again to Figure 3, at 308, matrices F_k^d, \hat{F}_k^d, and F_{k+1}^d are normalized. Then, one correlation coefficient matrix is calculated using normalized F_k^d and F_{k+1}^d, and a second correlation coefficient matrix is calculated using normalized \hat{F}_k^d and F_{k+1}^d. Next, two sets of correlation functions are computed by averaging the τ-th diagonal of the correlation coefficient matrices. These two sets of correlation functions correspond to the correlation between the new frame F_{k+1} and the prior frame F_k, and the correlation between the new frame F_{k+1} and the extracted frame \hat{F}_k.
[56] Figure 4 shows illustrative correlation engine 416 calculating a correlation coefficient matrix and correlation function R^1(τ) from F_k^d and F_{k+1}^d, and illustrative correlation engine 418 calculating a correlation coefficient matrix and correlation function R^2(τ) from \hat{F}_k^d and F_{k+1}^d. Correlation engines 416 and 418 then compute the delay (i.e., estimated finger motion between frames) between frame F_k 402 and frame F_{k+1} 414 to determine the number of rows from frame F_{k+1} 414 that should be appended to the fingerprint image data stored in the reconstructed fingerprint image buffer. Although shown as separate elements to illustrate the invention, one skilled in the art will appreciate that a single correlation engine 416 may be utilized in practice. Similarly, although two copies of the decimated next frame F_{k+1} 410 and 412 are illustrated, it will be appreciated that a single decimated next frame F_{k+1} 410 may be correlated to both decimated frames 406 and 408. It will also be appreciated that arrays 406-412, as well as correlation engines 416 and 418, are stored as data or as coded instructions in memory 116, and are either accessed as data or executed as instructions by execution unit 114.

[57] Referring to Figure 3, at 310 the peak correlation locations are found to determine how many lines in the new frame are to be moved into the reconstructed image matrix. In one embodiment, correlation is calculated using the following equations:
C_k^1 = P_k F_k^d (F_{k+1}^d)^T P_{k+1}    Eq. 4

and

C_k^2 = \hat{P}_k \hat{F}_k^d (F_{k+1}^d)^T P_{k+1}    Eq. 5

where T denotes the matrix transpose and P \in \mathbb{R}^{M \times M} is a diagonal matrix with the i-th diagonal element defined as:

P_{ii} = \Big( \sum_{j=1}^{D} (p_{i,j})^2 \Big)^{-1/2}    Eq. 6

and \mathbb{R}^{M \times M} denotes the real M x M vector space. In one embodiment, the P matrix is a 16 by 16 matrix that is used to normalize each row to uniform energy.
[58] The resulting correlation functions R^1(τ) and R^2(τ) in the Y direction are then calculated, where τ varies from zero to M-1. These functions are obtained by averaging the τ-th diagonal of the correlation coefficient matrices C_k^1 and C_k^2, and the peak correlation locations are found in accordance with the functions:

τ_max^1 = \arg\max_τ R^1(τ)    Eq. 7

and

τ_max^2 = \arg\max_τ R^2(τ)    Eq. 8
[59] The motion or delay across the swipe sensor is then calculated by:

τ_max = f(τ_max^1, τ_max^2)    Eq. 9

where the function f(·) can be a weighted average or the plain average of its arguments. The purpose of averaging the two delay estimates is to improve the overall estimation quality.
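The correlation and delay estimate of Eqs. 4-9 can be sketched as follows. This is a minimal numpy illustration, assuming M x D decimated frames, a plain (unweighted) average for f(·), and new rows entering at the top of each frame; it is offered as a sketch, not as the patented implementation itself.

    import numpy as np

    def row_normalize(a):
        # Eq. 6: scale each row to unit energy (the diagonal P matrix).
        norms = np.linalg.norm(a, axis=1, keepdims=True)
        return a / np.where(norms == 0.0, 1.0, norms)

    def delay_estimate(prior_d, extracted_d, next_d):
        """Estimate tau_max between consecutive decimated frames (Eqs. 4-9)."""
        m = prior_d.shape[0]
        c1 = row_normalize(prior_d) @ row_normalize(next_d).T      # Eq. 4
        c2 = row_normalize(extracted_d) @ row_normalize(next_d).T  # Eq. 5
        # R(tau): average the tau-th diagonal of each correlation matrix.
        r1 = [np.mean(np.diagonal(c1, offset=t)) for t in range(m)]
        r2 = [np.mean(np.diagonal(c2, offset=t)) for t in range(m)]
        tau1, tau2 = int(np.argmax(r1)), int(np.argmax(r2))        # Eqs. 7-8
        return 0.5 * (tau1 + tau2)                                 # Eq. 9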
[60] The variable τ_max indicates how many new lines have moved into the new frame. Accordingly, the top τ_max lines from the new frame F_{k+1} are moved into the reconstructed image matrix I:

I = \begin{bmatrix} F_{k+1}(1 : Int(τ_max), :) \\ I \end{bmatrix}    Eq. 10

as shown at 314.
[61] At 312 the delay factor τ_max is converted to an integer and checked to determine whether it is greater than zero. If τ_max > 0, then the top Int(τ_max) rows of the next frame F_{k+1} are appended to the most recently added fingerprint image data rows in the reconstructed image buffer, as indicated at step 314. If τ_max = 0, then no rows of the next frame F_{k+1} are appended to the reconstructed image buffer, because a zero reading indicates that there has been no movement of the finger between samples, and process flow proceeds to 316. If the user moves the finger at a detectable rate, the Int(τ_max) value will always be smaller than the number of rows M. It will be appreciated that the delay factor τ can be considered an image delay or offset as a finger is moved over sensor stripe 200 during fingerprint image sampling.
[62] If F_{k+1} is the last fingerprint image frame sampled by sensor module 104, then at 316 the fingerprint image reconstruction process terminates at 318. If not, the fingerprint image reconstruction process returns to 304. The fingerprint image frame F_{k+1} that has been processed becomes the prior fingerprint image frame F_k, and the next new fingerprint image frame F_{k+1} is processed, as described above, until the finger swipe is complete.
[64] Figure 6 shows that the top Int(τmax ) rows of next fingeφrint image frame
Fk+i (that is, new portion 502) have been added to reconstructed fingeφrint image buffer 500 as described above. There is a fingeφrint image data overlap portion 602 of M- Int(τmax ), in which the fingeφrint image data already stored in reconstructed fingeφrint image buffer 500 is retained. New data portion 502 and data overlap portion 602 are then defined as the extracted frame Fk to be used during the next iteration of the fingeφrint image reconstruction process described above. Once all of the frames have been integrated into reconstructed image buffer, a complete image / of the fϊngeφrint of the finger swiped across the surface of sensor stripe 200 will be stored in the reconstructed fingeφrint image buffer 500. This reconstructed fingeφrint image / is then available for subsequent clean-up processing by execution unit 114 to, for example, remove noise or distortion. After such clean-up image processing (if any), microprocessor module 102 then proceeds to use the. image / to form a fingeφrint image template, if operating in the enrollment mode, or to compare the image / to the library of existing fϊngeφrint image templates, if operating in the identification mode. .
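In outline, the buffer update of steps 312-314 and Figures 5-6 might look as follows. The sketch assumes the reconstructed image is kept as a Python list of row arrays with the most recently added rows first, so that the extracted frame is simply the first M rows; the names are illustrative, not taken from the specification.

    import numpy as np

    def update_reconstructed_image(image_rows, next_frame, tau_max, m):
        """Append Int(tau_max) new rows (Eq. 10) and return the new
        extracted frame, i.e. the M most recently added rows."""
        t = int(tau_max)
        if t > 0:
            # The top Int(tau_max) rows of F_{k+1} are new data.
            image_rows[:0] = [row.copy() for row in next_frame[:t]]
        return np.vstack(image_rows[:m])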
[65] Figure 7 illustrates a memory map of memory 116 in one embodiment of the present invention. Fingerprint image frame data received from sensor module 104 is held in image data buffer 702. As the fingerprint image frame data is processed as described above, the reconstructed fingerprint image is stored in reconstructed fingerprint image buffer 500. Execution unit 114 uses executable code in memory space 704 to enroll a fingerprint for later identification use, and uses executable code in memory space 706 to determine if an acquired fingerprint image matches an enrolled image. Fingerprint image templates that are built during the enrollment process and that are used during the identification process are stored in fingerprint image template buffer 708. Database management system code required for database module 120, accessible by execution unit 114, is stored in memory space 710. In one instance the database management code is adapted for use as embedded code in the portable electronic device (e.g., cellular telephone, personal digital assistant, etc.) that hosts memory 116. Template buffer 708 memory space and/or memory space 710 may reside in, for example, host flash memory, an external memory card, or other high capacity data storage device or devices. As illustrated in Figure 7, memory 116 may also contain operating system code in memory space 712 and one or more application programs in memory space 714 to control other peripheral devices (not shown) such as, by way of example, an access control system that restricts access to use of an electronic device or entry to a physical location. Application programs 714 may also include software commands that, when executed, control user interface module 122 to inform and instruct the user. Once users are enrolled, they may invoke an application program by merely swiping a finger over sensor 106. Communication module 118 code may reside in memory space 716 as an application program interface, or in memory space 712 as part of the operating system code. The memory map depicted in Figure 7 is illustrative of various memory configurations distributed within or among various memory types.
[66] Figure 8, assembled from Figures 8A-8C, is a flow diagram illustrating one embodiment of a method for acquiring a fingerprint in the enrollment mode for subsequent use in the identification mode. Each user to be identified is enrolled by acquiring one or more known fingerprint images and generating a template that will be managed by the database management system, together with other identifying information associated with the enrolled user, and stored in fingerprint template buffer 708. During the enrollment process, multiple images centered around the fingerprint image core area are acquired and used to construct the fingerprint image template used in the identification mode. The number of acquired fingerprint images will vary depending upon the degree of accuracy required for a particular application. In some embodiments at least three fingerprint images are required to generate a fingerprint image template (in one instance four images are used). The enrollment process begins at 802 in Figure 8A as user interface module 122 outputs an instruction to a user to swipe a finger across sensor stripe 200.
[67] At 804, the fingerprint image frames acquired during the user's finger swipe are transferred to memory 116 as described above. Once the first two fingerprint image frames are in memory, the enrollment process initiates the execution of executable code 704. In one embodiment, only a few of the most recently acquired fingerprint image frames are saved in image data buffer 702. In another embodiment, only the two most recently acquired fingerprint image frames are saved. Once fingerprint image frames have been used to add data to the reconstructed fingerprint image, they may be discarded.
[68] At 806, executable code 704 begins to reconstruct the fingerprint image from the plurality of fingerprint image frames being received into memory 116. The fingerprint image reconstruction begins in real time and is primarily directed to detecting overlapping fingerprint image frame portions and adding non-overlapping portions as new data to the reconstructed fingerprint image, as described above.
[69] Once the fingerprint image has been reconstructed, initial quality verification is performed at 808. The quality verification process applies a set of statistical rules to determine if the reconstructed image contains sufficient data and is capable of being further processed. In one embodiment, the image quality verification process uses two-stage statistical pattern recognition. Pattern recognition is well known in the art, and the particular choice is an engineering selection that will depend on whether the application requires high accuracy or fast analysis. In the first stage, a statistical database is generated from a collection of known good and bad images. The statistical features of the good and bad images are extracted, and a statistical model is created for both good and bad populations. Although the statistical database may be independently generated by each identification system, in other embodiments it may be preloaded into the identification system from an existing database structure. In the second stage, the same statistical features are extracted from the newly reconstructed fingerprint image and are compared to the good and bad statistical models. If the reconstructed fingerprint image has characteristics similar to those of a good image, enrollment continues. If the reconstructed fingerprint image has characteristics similar to those of a bad image, the image is considered to have unacceptable quality, the image is discarded, and the user is instructed to repeat the finger swipe, as shown at 810.
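The patent does not fix the statistical features or model form, so the following second-stage check is an assumption offered purely for illustration: each population is modeled as a Gaussian, and an image is kept only if its feature vector lies closer, in Mahalanobis distance, to the "good" population than to the "bad" one.

    import numpy as np

    def quality_ok(features, good_mean, good_cov_inv, bad_mean, bad_cov_inv):
        # Mahalanobis distance to each population's Gaussian model.
        dg = (features - good_mean) @ good_cov_inv @ (features - good_mean)
        db = (features - bad_mean) @ bad_cov_inv @ (features - bad_mean)
        return dg < db  # closer to the good population: accept the image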
[70] At 812 the verified, reconstructed image is cropped. Image cropping accounts for, e.g., very long images in which only a portion contains fingerprint data. It will be appreciated that passing a very large image to subsequent processing will consume system resources and result in decreased performance. Cropping strips off and discards non-core fingerprint and finger data. The cropped image will primarily contain data obtained from the core portion of the finger.
[71] At 814 the cropped image is pre-processed, e.g., to remove noise components or to enhance image quality. For example, a 2-D low-pass filter can be used to remove high frequency noise, or a 2-D median filter can remove spike-like interference.
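Using the SciPy image-filtering routines (an implementation choice, not one mandated by the text), the two suggested filters might be applied as:

    import numpy as np
    from scipy import ndimage

    def preprocess(image):
        # A 2-D median filter removes spike-like interference; a small
        # uniform (low-pass) filter then suppresses high-frequency noise.
        despiked = ndimage.median_filter(image, size=3)
        return ndimage.uniform_filter(despiked, size=3)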
[72] Referring to Figure 8B, at 816 the core area of the cropped and pre-processed fingerprint image is identified, because this area is generally accepted to be the most reliable for identification. Unlike the image generated by an area fingerprint sensor, the core area of the reconstructed fingerprint image cannot be guaranteed to be located in the neighborhood of the image center. Thus, the executable code scans the cropped fingerprint image to identify the core area. The core area typically exhibits one or more characteristic patterns that can be identified using methods such as orientation field analysis. Once the core area is located, the fingerprint image may be further cropped to eliminate non-essential portions of the image. The final cropped fingerprint image core area can be as small as a 64x64 pixel image.
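One common way to realize orientation field analysis is the gradient-based coherence measure sketched below. The patent names the technique but gives no formula, so this block is an assumption for illustration: coherence is high where ridges flow in one direction and drops near the core, where ridge orientation changes rapidly, making low-coherence blocks candidate core locations.

    import numpy as np

    def orientation_coherence(image, block=16):
        """Per-block coherence of the gradient orientation field."""
        gy, gx = np.gradient(image.astype(float))
        gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
        h, w = image.shape
        coh = np.zeros((h // block, w // block))
        for i in range(coh.shape[0]):
            for j in range(coh.shape[1]):
                sl = (slice(i * block, (i + 1) * block),
                      slice(j * block, (j + 1) * block))
                a, b, c = gxx[sl].sum(), gyy[sl].sum(), gxy[sl].sum()
                denom = a + b
                coh[i, j] = (np.sqrt((a - b) ** 2 + 4 * c ** 2) / denom
                             if denom > 0 else 0.0)
        return coh  # candidate core blocks are local minima of coherence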
[73] At 818, a second quality verification is performed to ensure that the cropped image of the core area is of sufficient size to enable identification. If the cropped image of the core area is too small, the image is discarded and another fingerprint image is acquired, as indicated at 812. Small images may occur due to very slow finger movement, during which only a small portion of the finger is scanned before scanning times out. Small images may also occur if the swiped finger is off center, so that the cropped image contains only a small amount of useful data. One exemplary criterion for small image rejection states that if more than 20 percent of the desired region around the core area is not captured, the image is rejected.
[74] If, however, the cropped fingerprint image core area passes the quality control verification at 818, an optional second order pre-processing is performed at 822. This second pre-processing performs any necessary signal processing functions that may be required to generate a fingerprint image template. Since the cropped image of the fingerprint image core area is relatively small compared to the data captured by sensor 106, system resource requirements are significantly reduced. When the pre-processing at 822 is completed, the image of the core region is stored in template buffer 708, as indicated at 824.
[75] As indicated in Figure 8C at 826, multiple fingerprint images are acquired. For each fingerprint image to be acquired, user interface module 122 outputs the appropriate instruction to the user. For example, the user may be instructed to swipe the right index finger (or, alternatively, any finger the user may choose) across the sensor, to repeat the swipe as necessary to obtain multiple high quality fingerprint images, and may be told that fingerprint image capture and enrollment has been successful. The fingerprint image acquisition loops between 804 and 826 until the specified number of fingerprint images has been acquired.
[76] Once the multiple fingerprint images are acquired, a fingerprint image template is generated, as indicated at 828. In one embodiment, a correlation filter technique is employed to form a composite of the multiple cropped fingerprint images. Multiple correlation filters may be used to construct a single fingerprint image template. An advantage of the correlation filter technique is that it requires a relatively small image size to achieve reliable identification performance. Use of the correlation filter technique on relatively small fingerprint image sizes during the enrollment and identification modes reduces system resource requirements relative to, for instance, area sensor requirements, in which captured fingerprint images tend to be relatively larger.
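The specification does not disclose the correlation filter design itself. As a rough illustration only, a composite template could be formed by averaging the 2-D spectra of the aligned core images; practical correlation-filter designs (e.g., MACE filters) instead solve a constrained optimization over the same spectra.

    import numpy as np

    def build_template(core_images):
        # Average the 2-D Fourier spectra of the aligned core images to
        # form a single frequency-domain composite template.
        spectra = [np.fft.fft2(img.astype(float)) for img in core_images]
        return np.mean(spectra, axis=0)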
[77] Finally, at 830 the newly generated fingerprint image template and associated user identifying data are stored to a database.
[78] Figure 9 is a flow diagram illustrating an embodiment of a process for the identification mode. If, for instance, motion detector 108 detects a finger and the enrollment mode has not been previously activated, the identification process begins. The identification process begins at 902, with the acquisition of fingerprint image frames at 904 and reconstruction of the fingerprint image to be used for identification at 906, in accordance with the invention as described above.
[79] At 908 the fingerprint image quality is verified. If the fingerprint image is poor, the image data is discarded and processing stops at 910. Consequently, the user remains unidentified and an application program continues to, e.g., deny access to one or more device functions.
[80] If the image quality is verified, at 912 the reconstructed fingerprint image is cropped to strip out peripheral image data that will not be used for identification. After cropping at 912, pre-processing at 914 removes, e.g., noise components or other introduced artifacts and non-linearities.
[81] At 916 the core area of interest of the acquired fingerprint image is extracted. At 918 the size of the fingerprint image's extracted core area of interest is verified. If the image size has degraded, the process moves to 910 and further processing is stopped. If, however, the image size is verified as adequate, at 920 a second image pre-processing is undertaken, and the necessary signal processing functions are performed to condition the extracted, cropped fingerprint image in the same manner as that used to generate the fingerprint image templates, as described above.

[82] At 922 a pattern matching algorithm is used to compare the extracted, cropped, and pre-processed fingerprint image with one or more stored fingerprint image templates. If a match is found (i.e., the core area of the fingerprint image acquired for identification is substantially similar to a stored fingerprint image template), the user who swiped his or her finger is identified as the one whose identification data is associated with the matching fingerprint image template. Consequently, an application program may, e.g., allow the identified user to access one or more device features, or to access an area. In one embodiment, a two-dimensional cross-correlation function between the extracted, cropped, pre-processed fingerprint image and the fingerprint image template is performed. If the comparison exceeds a pre-determined threshold, the two images are deemed to match. If a match is not found, the user who swiped his or her finger remains unidentified. Consequently, e.g., an application program continues to deny access to the unidentified user.
[83] Various pattern matching methods may be used. For example, a correlation filter may be used, with the peak-to-sidelobe ratio (PSR) of the 2-D correlation function compared to a pre-specified threshold. If the PSR value is larger than the threshold, a match is declared. If the PSR value is smaller than the threshold, a mismatch is declared.
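The PSR test might be computed as in the following sketch; the frequency-domain correlation, the exclusion radius around the peak, and the threshold value are all illustrative assumptions rather than figures taken from the text.

    import numpy as np

    def psr(plane, exclude=5):
        """Peak-to-sidelobe ratio of a 2-D correlation output."""
        pi = np.unravel_index(np.argmax(plane), plane.shape)
        peak = plane[pi]
        mask = np.ones(plane.shape, dtype=bool)
        mask[max(pi[0] - exclude, 0): pi[0] + exclude + 1,
             max(pi[1] - exclude, 0): pi[1] + exclude + 1] = False
        side = plane[mask]
        return (peak - side.mean()) / side.std()

    def match(core_image, template, threshold=10.0):
        # Correlate against the frequency-domain template, then threshold.
        corr = np.fft.ifft2(np.fft.fft2(core_image.astype(float))
                            * np.conj(template)).real
        return psr(corr) > threshold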
[84] Fingerprint image processing and identification in accordance with the present invention allows sensor system 100 to be used in many applications, since such processing and identification are accurate, reliable, efficient, and inexpensive. Fingerprints are accepted as a reliable biometric way of identifying people. The present invention provides accurate fingerprint identification, as illustrated by the various cropping, pre-processing, template generation, and image comparison processes described above. Further, the system resource requirements (e.g., memory, microprocessor cycles) of the various embodiments of the present invention are relatively small. As a result, user enrollment and subsequent identification tasks are executed in real time and with small power consumption.

[85] Figure 10 is a diagrammatic perspective view of an illustrative electronic device 1002 in which fingerprint identification system 100 is installed. As illustrated in Figure 10, electronic device 1002 is portable and fingerprint identification system 100 operates as a self-contained unit within electronic device 1002. In other illustrative embodiments discussed below, electronic device 1002 and fingerprint identification system 100 are communicatively linked to one or more remote stations. Examples of portable electronic devices 1002 include cellular telephone handsets, personal digital assistants (hand-held computers that enable personal information to be organized), laptop computers (e.g., VAIO manufactured by Sony Corporation), portable music players (e.g., WALKMAN devices manufactured by Sony Corporation), digital cameras, camcorders, and portable gaming consoles (e.g., PSP manufactured by Sony Corporation). Examples of fixed electronic devices 1002 are given below in the text associated with Figure 11.
[86] As shown in Figure 10, sensor stripe 202 may be located in various positions on electronic device 1002. In some instances sensor stripe 202 is positioned in a shallow channel 1004 to assist the user in properly moving the finger over sensor stripe 202. Figure 10 shows the channel 1004 and sensor stripe 202 combination variously positioned on top 1006, side 1008, or end 1010 of electronic device 1002. The channel 1004 and sensor stripe 202 are ergonomically positioned so as to allow the user to easily swipe his or her finger but not interfere with device functions such as illustrative output display 1012 or illustrative keypad 1014. In some instances more than one sensor stripe 202 may be positioned on a single electronic device 1002 (e.g., to allow for convenient left- or right-hand operation, or to allow for simultaneous swipes of multiple fingers by one or more users).
[87] Figure 11 is a diagrammatic view of an illustrative system 1100 that includes electronic device 1002, one or more devices or computing platforms remote from electronic device 1002 (collectively termed a "remote station"), and fingerprint identification system 100. In some instances, fingerprint identification system 100 is contained within electronic device 1002. In other instances, fingerprint identification system 100 is distributed among two or more remote devices. For example, as shown in Figure 11, electronic device 1002 communicates via link 1102 (e.g., wired, wireless) with remote station 1104. Remote station 1104 may include or perform one or more of the functions described above for microprocessor module 102. For instance, a large number of fingerprint image templates may be stored and managed by database 1106 in a nation-wide identification system (e.g., one in which multiple electronic devices 1002 access station 1104 to perform fingerprint identifications). Figure 11 further illustrates embodiments in which electronic device 1002 communicates via communications link 1108 (e.g., wired, wireless) with a second electronic device 1110. In such embodiments remote station 1104 and the second electronic device 1110 may communicate directly via communications link 1112 (e.g., wired, wireless). In some instances, remote station 1104 is a computing platform in second electronic device 1110. Several examples illustrate such functions.
[88] In one case, electronic device 1002 is fixed on a wall. A user swipes a finger over sensor unit 106 in electronic device 1002, and the fingerprint swipe information is sent via communications link 1102 to remote station 1104. Remote station 1104 receives the sampled fingerprint image frames, reconstructs and processes the fingerprint image, and then compares the user's fingerprint with fingerprint image templates stored in database 1106. If a match is found, remote station 1104 communicates with second electronic device 1110, either directly via communications link 1112 or indirectly via communications link 1102, electronic device 1002, and communications link 1108, so as to authorize second electronic device 1110 to open a door adjacent the wall on which electronic device 1002 is fixed.
[89] In another case, a similar user identification function matches a user with a credit or other transactional card (e.g., FELICA manufactured by Sony Corporation) to facilitate a commercial transaction. Remote station 1104 compares card information input at second electronic device 1110 and a user fingerprint image input at electronic device 1002 to determine if the transaction is authorized.
[90] Other illustrative applications include use of various fingerprint identification system 100 embodiments in law enforcement (e.g., police, department of motor vehicles), physical access control (e.g., building or airport security, vehicle access and operation), and data access control (e.g., commercial and non-commercial personal or financial records). Electronic device 1002 may be a peripheral device communicatively coupled with a personal computer (e.g., stand alone, or incorporated into a pointing device such as a mouse).
[91] Although the invention has been described with respect to specific embodiments thereof, these embodiments are merely illustrative, and not restrictive of the invention.
[92] The methods described herein may be implemented in any suitable programming language, including C, C++, Java, assembly language, etc. Different programming techniques can be employed, such as procedural or object oriented. The routines can execute on a single processing device or on multiple processors. Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different embodiments. In some embodiments, multiple steps shown as sequential in this specification can be performed at the same time. The sequence of operations described herein can be interrupted, suspended, or otherwise controlled by another process, such as an operating system, kernel, etc. The routines can operate in an operating system environment or as stand-alone routines occupying all, or a substantial part, of the system processing.
[93] In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the present invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention.
[94] As used herein, "memory" for purposes of embodiments of the present invention may be any medium that can contain, store, communicate, propagate, or transport a program or data for use by or in connection with an instruction execution system, apparatus, or device. The memory can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, propagation medium, or computer memory.
[95] Reference throughout this specification to "one embodiment," "an embodiment," or "a specific embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention and not necessarily in all embodiments. Thus, respective appearances of phrases such as "in one embodiment," "in an embodiment," or "in a specific embodiment" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any specific embodiment of the present invention may be combined in any suitable manner with one or more other embodiments. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered as part of the spirit and scope of the present invention.
[96] Embodiments of the invention may be implemented by using a programmed general purpose digital computer, application specific integrated circuits, programmable logic devices, or field programmable gate arrays; optical, chemical, biological, quantum or nanoengineered systems, components and mechanisms may also be used. In general, the functions of the present invention can be achieved by any means as is known in the art. Distributed or networked systems, components, and circuits can be used. Communication, or transfer, of data may be wired, wireless, or by any other means.
[97] It will also be appreciated that one or more of the elements depicted in the figures can also be implemented in a more separated or integrated manner, or even removed or rendered inoperable in certain cases, as is useful in accordance with a particular application. It is also within the spirit and scope of the present invention to implement a program or code that can be stored in a machine-readable medium to permit a computer to perform any of the methods described above.
[98] Additionally, any signal arrows in the figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted. Furthermore, the term "or" as used herein is generally intended to mean "and/or" unless otherwise indicated. Combinations of components or steps will also be considered as being noted where terminology is foreseen as rendering the ability to separate or combine unclear.
[99] As used in the description herein and throughout the claims that follow, "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[100] The foregoing description of illustrated embodiments of the present invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the present invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the present invention in light of the foregoing description of illustrated embodiments of the present invention and are to be included within the spirit and scope of the present invention. Thus, while the present invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of embodiments of the invention will be employed without a corresponding use of other features, without departing from the scope and spirit of the invention as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the present invention. It is intended that the invention not be limited to the particular terms used in the following claims and/or to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include any and all embodiments and equivalents falling within the scope of the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method for acquiring biometric information comprising the steps of:
acquiring a plurality of frames of data representing a sampled output of a swipe fingerprint sensor;
assembling said plurality of frames to form an image of a fingerprint;
selecting a portion of said image of a fingerprint having the core of said fingerprint;
comparing said core portion of said image of a fingerprint to a template using a correlation filter; and
determining if said core portion of said image of a fingerprint matches said template.
(An illustrative sketch of the assembling step appears after the claims.)
2. A system to implement the method of Claim 1.
3. A computer to implement the method of Claim 1.
4. A computer-readable medium having instructions for assisting in the implementation of the method of Claim 1.
5. The method of claim 1 wherein said method further comprises the step of removing artifacts and noise from said image of a fingerprint before said comparing and determining steps.
6. The method of claim 5 wherein said method further comprises the step of removing a portion of said image of a fingerprint, said removed portion comprising a portion of the non-fingerprint portion of a finger.
7. The method of claim 6 wherein said method further comprises an enrollment step.
8. The method of claim 7 wherein said enrollment step includes the steps of:
acquiring a plurality of images of a fingerprint;
cropping each of said images of a fingerprint;
removing noise and artifacts from each of said cropped images of said fingerprint;
extracting a core portion from each of said cropped images of said fingerprint; and
combining said core portions acquired from each of said cropped images of said fingerprint to form a template and saving said template to a database.
(An illustrative enrollment sketch appears after the claims.)
9. The method of claim 8 wherein said method further comprises the step of associating said template with information identifying the person associated with said finger.
10. A system to implement the method of Claim 9.
11. A computer to implement the method of Claim 9.
12. A computer-readable medium having instructions for assisting in the implementation of the method of Claim 9.
13. A fingerprint system comprising:
a microprocessor;
a swipe fingerprint sensor module coupled to said microprocessor, said swipe fingerprint sensor module having:
a motion detector for detecting the presence of a finger and the direction of a finger swipe;
a swipe sensor for generating an analog signal representing a fingerprint;
an analog-to-digital converter coupled to said swipe sensor, said analog-to-digital converter adapted to sample said analog signal to generate a plurality of frames of data representing said fingerprint; and
an image buffer for temporarily storing sampled frames of data; and
a database coupled to said microprocessor having a plurality of templates and identifying information;
said microprocessor adapted to execute code for performing an enrollment process for generating a template for storage in said database in a first instance and for performing an identification process for comparing a fingerprint to said plurality of templates in a second instance; and
said microprocessor adapted to execute code for performing a selected function when a fingerprint matches one of said templates.
(An illustrative structural sketch appears after the claims.)
14. The system of claim 13 further comprising an interface means coupling said microprocessor to said swipe fingerprint sensor module.
15. The system of claim 13 wherein said image buffer stores a single line of image data obtained from said swipe sensor.
16. The system of claim 13 wherein said microprocessor is adapted to execute a cropping process to limit image data to a portion of a finger having a fingerprint when executing code to perform said identification process.
17. The system of claim 16 wherein said microprocessor is adapted to filter the cropped image data when executing code to perform said identification process.
18. The system of claim 17 wherein said microprocessor is adapted to extract an area of interest from said cropped image data when executing code to perform said identification process.
19. The system of claim 18 wherein said microprocessor is adapted to condition the extracted portion of said cropped image data when executing code to perform said identification process.
20. The system of claim 19 wherein said microprocessor is adapted to collect a plurality of images of said fingerprint when executing code to perform said identification process and to combine said plurality of images into a template.
21. The system of claim 20 wherein at least three images are combined to form a template.
22. A method for authenticating a fingerprint image comprising the steps of:
forming an image of a fingerprint in memory;
selecting a portion of said image of a fingerprint having the core of said fingerprint; and
executing a pattern-matching algorithm to determine if said core portion of said image of a fingerprint matches a template.
23. The method of claim 22, wherein said pattern-matching algorithm executing step further comprises the steps of:
calculating a two-dimensional FFT of said image of a fingerprint;
calculating a two-dimensional cross-correlation function between said image of a fingerprint and a template;
calculating a peak-to-side-lobe ratio (PSR) of the two-dimensional cross-correlation function results; and
comparing said PSR to a pre-specified threshold to determine if said image of a fingerprint matches said template.
(An illustrative matching sketch appears after the claims.)
24. The method of claim 23, wherein said template is generated from a plurality of images of a fingerprint during an enrollment process.
25. A system to implement the method of Claim 23.
26. A computer to implement the method of Claim 23.
27. A computer-readable medium having instructions for assisting in the implementation of the method of Claim 23.
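
The code sketches below are illustrative renderings of the claimed techniques, in Python with NumPy. First, the assembling step of claim 1: this is a minimal sketch assuming purely vertical finger motion, estimating the row overlap between consecutive frames by mean-squared agreement. It simplifies the motion estimation described in the specification, and the name assemble_frames and the max_overlap parameter are hypothetical.

```python
import numpy as np

def assemble_frames(frames, max_overlap=7):
    """Stitch swipe-sensor frames into one fingerprint image by estimating
    how many leading rows of each new frame duplicate the trailing rows of
    the image assembled so far (vertical-only motion assumed)."""
    image = np.asarray(frames[0], dtype=np.float64)
    for frame in frames[1:]:
        frame = np.asarray(frame, dtype=np.float64)
        best_k, best_err = 0, np.inf
        # Try each candidate overlap of k rows and keep the best agreement.
        for k in range(1, min(max_overlap, frame.shape[0], image.shape[0]) + 1):
            err = np.mean((image[-k:, :] - frame[:k, :]) ** 2)
            if err < best_err:
                best_k, best_err = k, err
        image = np.vstack([image, frame[best_k:, :]])  # append only the new rows
    return image
```

Calling assemble_frames on the list of frames sampled during a swipe yields a single reconstructed image for the later cropping, core-extraction, and matching steps.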
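The enrollment steps of claim 8, together with the three-image minimum of claim 21, can be sketched as below. Averaging the core crops is one plausible reading of the "combining" step rather than a rule stated in the specification, and it assumes the crops have already been registered to a common size; enroll_template is a hypothetical name.

```python
import numpy as np

def enroll_template(core_images):
    """Combine cropped, denoised core images into a single template."""
    assert len(core_images) >= 3  # claim 21: at least three images per template
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in core_images])
    template = stack.mean(axis=0)      # element-wise average of the core crops
    return template - template.mean()  # zero mean suits correlation filtering
```

The resulting template would then be saved to the database alongside the identifying information recited in claim 9.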
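The system of claim 13 can be pictured structurally as follows. Every class and field name here is hypothetical, the hardware elements (motion detector, swipe sensor, analog-to-digital converter) are stubbed rather than modeled, and a dictionary stands in for the template database.

```python
from dataclasses import dataclass, field

@dataclass
class SwipeSensorModule:
    motion_detector: object = None  # reports finger presence and swipe direction
    swipe_sensor: object = None     # produces the analog fingerprint signal
    adc: object = None              # samples the analog signal into data frames
    image_buffer: list = field(default_factory=list)  # temporary frame storage

@dataclass
class FingerprintSystem:
    sensor: SwipeSensorModule
    database: dict = field(default_factory=dict)  # identity -> template

    def enroll(self, identity, template):
        # First instance: store a newly generated template.
        self.database[identity] = template

    def identify(self, core_image, matcher):
        # Second instance: compare the print against every stored template.
        for identity, template in self.database.items():
            _, accepted = matcher(core_image, template)
            if accepted:
                return identity
        return None
```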
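Finally, the matching steps of claim 23 map directly onto a frequency-domain cross-correlation followed by a peak-to-side-lobe ratio (PSR) test. In this sketch the threshold of 10.0 and the exclusion window of 5 pixels around the correlation peak are illustrative assumptions; the claim leaves the threshold pre-specified but unstated.

```python
import numpy as np

def psr_match(core_image, template, threshold=10.0, exclude=5):
    """Cross-correlate a core image with a template via 2-D FFTs and accept
    the match if the peak-to-side-lobe ratio exceeds the threshold."""
    core = np.asarray(core_image, dtype=np.float64)
    tmpl = np.asarray(template, dtype=np.float64)
    # Multiplying one FFT by the conjugate of the other gives correlation.
    corr = np.real(np.fft.ifft2(np.fft.fft2(core) *
                                np.conj(np.fft.fft2(tmpl, s=core.shape))))
    peak = corr.max()
    py, px = np.unravel_index(corr.argmax(), corr.shape)
    # Side lobe = the correlation plane with a window around the peak masked.
    mask = np.ones(corr.shape, dtype=bool)
    mask[max(py - exclude, 0):py + exclude + 1,
         max(px - exclude, 0):px + exclude + 1] = False
    side = corr[mask]
    psr = (peak - side.mean()) / (side.std() + 1e-12)
    return psr, psr >= threshold  # score and the match decision
```

psr_match returns both the PSR and the accept/reject decision, so the same routine can serve as the matcher argument in the identification loop sketched above.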
PCT/US2005/009161 2004-04-23 2005-03-18 System for fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor WO2005109321A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US56525604P 2004-04-23 2004-04-23
US56487504P 2004-04-23 2004-04-23
US60/565,256 2004-04-23
US60/564,875 2004-04-23
US10/927,599 US7212658B2 (en) 2004-04-23 2004-08-25 System for fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor
US10/927,599 2004-08-25
US10/927,178 US7194116B2 (en) 2004-04-23 2004-08-25 Fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor
US10/927,178 2004-08-25

Publications (1)

Publication Number Publication Date
WO2005109321A1 true WO2005109321A1 (en) 2005-11-17

Family

ID=35320410

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2005/009090 WO2005109320A1 (en) 2004-04-23 2005-03-18 Fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor
PCT/US2005/009161 WO2005109321A1 (en) 2004-04-23 2005-03-18 System for fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2005/009090 WO2005109320A1 (en) 2004-04-23 2005-03-18 Fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor

Country Status (1)

Country Link
WO (2) WO2005109320A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102395995B (en) * 2009-04-13 2014-04-23 富士通株式会社 Biometric information registration device, biometric information registration method, biometric authentication device and biometric authentication method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7197168B2 (en) * 2001-07-12 2007-03-27 Atrua Technologies, Inc. Method and system for biometric image assembly from multiple partial biometric frame scans
US20030123714A1 (en) * 2001-11-06 2003-07-03 O'gorman Lawrence Method and system for capturing fingerprints from multiple swipe images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6681034B1 (en) * 1999-07-15 2004-01-20 Precise Biometrics Method and system for fingerprint template matching
US20030202687A1 (en) * 2002-04-29 2003-10-30 Laurence Hamid Method for preventing false acceptance of latent fingerprint images
US20050018925A1 (en) * 2003-05-29 2005-01-27 Vijayakumar Bhagavatula Reduced complexity correlation filters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG W. ET AL: "Core-Based Structure Matching Algorithm of Fingerprint Verification", IEEE, vol. 1, 2002, pages 70 - 74, XP010613277 *

Also Published As

Publication number Publication date
WO2005109320A1 (en) 2005-11-17

Similar Documents

Publication Publication Date Title
US7212658B2 (en) System for fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor
US7194116B2 (en) Fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor
US7054470B2 (en) System and method for distortion characterization in fingerprint and palm-print image sequences and using this distortion as a behavioral biometrics
CN103003826B (en) Biometric authentication apparatus and method
US5982913A (en) Method of verification using a subset of claimant's fingerprint
JP4785985B2 (en) Sequential image alignment
US8126215B2 (en) Registration and collation of a rolled finger blood vessel image
EP1339008B1 (en) Authentication method, and program and apparatus therefor
US20030123714A1 (en) Method and system for capturing fingerprints from multiple swipe images
Okumura et al. A study on biometric authentication based on arm sweep action with acceleration sensor
US8824746B2 (en) Biometric information processing device, biometric-information processing method, and computer-readable storage medium
US11232280B2 (en) Method of extracting features from a fingerprint represented by an input image
EP1239403A2 (en) Method and system for identity verification using multiple simultaneously scanned biometric images
WO2004061752A2 (en) Fingerprint security systems in handheld electronic devices and methods therefor
JP2005531935A (en) Method and system for biometric image assembly from multiple partial biometric frame scans
KR100641434B1 (en) Mobile station having fingerprint recognition means and operating method thereof
WO2005008568A1 (en) Method for acquiring a fingerprint image by sliding and rolling a finger
EP2148295A1 (en) Vein pattern management system, vein pattern registration device, vein pattern authentication device, vein pattern registration method, vein pattern authentication method, program, and vein data structure
WO2007018545A2 (en) Protometric authentication system
US20060045315A1 (en) Method and apparatus for acquiring biological information
Han et al. Embedded palmprint recognition system on mobile devices
JP3902473B2 (en) Identification method using biometric information
JPH10275233A (en) Information processing system, pointing device and information processor
WO2005109321A1 (en) System for fingerprint image reconstruction based on motion estimate across a narrow fingerprint sensor
Sanchez-Reillo et al. Fingerprint verification using smart cards for access control systems

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: The EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 EP: PCT application non-entry in European phase