CA1325480C - Image recognition system and method - Google Patents
Image recognition system and method
- Publication number
- CA1325480C
- Authority
- CA
- Canada
- Prior art keywords
- image
- pattern
- signature
- recited
- signatures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/35—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
- H04H60/45—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying users
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42201—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
Abstract
IMAGE RECOGNITION SYSTEM AND METHOD
Abstract of the Disclosure
An image recognition system and method are provided for identifying a pattern of a plurality of predetermined patterns in a video image. A plurality of feature image signatures are stored corresponding to each of the plurality of predetermined patterns. A universal feature image signature is stored that includes each of the stored feature image signatures. A predefined series of portions of a captured video image is sequentially compared with the universal feature image signature to identify matching portions. Each of the identified matching video image portions is compared with the stored feature image signatures to identify the predetermined pattern.
Description
IMAGE RECOGNITION SYSTEM AND METHOD

BACKGROUND OF THE INVENTION

A. Field of the Invention

The present invention relates generally to image recognition systems, and more particularly to image recognition systems and methods for use with television audience measurement and marketing data collection systems.
B. Description of the Prior Art

Manual systems for determining the viewing/listening habits of the public are prone to inaccuracies resulting from the entry of erroneous data that may be intentionally or unintentionally entered and are slow in acquiring data.

United States Patent No. 3,056,135 to Currey et al., issued September 25, 1962 and assigned to the same assignee as the present application, describes a method and apparatus for automatically determining the listening habits of wave signal receiver users. The method disclosed in Currey et al. provides a record of the number and types of persons using a wave signal receiver by monitoring the operational conditions of the receiver and utilizing both strategically placed switches, for counting the number of persons entering, leaving and within a particular area, and a photographic recorder for periodically recording the composition of the audience. A mailable magazine provides a record of both the audience composition and the receiver operation information for manual processing by a survey organization. Thus a disadvantage is that acquiring data is slow, and further many viewing audience members object to being identified from the photographic record.
United States Patent No. 4,644,509 to Kiewit et al., issued February 17, 1987 and assigned to the same assignee as the present application, discloses an ultrasonic, pulse-echo method and apparatus for determining the number of persons in the audience and the composition of the audience of a radio receiver and/or a television receiver. First and second reflected ultrasonic wave maps of the monitored area are collected, first without people and second with people who may be present in the monitored area. The first collected background defining map is subtracted from the second collected map to obtain a resulting map. The resulting map is processed to identify clusters having a minimum intensity. A cluster size of the thus identified clusters is utilized to identify clusters corresponding to people in an audience. While this arrangement is effective for counting viewing audience members, individual audience members can not be identified.

Various image recognition arrangements and systems are known for recognizing patterns within a captured video image. However, the conventional pattern recognition systems are impractical and uneconomical for identifying individual audience members of a viewing audience due to the vast information storage and computing requirements that would be needed in the conventional systems. It is desirable to provide an image recognition system having the capability to identify individual members of the viewing audience.
SUMMARY OF THE INVENTION

It is an object of the present invention to provide a method and system for determining the viewing habits of the public that overcome many of the disadvantages of the prior art systems.

It is an object of the present invention to provide an image recognition method and system for identifying predetermined individual members of a viewing audience in a monitored area.
It is an object of the present invention to provide an image recognition method and system for identifying a pattern of a plurality of predetermined patterns in a video image.

It is an object of the present invention to provide an image recognition method and system for identifying a pattern of a plurality of predetermined patterns in a video image utilizing improved feature signature extraction and storage techniques.
Therefore, in accordance with a preferred embodiment of the invention, there are provided an image recognition method and system for identifying a pattern of a plurality of predetermined patterns in a video image. A plurality of feature image signatures are stored corresponding to each of the plurality of predetermined patterns. A universal feature image signature is stored that includes each of the stored feature image signatures. A predefined series of portions of a captured video image is sequentially compared with the universal feature image signature to identify matching portions. Each of the identified matching video image portions is compared with the stored feature image signatures to identify the predetermined pattern.

In accordance with a feature of the invention, each of the plurality of feature image signatures and the universal feature image signature are stored in a distinct memory space of a predetermined capacity. The feature image signatures are generated by processing a plurality of video images of the pattern to be identified. A signature from each of the processed video images is extracted and stored in the corresponding predetermined memory space for the particular pattern and in the predetermined memory space for the universal feature image signature.
A feature image signature is stored corresponding to each predetermined individual member's face of a viewing audience. An audience scanner includes audience locating circuitry for locating individual audience members in the monitored area. A video image is captured and processed for a first one of the located individual audience members. A portion of the processed video image is identified that matches a stored universal image signature that includes each of the feature image signatures. The identified portion is compared with the stored feature image signatures to identify the audience member. These steps are repeated to identify all of the located individual audience members in the monitored area.
According to one aspect, the present invention provides an image recognition system for identifying a predetermined pattern of a plurality of predetermined patterns in a video image comprising: means for storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the plurality of predetermined patterns; means for storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; means for sequentially comparing a predefined series of portions of the video image with said universal pattern image signature and for identifying matching video image portions; and means for comparing each of said identified matching video image portions with said stored pattern image signatures to identify the predetermined pattern.

According to another aspect, the present invention provides a method of identifying a predetermined pattern of a plurality of predetermined patterns in a video image comprising the steps of: storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the plurality of predetermined patterns; storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; sequentially comparing a predefined series of portions of the video image with said universal pattern image signature and identifying matching video image portions; and comparing each of said identified matching video image portions with said stored pattern image signatures to identify the predetermined pattern.
According to yet another aspect, the present invention provides a method of generating a pattern image signature for use in an image recognition system for identifying a predetermined pattern of a plurality of predetermined patterns in a video image, the image recognition system including a distinct pattern image signature memory space for storing each pattern image signature corresponding to one of the plurality of predetermined patterns and a universal image memory space for storing a universal pattern image signature, the universal pattern image signature corresponding to a composite signature of each of the pattern image signatures; said method comprising the steps of: capturing a video image of the predetermined pattern; processing said captured video image to provide a digitized image signal; identifying a feature value from said digitized image signal for each of a plurality of predefined feature positions; identifying a memory location corresponding to said identified feature value for each of a plurality of predefined feature positions; storing a binary digit one in said identified memory locations in the pattern image signature memory space corresponding to the predetermined pattern; and storing a binary digit one in corresponding memory locations in said universal image memory space.
According to still another aspect, the present invention provides an image recognition system for identifying predetermined individual members of a viewing audience in a monitored area: memory means for storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the predetermined individual members; means for storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; audience scanning means for locating individual members in the monitored area; means for capturing a video image of each said located individual member in the monitored area; means for sequentially comparing a predefined series of portions of each said captured video image with said universal pattern image signature and for identifying matching video image portions; and means for comparing each of said identified matching video image portions with said stored pattern image signatures to identify at least one of the predetermined individual members.
According to a further aspect, the present invention provides a method of identifying predetermined individual members of a viewing audience in a monitored area: storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the predetermined individual members; storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; scanning the monitored area and generating a temperature representative signal of the monitored area; processing said generated temperature representative signal and providing an individual members direction signal corresponding to located individual members; capturing a video image of each located individual member responsive to said individual members direction signal in the monitored area; sequentially comparing a predefined series of portions of the captured video image with said universal pattern image signature and identifying matching video image portions; and comparing each of said identified matching video image portions with said stored pattern image signatures to identify at least one of the predetermined individual members.
DESCRIPTION OF THE DRAWING
These and other objects and advantages of the present invention will become readily apparent upon consideration of the following detailed description and attached drawing wherein:
FIG. 1 is a block diagram of the image recognition system according to the present invention;

FIG. 2 is a perspective view partly broken away to show interior details of an audience scanner of the image recognition system of FIG. 1;

FIG. 3 is a block diagram of an audience location subsystem of the image recognition system of FIG. 1;

FIG. 4 is a block diagram of a distance measurement subsystem of the image recognition system of FIG. 1;

FIG. 5 is a block diagram of a control command processor subsystem of the image recognition system of FIG. 1;

FIG. 6 is a block diagram of a learning functional portion of an audience recognition subsystem of the image recognition system of FIG. 1;

FIG. 6A is a graphical representation of a binary subimage and feature identifying logic for extracting and storing an image signature of the image recognition system of FIG. 1;

FIG. 7 is a block diagram of a recognition functional portion of the audience recognition subsystem of the image recognition system of FIG. 1;
FIGS. 8A - 8M are flow charts illustrating the logical steps performed by the image recognition system of FIG. 1.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Referring now to the drawing, with particular attention to FIG. 1, there is illustrated a block diagram of a new and improved image recognition system according to the invention generally designated by the reference numeral 10. While the image recognition system 10 is depicted and generally described herein for use with a television receiver to identify individual members of a viewing audience, the principles of the present invention are also applicable to other image recognition systems.
As its major components, the image recognition system 10 includes an audience scanner 12 for scanning and capturing an image of the viewing audience members within a monitored area, and a control command processor subsystem 14 for performing control operations and for storing and processing captured images. A data transfer device 16 is used for periodically transferring stored data to a central computer (not shown) of the television audience measurement and/or marketing data collection systems. The image recognition system 10 includes an audience location subsystem 18 illustrated in FIG. 3 for locating the audience members within the monitored area, a distance measurement subsystem 20 illustrated in FIG. 4 for identifying the distance between audience members and the audience scanner 12, an illumination controller 22 and a scanner controller 24 for providing illumination and motor control signals to the audience scanner 12. An audience recognition subsystem 26 for learning and for recognizing feature image signatures of the audience members is illustrated in FIGS. 6 and 7.
Referring also to FIG. 2, the audience scanner 12 includes a video camera 28 providing a video image signal at a line 28A that is applied to the audience recognition subsystem 26. An infrared video camera, for example, such as a Model CCD1200 IR Microcam, manufactured and sold by Electrophysics Corporation of Nutley, New Jersey, may be employed for the video camera 28. An infrared sensor 30 provides a sensed infrared signal at a line 30A that is applied to the audience location subsystem 18. A parallel opposed dual pyroelectric infrared detector used in conjunction with an optic focusing device including a pair of fixed surface mirrors and a Fresnel lens may be used for the infrared sensor 30, for example, such as an Eltec Model 4192 and an Eltec Model 826C manufactured and sold by Eltec Instruments, Inc. of Daytona Beach, Florida. An ultrasound transducer 32, such as a 50 kHz electrostatic transducer, for transmitting and for receiving ultrasonic signals provides a distance pulse echo signal at a line 32A that is applied to the distance measurement subsystem 20.

A pair of infrared illumination devices 34 for illuminating the monitored area are operatively controlled by the illumination controller 22. A Model IRL200 infrared room illuminator manufactured and sold by Electrophysics Corporation of Nutley, New Jersey, may be employed for the illumination devices 34, although various illumination devices such as infrared lasers, light emitting diodes or a filtered flash lamp can be used. A scanner drive 36, such as a stepping motor, is operatively controlled by the scanner controller 24 for stepwise angular rotation of the video camera 28 for scanning the monitored area.
FIG. 3 provides a block diagram of an audience location subsystem 18 of the image recognition system 10. The sensed voltage signal output of the infrared sensor 30 at line 30A corresponds to the temperature distribution of the monitored area. The sensed infrared signal at line 30A is applied to a preamplifier device 38. The amplified infrared signal is applied to a low pass filter 40 for providing a filtered infrared signal that is applied to an analog-to-digital A/D converter 42 which generates a digital representation of the processed infrared signal. The digitized signal is applied to an audience location computation logic circuit 44 to identify directions within the monitored area corresponding to the possible locations of individual audience members. The identified directions signal at a line 46 is applied to the control command processor subsystem 14.
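The direction-identifying computation of logic circuit 44 is not detailed in the text. As a rough illustrative sketch (the function name, array representation and threshold value are all assumptions, not from the patent), the digitized sensor output can be treated as temperature samples indexed by scan direction and compared against a background scan:

```python
def locate_members(samples, background, threshold=5.0):
    """Return the indices (scan directions) where the sensed temperature
    rises above the background scan by more than a threshold, suggesting
    the presence of a warm body in that direction.  The threshold value
    and one-sample-per-direction model are illustrative assumptions."""
    directions = []
    for i, (sensed, ambient) in enumerate(zip(samples, background)):
        if sensed - ambient > threshold:
            directions.append(i)
    return directions
```

Each returned index would correspond to one entry of the identified directions signal at line 46.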
Alternatively, the separate audience location computation logic circuit 44 may be eliminated, with the digitized signal output of the A/D converter 42 applied to the control command processor subsystem 14. Then the direction identifying function of the computation logic circuit 44 is performed by the control command processor subsystem 14.

The control command processor subsystem 14 utilizes the identified directions signal at line 46 from the audience location subsystem 18 to initiate operation of the distance measurement subsystem 20.
FIG. 4 provides a block diagram of a distance measurement subsystem 20 of the image recognition system 10. An ultrasound range module 48 drives the ultrasound transducer 32 for transmitting an ultrasonic burst signal and for receiving an echo signal responsive to an enable or initiate input signal applied by a transmitter controller device 50. An output echo signal of the ultrasound range module 48 is coupled to the control command processor subsystem 14 via a distance measurement logic circuit 52 which converts the echo signal to a suitable format for use by the control command processor subsystem 14. A sonar ranging module, for example, such as an integrated circuit device type SN28827 manufactured and sold by Texas Instruments, may be used for the ultrasound range module 48. Bidirectional communications with the control command processor subsystem 14 at a line 54 include the processed echo signal output of the distance measurement logic circuit 52 and an input control signal to the transmitter controller 50.

The processed echo signal representative of distance between the scanner 12 and the located individual audience member is utilized by the control command processor subsystem 14 for adjusting the focus and zooming functions of the video camera 28.
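The conversion performed by the distance measurement logic circuit 52 is not spelled out in the text. A conventional pulse-echo range calculation, shown here as an assumed sketch rather than the patent's circuit, halves the round-trip time of flight:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at roughly 20 C; an assumed constant


def echo_to_distance(round_trip_s):
    """Convert a pulse-echo round-trip time in seconds to the one-way
    distance in metres between the transducer and the audience member.
    The burst travels out and back, hence the division by two."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_s / 2.0
```

For example, a 20 ms echo delay would place the audience member about 3.4 m from the scanner, a plausible living-room distance for focusing the camera.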
FIG. 5 provides a block diagram representation of the control command processor subsystem 14 of the image recognition system 10. The control command processor subsystem 14 includes a central processing unit 56, such as an Intel 80286 high performance 16-bit microprocessor with integrated memory management and adapted for multi-tasking systems, and an associated memory device 58. The central processing unit 56 is programmable to perform the control and signal processing functions and includes, in known manner, asynchronous input signal timing and clock control bus timing functions. An interface device 60 is provided in conjunction with the central processing unit 56 to enable bidirectional communications between the image recognition system 10 and a host system for a particular application. The host system may be a home unit (not shown) of the type as described in United States patent 4,697,209 to David A. Kiewit and Daozheng Lu and/or the data transfer device 16.
The control command processor subsystem 14 further may include an image display 62, a computer display 64 and a keyboard 66 for use during the installation of the image recognition system 10.

Control signals from the central processing unit 56 at a line 68 are applied to the illumination controller 22 for controlling illumination of the monitored area. Motor control signals at a line 70 from the central processing unit 56 are applied to the scanner controller 24, which are translated and applied to the stepping motor 36 at a line 36A. Feedback position signals may be provided to the central processing unit 56. Bidirectional communications are provided between the central processing unit 56 and the audience recognition subsystem 26, illustrated at a line 72.
FIGS. 6 and 7 provide a block diagram representation of the audience recognition subsystem 26. Referring initially to FIG. 6, a learning operational mode of the audience recognition subsystem 26 is illustrated. The infrared image output signal at line 28A of the infrared video camera 28 is applied to an image acquisition block 74 coupled to an analog-to-digital A/D converter 76 which generates a digital representation of the infrared image signal. A face image registration block 78 identifies a predetermined portion (mxn) pixels of the digitized image signal. The values of m and n are between 32 and 256, for example, such as a middle pixel image portion including m=50 and n=50. A gray-level subimage output of the face image registration block 78 at a line G-Sub is applied to a normalization block 80. The normalized output of block 80 is applied to a thresholding block 82 to provide a thresholded, binary level face image output at a line B-Sub. Each pixel of the (mxn) thresholded, binary level face or B-Sub image is represented by a single binary digit or bit, or 2500 bits for the 50x50 pixels. The B-Sub image signal is applied to a feature signature extraction block 84. An extracted pattern image signature output of the feature signature extraction block 84 is stored in both an individual face storage library (IFL) 86 and a universal face model (UFM) storage block 88. The universal face model UFM includes all the individual pattern image or face signatures stored within the individual face library IFL. A stop function flag is set at stop blocks 90 for updating the image libraries performed by the control command processor subsystem 14 as illustrated in FIG. 8A.
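The normalization and thresholding of blocks 80 and 82 reduce the gray-level G-Sub image to a one-bit-per-pixel B-Sub image. A minimal sketch, assuming a simple fixed threshold (the patent does not specify the thresholding rule, and the normalization step is omitted here):

```python
def to_binary_subimage(gray, threshold=128):
    """Threshold an m x n gray-level subimage (G-Sub) into a binary
    subimage (B-Sub) whose pixels are single bits, as at blocks 80-82.
    The fixed threshold of 128 is an illustrative assumption."""
    return [[1 if pixel >= threshold else 0 for pixel in row]
            for row in gray]
```

For the m=50, n=50 example, the resulting nested list holds the 2500 one-bit pixels from which the feature signature is extracted.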
FIG. 6A provides a graphical representation of a B-Sub image including mxn pixels. Each of the mxn pixels is either a zero or a one. The B-Sub image pixel data is utilized to extract the pattern image signature for storing in the learning operational mode (FIG. 6) and to extract the pattern image signature for comparing with the universal image signature and the pattern image signatures in the recognition operational mode illustrated in FIG. 7.
In accordance with a feature of the invention, a pseudo random predetermined sequence of the mxn B-Sub image bits defines a predetermined number T of feature positions used for storing the extracted feature signature output of the feature signature extraction block 84. Each feature position has a predetermined length L, where the value of L is between 3 and 10. Considering a predetermined feature position of length L=7 and with the above example B-Sub image of 2500 bits, a pseudo random sequence of 2500/7 or 357 feature positions results, or T=357. Each feature has a value between 0 and (2^L - 1) or, for example, between 0 and 127 when L=7. A memory space of 2^L bits arranged as bytes b, where b equals 2^L/8, is used for storing the possible feature values for each of the feature positions or, for example, 2^7 or 128 bits or 16 bytes. Thus a total memory space for each of the pattern or face image signatures and the universal pattern image signature equals T multiplied by b or, for example, 357 positions x 16 bytes/position or 5712 bytes.
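The signature sizing above can be checked directly. This sketch simply reproduces the patent's arithmetic for the m=50, n=50, L=7 example (the function name is illustrative):

```python
def signature_geometry(m=50, n=50, L=7):
    """Reproduce the signature-size arithmetic: an m x n binary subimage
    yields T feature positions of length L, each position needing 2**L
    flag bits (b bytes) to cover every possible feature value, for a
    total of T * b bytes per stored signature."""
    bits = m * n          # 2500 one-bit B-Sub pixels
    T = bits // L         # 2500 // 7 = 357 feature positions
    b = (2 ** L) // 8     # 128 bits = 16 bytes per feature position
    return T, b, T * b    # 357 positions x 16 bytes = 5712 bytes
```

With the defaults this returns T=357, b=16 and a 5712-byte memory space, matching the figures given in the text.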
FIG. 6A illustrates a plurality of feature positions i=0 through i=(T-1), generally designated by the reference character 84 corresponding to the feature extraction block 84. The corresponding memory space is represented by the reference character 86 corresponding to the IFL block 86. The first or i=0 feature position value is stored in a corresponding bit position in a corresponding byte between 0 and (b-1) within the memory space 86. The logic steps performed for storing the individual face and the universal face model are described with respect to FIG. 8B.
A distinct memory space of a predetermined capacity is defined for the universal pattern image signature and each of the pattern image or individual face signatures within the image face library. For example, for a viewing audience including a defined number of audience members P, individual face signatures (TxP) are stored in both the corresponding IFL defined memory spaces (bxTxP) and the UFM defined memory space (bxT). Multiple face images are learned for each of the audience members P by sequentially processing a series of video images of the video camera 28 by the image signal processing blocks of FIG. 6 for each of the audience members. All of the resulting extracted pattern image signatures for each of the audience members are stored in the particular corresponding memory space of the IFL memory spaces.
FIG. 7 provides a block diagram representation of the recognition mode of the audience recognition subsystem 26. The digital representation of the infrared image signal from the analog-to-digital A/D converter 76, corresponding to an identified direction of an audience member by the audience location subsystem 18, is applied to a zooming and vertical strip image block 92. A first search area matrix (mxn)i is identified by a search area registration block 94. A gray-level subimage output G-Sub of the search area registration block 94 is applied to a normalization block 96. The normalized output of block 96 is applied to a thresholding block 98 to provide a thresholded, binary level search area image output B-Sub. The search area B-Sub image is compared with the universal pattern image signature at a block 100 labelled recognition for UFM.

If the decision at a block 102 is that the search area B-Sub image matches or exceeds a predetermined correlation threshold with the universal pattern image signature, then the search area B-Sub image is compared to identify a match with each of the pattern image signatures stored in the individual face library, as illustrated at a block 104. Then, or after a decision at block 102 that the search area B-Sub image does not match the universal pattern image signature, i is incremented by 1, or i=i+1, at a block 106 so that a next search matrix (mxn)i is identified by the search area registration block 94 and processed as described for the first search area matrix.

After each of the search area matrices has been processed, more than one B-Sub image may be found that matches the universal face model and an individual face library. The search area B-Sub image having the best matching rate or highest correlation with an individual face in the individual face library is identified at a conclusion block 108. The logic steps performed for recognizing the universal face model and the individual face are described with respect to FIG. 8B. An output signal at a line 110 is then stored corresponding to the particular identified individual member of the viewing audience. The thus identified individual viewing member data can be stored together with other parameter data of a television data collection system, such as channel reception of a monitored receiver.
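As a rough model of the recognition flow at blocks 100-108, the sketch below represents each signature as a set of (feature position, feature value) pairs and uses a simple matching-rate threshold. Both choices are assumptions for clarity: the patent stores signatures as packed bit maps and leaves the exact correlation test to FIG. 8B.

```python
def match_score(extracted, stored):
    """Fraction of extracted (position, value) feature pairs whose flag
    is set in a stored signature; models the comparison at blocks
    100 and 104.  The set representation is an illustrative assumption."""
    hits = sum(1 for pair in extracted if pair in stored)
    return hits / len(extracted)


def best_member(extracted, library, threshold=0.5):
    """Return the library member with the highest matching rate, as at
    conclusion block 108, or None when nobody clears the assumed
    correlation threshold."""
    best, best_rate = None, threshold
    for member, signature in library.items():
        rate = match_score(extracted, signature)
        if rate > best_rate:
            best, best_rate = member, rate
    return best
```

Comparing against the universal face model first acts as a cheap pre-filter: only search areas that look like some stored face at all are compared against every individual signature.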
Referring to FIG. 8A, there is a main flow chart illustrating the logical steps performed by the control command processor subsystem 14 of the image recognition system 10. The sequential steps begin with an initialization routine. Then if a stop function is set, the pattern image signature and universal pattern image signature memory spaces can be updated to include the IFL and UFM signatures stored at blocks 86 and 88 of FIG. 6. Otherwise, it is determined whether any function or mode has been selected, such as by a remote control or keyboard entry. If yes, then the selected function or mode is set or updated and then performed. Otherwise, the next sequential function or mode of modes 1-7 is performed.
FIG. 8B is a flow chart illustrating the logic steps performed for learning and recognizing the universal face model and the individual face. The sequential operations begin by setting a memory space address ADDR to the starting address with N-found set to zero. In the learning mode, an identified feature value from the B-Sub image is set to a corresponding bit position, starting with feature position i=0 and repeated for each feature position to i=356. The corresponding bit position B bit of ADDR + A byte is determined by the particular feature value S, where S is between 0 and 127, A equals an integer value S/8 and B equals S mod 8, or the residue of S after A bytes. For example, a feature value S=114 from the B-Sub image for the feature position i=0 is set to the 2nd bit of ADDR + 14 byte.
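The byte and bit arithmetic above can be sketched in a short helper. This is a hypothetical rendering (function and variable names are not from the patent); the 16-byte-per-position layout follows from the memory space computation given later in the specification for L=7:

```python
def set_feature_bit(signature: bytearray, i: int, S: int) -> None:
    """Learning mode: set the bit for feature value S at feature position i.

    Each feature position occupies 16 bytes (128 bits, one per possible
    value of S when the feature length L = 7).  ADDR is the start of the
    block for position i; A = S // 8 selects the byte and B = S mod 8
    selects the bit within that byte.
    """
    ADDR = i * 16              # start of the 16-byte block for position i
    A = S // 8                 # byte offset within the block
    B = S % 8                  # bit index within that byte
    signature[ADDR + A] |= 1 << B

# The worked example from the text: S = 114 at position i = 0 sets
# the 2nd bit of byte ADDR + 14, since 114 // 8 = 14 and 114 mod 8 = 2.
signature = bytearray(357 * 16)    # one 16-byte block per feature position
set_feature_bit(signature, 0, 114)
```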
An individual audience member face image may be learned multiple times (R) with R possible different extracted signatures resulting, depending on any changed facial expressions or various profiles of the audience member. Each of the extracted feature signatures is sequentially stored within the corresponding pattern image signature memory space for the particular audience member by repeating the sequential signal processing of FIG. 6 and the learning or storing steps of FIG. 8B.
Otherwise, if not in the learning mode, then the sequential steps for the recognition mode are performed, such as at the recognition for UFM block 100 when the search area B-Sub image is compared with the universal pattern image signature or at the block 104 when the search area B-Sub image is compared with individual pattern image signatures.
In the recognition mode, the identified feature value from the B-Sub image is compared to a corresponding bit position, starting with feature position i=0 and repeated for each feature position to i=356. If the corresponding bit position is set, a match is indicated and the N-found value is incremented by one. Otherwise, if the corresponding bit position is not set, nonmatching is indicated and the N-found value is not changed. The next incremental feature position is then compared to the corresponding bit position for the identified feature value.
After the last feature position i=356 has been identified and compared to identify a match, then the resulting N-found value is compared with a threshold value.
If the resulting N-found value is less than the threshold value, then a FALSE or no recognition for the UFM or the particular IFL is indicated. If the resulting N-found value is greater than or equal to the threshold value, then a TRUE or a recognition of the UFM or the particular IFL is indicated.
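This recognition pass can be sketched compactly. Names are hypothetical, and the signature layout is the 16-byte-per-position scheme the specification describes for FIG. 8B:

```python
def recognize(signature: bytes, features: list[int], threshold: int) -> bool:
    """Recognition mode: count matching feature positions (N-found)
    and compare the count against a threshold.

    features[i] is the feature value S extracted from the B-Sub image
    at feature position i; the position matches when the bit for S was
    set in the stored signature during learning.
    """
    n_found = 0
    for i, S in enumerate(features):
        if signature[i * 16 + S // 8] & (1 << (S % 8)):
            n_found += 1               # bit set: this position matches
    return n_found >= threshold        # TRUE means recognized
```

A TRUE result corresponds to recognition of the UFM (block 100) or of a particular IFL entry (block 104), depending on which stored signature is supplied.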
FIG. 8C is a flow chart illustrating the operational function or mode 1 logical steps performed to add to the individual pattern image signatures and universal pattern image signature memory space or library. The sequential steps begin with a get and display a picture subroutine illustrated in FIG. 8D. Next a search all libraries subroutine illustrated in FIG. 8E is performed.
The results are displayed and added to the library.
The get and display a picture subroutine of FIG. 8D starts with an image acquisition step (block 74 of FIG. 6). The infrared video image is processed (blocks 76, 78, 80 and 82 of FIG. 6) to provide a binary picture (B-Sub image). A ratio of the ones in the resulting binary picture is calculated and the resulting binary picture is displayed.
In FIG. 8E, the search all libraries subroutine begins with a check of the exposure time based on the calculated ratio of ones, and if adjustment is required, then the sequential operations return without searching the libraries. Otherwise, if adjustment of the exposure time is not required, then an initial MAX value is set for the predetermined N-found value. A first library is searched (block 104 of FIG. 7 and FIG. 8B) and if the resulting N-found value is greater than the initial MAX value, then the MAX value is updated. Otherwise the MAX value is not changed. Then a next library is searched and the result is compared to the resulting MAX value and adjusted, until all the libraries have been searched.
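The library scan can be sketched as follows. Names are hypothetical; `count_matches` stands in for the per-library N-found computation of FIG. 8B:

```python
def count_matches(signature, features):
    """N-found: number of feature positions whose stored bit is set."""
    return sum(
        1 for i, S in enumerate(features)
        if signature[i * 16 + S // 8] & (1 << (S % 8))
    )

def search_all_libraries(libraries, features, initial_max):
    """Search every individual face library, tracking the best N-found.

    libraries maps a member identifier to that member's stored pattern
    image signature; the member with the highest N-found above the
    initial MAX value is returned (None if no library beats it).
    """
    best, max_value = None, initial_max
    for member, signature in libraries.items():
        n_found = count_matches(signature, features)
        if n_found > max_value:        # better match: update MAX
            best, max_value = member, n_found
    return best, max_value
```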
FIG. 8F is a flow chart illustrating the operational function or mode 2 logical steps performed to verify and add to a library. The sequential steps begin with the get and display the picture subroutine illustrated in FIG. 8D. Next the search all libraries subroutine illustrated in FIG. 8E is performed. The results are displayed and added to the identified correct library.
FIG. 8G is a flow chart illustrating the operational function or mode 3 logical steps performed to search and display. The sequential steps begin with a get and display the picture subroutine illustrated in FIG. 8D.
Next the search all libraries subroutine illustrated in FIG. 8E is performed. The results are displayed.
FIG. 8H is a flow chart illustrating the operational function or mode 4 logical steps performed to locate head and search. The sequential steps begin with a search raw picture for heads subroutine illustrated in FIG. 8I. Next a locate and search head(s) subroutine illustrated in FIG. 8J is performed.
In FIG. 8I, the search raw picture for head(s) subroutine begins with a check of the exposure time, and if adjustment is required, then the sequential operations return without performing any searching for heads. Otherwise, if adjustment of the exposure time is not required, then an initial MAX value is set for the predetermined N-found value and a search area pointer i is reset. The first search area matrix is identified and compared with the universal pattern image signature UFM (block 100 of FIG. 7). The result is compared with the correlation threshold MAX value, and if the result is greater than the initial MAX value, then that search area pointer is saved and the MAX value is updated. Otherwise, the search area pointer is not saved and the MAX value is not changed. Then the search area pointer value is updated and the next search area matrix is identified and the sequential steps are repeated until the total raw picture has been searched.
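One reading of the FIG. 8I flow can be sketched as below. Names are hypothetical, and `ufm_score` stands in for the UFM comparison of block 100:

```python
def search_raw_picture(search_areas, ufm_score, initial_max):
    """Scan every search area matrix against the universal face model.

    Pointers to areas that beat the running MAX are saved; the last
    saved pointer therefore indexes the best-scoring head candidate.
    Areas that never beat the initial MAX save nothing.
    """
    saved, max_value = [], initial_max
    for pointer, area in enumerate(search_areas):
        result = ufm_score(area)       # N-found against the UFM
        if result > max_value:
            saved.append(pointer)      # save pointer to this candidate
            max_value = result         # update MAX
    return saved, max_value
```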
FIG. 8J illustrates the locate and search head(s) subroutine performed by the control command processor subsystem 14 in the mode 4. If one search area pointer is stored in the subroutine of FIG. 8I, then the search area window is set to the identified search area matrix by the saved pointer value which corresponds to the head image portion. The exposure time is adjusted and the search all libraries subroutine of FIG. 8E is performed and the results are displayed.
Otherwise, if more than one pointer value is stored in the subroutine of FIG. 8I, then the MAX value is reset to a predetermined initial value. Then the search area window is set to the first identified search area matrix by the first saved pointer value which corresponds to a first head image portion. A local normalization is performed on the search area matrix data and the search all libraries subroutine of FIG. 8E is performed, and if the result is greater than the initial MAX value, then the MAX value is updated. Otherwise the MAX value is not changed.
Then a next search area window is set to the next saved pointer value which corresponds to a next head image portion and the sequential steps are repeated until all the head image portions have been searched. Then the search area window is set to the identified search area matrix having the highest MAX value which corresponds to the head image portion. A local normalization is performed on the search area matrix data and the search all libraries subroutine of FIG. 8E is performed and the results are displayed.
FIG. 8K is a flow chart illustrating the operational function or mode 5 logical steps performed to scan and search the monitored area. The sequential steps begin with scanning of the monitored area. Then the video camera 28 is pointed to audience members within the monitored area and the mode 4 operations of FIG. 8H are performed.
FIG. 8L is a flow chart illustrating the operational function or mode 6 logical steps performed to shift and learn. The sequential steps begin with the get and display the picture subroutine illustrated in FIG. 8D. Next the search all libraries subroutine illustrated in FIG. 8E is performed. The results are displayed and if all positions have been learned, then the sequential operations return without adding to the library. Otherwise, the audience member image is shifted to the left one position and added to the pattern image signature IFL and universal pattern image signature UFM. Then the audience member image is moved up one position and the steps are sequentially repeated until all positions have been learned and added to the library.
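The shift-and-learn grid is not dimensioned in the text; the sketch below uses a hypothetical grid size and a stand-in `learn` callback for the signature extraction and storage of FIG. 6:

```python
def shift_and_learn(image, learn, shifts_x=3, shifts_y=3):
    """Mode 6: re-learn the member image at left/up shifted positions
    so recognition tolerates small registration offsets.

    image is a 2-D list of pixels; learn(img) stands in for extracting
    a signature and adding it to the IFL and UFM.  The grid sizes are
    assumptions; the patent only says the shifts repeat until all
    positions have been learned.
    """
    rows, cols = len(image), len(image[0])
    for dy in range(shifts_y):            # move up one position at a time
        for dx in range(shifts_x):        # shift left one position at a time
            shifted = [
                [image[(r + dy) % rows][(c + dx) % cols] for c in range(cols)]
                for r in range(rows)
            ]
            learn(shifted)
```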
FIG. 8M is a flow chart illustrating the operational function or mode 7 logical steps performed to search and pause. The sequential steps begin with the search raw picture for heads subroutine illustrated in FIG. 8I. Next the locate and search head(s) subroutine illustrated in FIG. 8J is performed. Then if a continue decision is yes, the sequential mode 7 steps are repeated.
Although the present invention has been described in connection with details of the preferred embodiment, many alterations and modifications may be made without departing from the invention. Accordingly, it is intended that all such alterations and modifications be considered as within the spirit and scope of the invention as defined in the appended claims.
The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
IMAGE RECOGNITION SYSTEM AND METHOD
BACKGROUND OF THE INVENTION
A. Field of the Invention
The present invention relates generally to image recognition systems, and more particularly to image recognition systems and methods for use with television audience measurement and marketing data collection systems.
B. Description of the Prior Art
Manual systems for determining the viewing/listening habits of the public are prone to inaccuracies resulting from the entry of erroneous data that may be intentionally or unintentionally entered and are slow in acquiring data.
United States Patent No. 3,056,135 to Currey et al. issued September 25, 1962 and assigned to the same assignee as the present application describes a method and apparatus for automatically determining the listening habits of wave signal receiver users. The method disclosed in Currey et al. provides a record of the number and types of persons using a wave signal receiver by monitoring the operational conditions of the receiver and utilizing both strategically placed switches for counting the number of persons entering, leaving and within a particular area and a photographic recorder for periodically recording the composition of the audience. A mailable magazine provides a record of both the audience composition and the receiver operation information for manual processing by a survey organization. Thus a disadvantage is that acquiring data is slow and further many viewing audience members object to being identified from the photographic record.
United States Patent No. 4,644,509 to Kiewit et al. issued February 17, 1987 and assigned to the same assignee as the present application discloses an ultrasonic, pulse-echo method and apparatus for determining the number of persons in the audience and the composition of the audience of a radio receiver and/or a television receiver. First and second reflected ultrasonic wave maps of the monitored area are collected, first without people and second with people who may be present in the monitored area. The first collected background defining map is subtracted from the second collected map to obtain a resulting map. The resulting map is processed to identify clusters having a minimum intensity. A cluster size of the thus identified clusters is utilized to identify clusters corresponding to people in an audience. While this arrangement is effective for counting viewing audience members, individual audience members can not be identified.
Various image recognition arrangements and systems are known for recognizing patterns within a captured video image. However, the conventional pattern recognition systems are impractical and uneconomical for identifying individual audience members of a viewing audience due to the vast information storage and computing requirements that would be needed in the conventional systems. It is desirable to provide an image recognition system having the capability to identify individual members of the viewing audience.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a method and system for determining the viewing habits of the public that overcome many of the disadvantages of the prior art systems.
It is an object of the present invention to provide an image recognition method and system for identifying predetermined individual members of a viewing audience in a monitored area.
It is an object of the present invention to provide an image recognition method and system for identifying a pattern of a plurality of predetermined patterns in a video image.
It is an object of the present invention to provide an image recognition method and system for identifying a pattern of a plurality of predetermined patterns in a video image utilizing improved feature signature extraction and storage techniques.
Therefore, in accordance with a preferred embodiment of the invention, there are provided an image recognition method and system for identifying a pattern of a plurality of predetermined patterns in a video image. A plurality of feature image signatures are stored corresponding to each of the plurality of predetermined patterns. A universal feature image signature is stored that includes each of the stored feature image signatures. A predefined series of portions of a captured video image is sequentially compared with the universal feature image signature to identify matching portions. Each of the identified matching video image portions is compared with the stored feature image signatures to identify the predetermined pattern.
In accordance with a feature of the invention, each of the plurality of feature image signatures and the universal feature image signature are stored in a distinct memory space of a predetermined capacity. The feature image signatures are generated by processing a plurality of video images of the pattern to be identified. A signature from each of the processed video images is extracted and stored in the corresponding predetermined memory space for the particular pattern and in the predetermined memory space for the universal feature image signature.
A feature image signature is stored corresponding to each predetermined individual member's face of a viewing audience. An audience scanner includes audience locating circuitry for locating individual audience members in the monitored area. A video image is captured and processed for a first one of the located individual audience members. A portion of the processed video image is identified that matches a stored universal image signature that includes each of the feature image signatures. The identified portion is compared with the stored feature image signatures to identify the audience member.
These steps are repeated to identify all of the located individual audience members in the monitored area.
According to one aspect, the present invention provides an image recognition system for identifying a predetermined pattern of a plurality of predetermined patterns in a video image comprising: means for storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the plurality of predetermined patterns; means for storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; means for sequentially comparing a predefined series of portions of the video image with said universal pattern image signature and for identifying matching video image portions; and means for comparing each of said identified matching video image portions with said stored pattern image signatures to identify the predetermined pattern.
According to another aspect, the present invention provides a method of identifying a predetermined pattern of a plurality of predetermined patterns in a video image comprising the steps of: storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the plurality of predetermined patterns; storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; sequentially comparing a predefined series of portions of the video image with said universal pattern image signature and identifying matching video image portions; and comparing each of said identified matching video image portions with said stored pattern image signatures to identify the predetermined pattern.
According to yet another aspect, the present invention provides a method of generating a pattern image signature for use in an image recognition system for identifying a predetermined pattern of a plurality of predetermined patterns in a video image, the image recognition system including a distinct pattern image signature memory space for storing each pattern image signature corresponding to one of the plurality of predetermined patterns and a universal image memory space for storing a universal pattern image signature, the universal pattern image signature corresponding to a composite signature of each of the pattern image signatures; said method comprising the steps of: capturing a video image of the predetermined pattern; processing said captured video image to provide a digitized image signal; identifying a feature value from said digitized image signal for each of a plurality of predefined feature positions; identifying a memory location corresponding to said identified feature value for each of a plurality of predefined feature positions; storing a binary digit one in said identified memory locations in the pattern image signature memory space corresponding to the predetermined pattern; and storing a binary digit one in corresponding memory locations in said universal image memory space.
According to still another aspect, the present invention provides an image recognition system for identifying predetermined individual members of a viewing audience in a monitored area: memory means for storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the predetermined individual members; means for storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; audience scanning means for locating individual members in the monitored area; means for capturing a video image of each said located individual member in the monitored area; means for sequentially comparing a predefined series of portions of each said captured video image with said universal pattern image signature and for identifying matching video image portions; and means for comparing each of said identified matching video image portions with said stored pattern image signatures to identify at least one of the predetermined individual members.
According to a further aspect, the present invention provides a method of identifying predetermined individual members of a viewing audience in a monitored area: storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the predetermined individual members; storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures; scanning the monitored area and generating a temperature representative signal of the monitored area; processing said generated temperature representative signal and providing individual members direction signal corresponding to located individual members; capturing a video image of each located individual member responsive to said individual members direction signal in the monitored area; sequentially comparing a predefined series of portions of the captured video image with said universal pattern image signature and identifying matching video image portions; and comparing each of said identified matching video image portions with said stored pattern image signatures to identify at least one of the predetermined individual members.
DESCRIPTION OF THE DRAWING
These and other objects and advantages of the present invention will become readily apparent upon consideration of the following detailed description and attached drawing wherein:
FIG. 1 is a block diagram of the image recognition system according to the present invention;
FIG. 2 is a perspective view partly broken away to show interior details of an audience scanner of the image recognition system of FIG. 1;
FIG. 3 is a block diagram of an audience location subsystem of the image recognition system of FIG. 1;
FIG. 4 is a block diagram of a distance measurement subsystem of the image recognition system of FIG. 1;
FIG. 5 is a block diagram of a control command processor subsystem of the image recognition system of FIG. 1;
FIG. 6 is a block diagram of a learning functional portion of an audience recognition subsystem of the image recognition system of FIG. 1;
FIG. 6A is a graphical representation of a binary subimage and feature identifying logic for extracting and storing an image signature of the image recognition system of FIG. 1;
FIG. 7 is a block diagram of a recognition functional portion of the audience recognition subsystem of the image recognition system of FIG. 1;
FIGS. 8A - 8M are flow charts illustrating the logical steps performed by the image recognition system of FIG. 1.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Referring now to the drawing, with particular attention to FIG. 1, there is illustrated a block diagram of a new and improved image recognition system according to the invention generally designated by the reference numeral 10. While the image recognition system 10 is depicted and generally described herein for use with a television receiver to identify individual members of a viewing audience, the principles of the present invention are also applicable to other image recognition systems.
As its major components, the image recognition system 10 includes an audience scanner 12 for scanning and capturing an image of the viewing audience members within a monitored area, and a control command processor subsystem 14 for performing control operations and for storing and processing captured images. A data transfer device 16 is used for periodically transferring stored data to a central computer (not shown) of the television audience measurement and/or marketing data collection systems. The image recognition system 10 includes an audience location subsystem 18 illustrated in FIG. 3 for locating the audience members within the monitored area, a distance measurement subsystem 20 illustrated in FIG. 4 for identifying the distance between audience members and the audience scanner 12, an illumination controller 22 and a scanner controller 24 for providing illumination and motor control signals to the audience scanner 12. An audience recognition subsystem 26 for learning and for recognizing feature image signatures of the audience members is illustrated in FIGS. 6 and 7.
Referring also to FIG. 2, the audience scanner 12 includes a video camera 28 providing a video image signal at a line 28A that is applied to the audience recognition subsystem 26. An infrared video camera, for example, such as a Model CCD1200 IR Microcam, manufactured and sold by Electrophysics Corporation of Nutley, New Jersey, may be employed for the video camera 28. An infrared sensor 30 provides a sensed infrared signal at a line 30A that is applied to the audience location subsystem 18. A parallel opposed dual pyroelectric infrared detector used in conjunction with an optic focusing device including a pair of fixed surface mirrors and a Fresnel lens may be used for the infrared sensor 30, for example, such as an Eltec Model 4192 and an Eltec Model 826C manufactured and sold by Eltec Instruments, Inc. of Daytona Beach, Florida. An ultrasound transducer 32, such as a 50 kHz electrostatic transducer, for transmitting and for receiving ultrasonic signals provides a distance pulse echo signal at a line 32A
that is applied to the distance measurement subsystem 20.
A pair of infrared illumination devices 34 for illuminating the monitored area are operatively controlled by the illumination controller 22. A Model IRL200 infrared room illuminator manufactured and sold by Electrophysics Corporation of Nutley, New Jersey, may be employed for the illumination devices 34, although various illumination devices such as infrared lasers, light emitting diodes or a filtered flash lamp can be used. A scanner drive 36, such as a stepping motor, is operatively controlled by the scanner controller 24 for stepwise angular rotation of the video camera 28 for scanning the monitored area.
FIG. 3 provides a block diagram of the audience location subsystem 18 of the image recognition system 10. The sensed voltage signal output of the infrared sensor 30 at line 30A corresponds to the temperature distribution of the monitored area. The sensed infrared signal at line 30A is applied to a preamplifier device 38. The amplified infrared signal is applied to a low pass filter 40 for providing a filtered infrared signal that is applied to an analog-to-digital A/D converter 42 which generates a digital representation of the processed infrared signal. The digitized signal is applied to an audience location computation logic circuit 44 to identify directions within the monitored area corresponding to the possible locations of individual audience members. The identified directions signal at a line 46 is applied to the control command processor subsystem 14.
Alternatively, the separate audience location computation logic circuit 44 may be eliminated with the digitized signal output of the A/D converter 42 applied to the control command processor subsystem 14. Then the direction identifying function of the computation logic circuit 44 is performed by the control command processor subsystem 14.
The control command processor subsystem 14 utilizes the identified directions signal at line 46 from the audience location subsystem 18 to initiate operation of the distance measurement subsystem 20.
FIG. 4 provides a block diagram of the distance measurement subsystem 20 of the image recognition system 10. An ultrasound range module 48 drives the ultrasound transducer 32 for transmitting an ultrasonic burst signal and for receiving an echo signal responsive to an enable or initiate input signal applied by a transmitter controller device 50. An output echo signal of the ultrasound range module 48 is coupled to the control command processor subsystem 14 via a distance measurement logic circuit 52 which converts the echo signal to a suitable format for use by the control command processor subsystem 14. A sonar ranging module, for example, such as an integrated circuit device type SN28827 manufactured and sold by Texas Instruments, may be used for the ultrasound range module 48. Bidirectional communications with the control command processor subsystem 14 at a line 54 include the processed echo signal output of the distance measurement logic circuit 52 and an input control signal to the transmitter controller 50.
The processed echo signal representative of the distance between the scanner 12 and the located individual audience member is utilized by the control command processor subsystem 14 for adjusting the focus and zooming functions of the video camera 28.
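The patent does not spell out the conversion from echo time to distance, but the standard pulse-echo sonar relation applies. A minimal sketch, with the speed of sound as an assumed constant:

```python
def echo_distance_m(round_trip_s: float, speed_of_sound: float = 343.0) -> float:
    """Distance to the located audience member in metres.

    round_trip_s is the time between the transmitted ultrasonic burst
    and the received echo; the pulse travels out and back, hence the
    division by 2.  speed_of_sound (m/s, air at roughly 20 C) is an
    assumption, not a figure from the patent.
    """
    return speed_of_sound * round_trip_s / 2.0
```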
FIG. 5 provides a block diagram representation of the control command processor subsystem 14 of the image recognition system 10. The control command processor subsystem 14 includes a central processing unit 56, such as an Intel 80286 high performance 16-bit microprocessor with integrated memory management and adapted for multi-tasking systems, and an associated memory device 58. The central processing unit 56 is programmable to perform the control and signal processing functions and includes, in known manner, asynchronous input signal timing and clock control bus timing functions. An interface device 60 is provided in conjunction with the central processing unit 56 to enable bidirectional communications between the image recognition system 10 and a host system for a particular application. The host system may be a home unit (not shown) of the type as described in United States Patent 4,697,209 to David A. Kiewit and Daozheng Lu and/or the data transfer device 16.
The control command processor subsystem 14 further may include an image display 62, a computer display 64 and a keyboard 66 for use during the installation of the image recognition system 10.
Control signals from the central processing unit 56 at a line 68 are applied to the illumination controller 22 for controlling illumination of the monitored area. Motor control signals at a line 70 from the central processing unit 56 are applied to the scanner controller 24, which are translated and applied to the stepping motor 36 at a line 36A. Feedback position signals may be provided to the central processing unit 56. Bidirectional communications are provided between the central processing unit 56 and the audience recognition subsystem 26 illustrated at a line 72.
FIGS. 6 and 7 provide a block diagram representation of the audience recognition subsystem 26. Referring initially to FIG. 6, a learning operational mode of the audience recognition subsystem 26 is illustrated. The infrared image output signal at line 28A of the infrared video camera 28 is applied to an image acquisition block 74 coupled to an analog-to-digital A/D converter 76 which generates a digital representation of the infrared image signal. A face image registration block 78 identifies a predetermined portion (mxn) pixels of the digitized image signal. The values of m and n are between 32 and 256, for example, such as a middle pixel image portion including m=50 and n=50. A gray-level subimage output of the face image registration block 78 at a line G-Sub is applied to a normalization block 80. The normalized output of block 80 is applied to a thresholding block 82 to provide a thresholded, binary level face image output at a line B-Sub. Each pixel of the (mxn) thresholded, binary level face or B-Sub image is represented by a single binary digit or bit, or 2500 bits for the 50x50 pixels. The B-Sub image signal is applied to a feature signature extraction block 84. An extracted pattern image signature output of the feature signature extraction block 84 is stored in both an individual face storage library (IFL) 86 and a universal face model (UFM) storage block 88. The universal face model UFM includes all the individual pattern image or face signatures stored within the individual face library IFL.
A stop function flag is set at stop blocks 90 for updating the image libraries performed by the control command processor subsystem 14 as illustrated in FIG. 8A.
FIG. 6A provides a graphical representation of a B-Sub image including mxn pixels. Each of the mxn pixels is either a zero or a one. The B-Sub image pixel data is utilized to extract the pattern image signature for storing in the learning operational mode (FIG. 6) and to extract the pattern image signature for comparing with the universal image signature and the pattern image signatures in the recognition operational mode illustrated in FIG. 7.
In accordance with a feature of the invention, a pseudo random predetermined sequence of the mxn B-Sub image bits defines a predetermined number T of feature positions used for storing the extracted feature signature output of the feature signature extraction block 84. Each feature position has a predetermined length L, where the value of L is between 3 and 10. Considering a predetermined feature position of length L=7 and with the above example B-Sub image of 2500 bits, a pseudo random sequence of 2500/7 or 357 feature positions results, or T=357. Each feature has a value between 0 and (2^L-1) or, for example, between 0 and 127 when L=7. A memory space of 2^L bits arranged as bytes b, where b equals 2^L/8, is used for storing the possible feature values for each of the feature positions or, for example, 2^7 or 128 bits or 16 bytes. Thus a total memory space for each of the pattern or face image signatures and the universal pattern image signature equals T multiplied by b or, for example, 357 positions x 16 bytes/position or 5712 bytes.
FIG. 6A illustrates a plurality of feature positions i=0 through i=(T-1), generally designated by the reference character 84 corresponding to the feature extraction block 84. The corresponding memory space is represented by the reference character 86 corresponding to the IFL block 86. The first or i=0 feature position value is stored in a corresponding bit position B in a corresponding byte between 0 and (b-1) within the memory space 86. The logic steps performed for storing the individual face and the universal face model are described with respect to FIG. 8B.
A distinct memory space of a predetermined capacity is defined for the universal pattern image signature and each of the pattern image or individual face signatures within the image face library. For example, for a viewing audience including a defined number of audience members P, individual face signatures (TxP) are stored in both the corresponding IFL defined memory spaces (bxTxP) and the UFM defined memory space (bxT). Multiple face images are learned for each of the audience members P by sequentially processing a series of video images of the video camera 28 by the image signal processing blocks of FIG. 6 for each of the audience members. All of the resulting extracted pattern image signatures for each of the audience members are stored in the particular corresponding memory space of the IFL memory spaces.
FIG. 7 provides a block diagram representation of the recognition mode of the audience recognition subsystem 26. The digital representation of the infrared image signal from the analog-to-digital A/D converter 76, corresponding to an identified direction of an audience member by the audience location subsystem 18, is applied to a zooming and vertical strip image block 92. A first search area matrix (mxn)i is identified by a search area registration block 94. A gray-level subimage output G-Sub of the search area registration block 94 is applied to a normalization block 96. The normalized output of block 96 is applied to a thresholding block 98 to provide a thresholded, binary level search area image output B-Sub.
The search area B-Sub image is compared with the universal pattern image signature at a block 100 labelled recognition for UFM.
If the decision at a block 102 is that the search area B-Sub image matches or exceeds a predetermined correlation threshold with the universal pattern image signature, then the search area B-Sub image is compared to identify a match with each of the pattern image signatures stored in the individual face library as illustrated at a block 104. Then, or after a decision at block 102 that the search area B-Sub image does not match the universal pattern image signature, i is incremented by one at a block 106 so that a next search matrix (mxn)i is identified by the search area registration block 94 and processed as described for the first search area matrix.
After each of the search area matrices has been processed, more than one B-Sub image may be found that matches the universal face model and an individual face library. The search area B-Sub image having the best matching rate or highest correlation with an individual face in the individual face library is identified at a conclusion block 108. The logic steps performed for recognizing the universal face model and the individual face are described with respect to FIG. 8B. An output signal at a line 110 is then stored corresponding to the particular identified individual member of the viewing audience. The thus identified individual viewing member data can be stored together with other parameter data of a television data collection system, such as channel reception of a monitored receiver.
Referring to FIG. 8A, there is a main flow chart illustrating the logical steps performed by the control command processor subsystem 14 of the image recognition system 10. The sequential steps begin with an initialization routine. Then, if a stop function is set, the pattern image signatures and universal pattern image signature memory spaces can be updated to include the IFL and UFM signatures stored at blocks 86 and 88 of FIG. 6.
Otherwise, it is determined whether any function or mode has been selected, such as by a remote control or keyboard entry. If yes, then the selected function or mode is set or updated and then performed. Otherwise, the next sequential function or mode of modes 1-7 is performed.
FIG. 8B is a flow chart illustrating the logic steps performed for learning and recognizing the universal face model and the individual face. The sequential operations begin by setting a memory space address ADDR to the starting address with N-found set to zero. In the learning mode, an identified feature value from the B-Sub image is set to a corresponding bit position, starting with feature position i=0 and repeated for each feature position to i=356. The corresponding bit position B bit of ADDR + A byte is determined by the particular feature value S, where S is between 0 and 127, A equals an integer value S/8 and B equals S mod 8 or the residue of S after A bytes. For example, a feature value S=114 from the B-Sub image for the feature position i=0 is set to the 2nd bit of ADDR + 14 byte.
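The byte/bit arithmetic just described can be sketched directly. This is a sketch of the storing step of FIG. 8B under the running example (L=7, 16 bytes per feature position); the function names are illustrative, not the patent's:

```python
def bit_location(s):
    """Map a feature value S (0..127) to (byte offset A, bit B):
    A = S // 8, B = S mod 8, as described for FIG. 8B."""
    return s // 8, s % 8

def store_feature(space, i, s, bytes_per_position=16):
    """Set the bit for feature value s of feature position i
    inside that position's 16-byte slot of the signature space."""
    a, b = bit_location(s)
    space[i * bytes_per_position + a] |= 1 << b

space = bytearray(357 * 16)        # one pattern image signature, 5712 bytes
store_feature(space, 0, 114)       # the text's example: S=114 -> byte 14, bit 2
```

Running this sets exactly one bit, bit 2 of byte 14, reproducing the S=114 example above.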
An individual audience member face image may be learned multiple times (R) with R possible different extracted signatures resulting, depending on any changed facial expressions or various profiles of the audience member. Each of the extracted feature signatures is sequentially stored within the corresponding pattern image signature memory space for the particular audience member by repeating the sequential signal processing of FIG. 6 and the learning or storing steps of FIG. 8B.
Otherwise, if not in the learning mode, then the sequential steps for the recognition mode are performed, such as at the recognition for UFM block 100 when the search area B-Sub image is compared with the universal pattern image signature, or at the block 104 when the search area B-Sub image is compared with individual pattern image signatures.
In the recognition mode, the identified feature value from the B-Sub image is compared to a corresponding bit position, starting with feature position i=0 and repeated for each feature position to i=356. If the corresponding bit position is set, a match is indicated and the N-found value is incremented by one. Otherwise, if the corresponding bit position is not set, nonmatching is indicated and the N-found value is not changed. The next incremental feature position is then compared to the corresponding bit position for the identified feature value.
After the last feature position i=356 has been identified and compared to identify a match, the resulting N-found value is compared with a threshold value.
If the resulting N-found value is less than the threshold value, then a FALSE or no recognition for the UFM or the particular IFL is indicated. If the resulting N-found value is greater than or equal to the threshold value, then a TRUE or a recognition of the UFM or the particular IFL is indicated.
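The N-found counting and threshold test just described can be sketched as below; the learning loop that first populates the memory space mirrors the storing step of FIG. 8B, and the toy signature and threshold are illustrative assumptions:

```python
T, BPP = 357, 16                       # feature positions, bytes per position

def n_found(signature, space):
    """Count feature positions whose value's bit is set in a stored
    signature memory space (the recognition loop of FIG. 8B)."""
    hits = 0
    for i, s in enumerate(signature):
        if space[i * BPP + s // 8] & (1 << (s % 8)):
            hits += 1
    return hits

def recognize(signature, space, threshold):
    """TRUE when N-found meets or exceeds the threshold value."""
    return n_found(signature, space) >= threshold

# Learn one made-up signature, then recognize it.
sig = [(3 * i) % 128 for i in range(T)]
space = bytearray(T * BPP)
for i, s in enumerate(sig):            # learning mode: set each bit
    space[i * BPP + s // 8] |= 1 << (s % 8)
```

A signature extracted from the same face matches all 357 positions; an unrelated signature sets few or none of the stored bits and falls below the threshold.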
FIG. 8C is a flow chart illustrating the operational function or mode 1 logical steps performed to add to the individual pattern image signatures and universal pattern image signature memory space or library. The sequential steps begin with a get and display a picture subroutine illustrated in FIG. 8D. Next a search all libraries subroutine illustrated in FIG. 8E is performed.
The results are displayed and added to the library.
The get and display a picture subroutine of FIG. 8D starts with an image acquisition step (block 74 of FIG. 6). The infrared video image is processed (blocks 76, 78, 80 and 82 of FIG. 6) to provide a binary picture (B-Sub image). A ratio of the ones in the resulting binary picture is calculated and the resulting binary picture is displayed.
In FIG. 8E, the search all libraries subroutine begins with a check of the exposure time based on the calculated ratio of ones and, if adjustment is required, the sequential operations return without searching the libraries. Otherwise, if adjustment of the exposure time is not required, then an initial MAX value is set for the predetermined N-found value. A first library is searched (block 104 of FIG. 7 and FIG. 8B) and if the resulting N-found value is greater than the initial MAX value, then the MAX value is updated. Otherwise the MAX value is not changed. Then a next library is searched and the result is compared to the resulting MAX value and adjusted, until all the libraries have been searched.
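The running-MAX loop of the search all libraries subroutine can be sketched as follows; the match callback and the toy score table are assumptions standing in for the per-library N-found comparison of FIG. 8B:

```python
def search_all_libraries(signature, libraries, match):
    """Search every individual face library, keeping a running MAX of
    the match score, and return the best library with its score.
    `match` stands in for the N-found comparison of FIG. 8B."""
    best_lib, best_max = None, -1
    for lib in libraries:
        score = match(signature, lib)
        if score > best_max:           # update MAX only on improvement
            best_lib, best_max = lib, score
    return best_lib, best_max

# Toy per-library scores standing in for N-found results (assumption).
scores = {"member_a": 210, "member_b": 340, "member_c": 120}
best, nf = search_all_libraries(None, scores, lambda s, lib: scores[lib])
```

Here the loop returns the library with the highest score, mirroring the conclusion block that picks the best-matching individual face.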
FIG. 8F is a flow chart illustrating the operational function or mode 2 logical steps performed to verify and add to the library. The sequential steps begin with the get and display the picture subroutine illustrated in FIG. 8D. Next the search all libraries subroutine illustrated in FIG. 8E is performed. The results are displayed and added to the identified correct library.
FIG. 8G is a flow chart illustrating the operational function or mode 3 logical steps performed to search and display. The sequential steps begin with the get and display the picture subroutine illustrated in FIG. 8D.
Next the search all libraries subroutine illustrated in FIG. 8E is performed. The results are displayed.
FIG. 8H is a flow chart illustrating the operational function or mode 4 logical steps performed to locate head and search. The sequential steps begin with a search raw picture for heads subroutine illustrated in FIG. 8I. Next a locate and search head(s) subroutine illustrated in FIG. 8J is performed.
In FIG. 8I, the search raw picture for head(s) subroutine begins with a check of the exposure time and, if adjustment is required, the sequential operations return without performing any searching for heads. Otherwise, if adjustment of the exposure time is not required, then an initial MAX value is set for the predetermined N-found value and a search area pointer i is reset. The first search area matrix is identified and compared with the universal pattern image signature UFM (block 100 of FIG. 7). The result is compared with the set correlation threshold MAX value, and if the result is greater than the initial MAX value, then that search area pointer is saved and the MAX value is updated. Otherwise, the search area pointer is not saved and the MAX value is not changed. Then the search area pointer value is updated and the next search area matrix is identified and the sequential steps are repeated until the total raw picture has been searched.
FIG. 8J illustrates the locate and search head(s) subroutine performed by the control command processor subsystem 14 in the mode 4. If one search area pointer is stored in the subroutine of FIG. 8I, then the search area window is set to the identified search area matrix by the saved pointer value which corresponds to the head image portion. The exposure time is adjusted and the search all libraries subroutine of FIG. 8E is performed and the results are displayed.
Otherwise, if more than one pointer value is stored in the subroutine of FIG. 8I, then the MAX value is reset to a predetermined initial value. Then the search area window is set to the first identified search area matrix by the first saved pointer value which corresponds to a first head image portion. A local normalization is performed on the search area matrix data and the search all libraries subroutine of FIG. 8E is performed, and if the result is greater than the initial MAX value, then the MAX value is updated. Otherwise the MAX value is not changed. Then a next search area window is set to the next saved pointer value which corresponds to a next head image portion and the sequential steps are repeated until all the head image portions have been searched. Then the search area window is set to the identified search area matrix having the highest MAX value which corresponds to the head image portion. A local normalization is performed on the search area matrix data and the search all libraries subroutine of FIG. 8E is performed and the results are displayed.
FIG. 8K is a flow chart illustrating the operational function or mode 5 logical steps performed to scan and search the monitored area. The sequential steps begin with scanning of the monitored area. Then the video camera 28 is pointed to audience members within the monitored area and the mode 4 operations of FIG. 8H are performed.
FIG. 8L is a flow chart illustrating the operational function or mode 6 logical steps performed to shift and learn. The sequential steps begin with the get and display the picture subroutine illustrated in FIG. 8D. Next the search all libraries subroutine illustrated in FIG. 8E is performed. The results are displayed and if all positions have been learned, then the sequential operation returns without adding to the library. Otherwise, the audience member image is shifted to the left one position and added to the pattern image signature IFL and universal pattern image signature UFM. Then the audience member image is moved up one position and sequentially repeated until all positions have been learned and added to the library.
FIG. 8M is a flow chart illustrating the operational function or mode 7 logical steps performed to search and pause. The sequential steps begin with the search raw picture for heads subroutine illustrated in FIG. 8I. Next the locate and search head(s) subroutine illustrated in FIG. 8J is performed. Then if a continue decision is yes, the sequential mode 7 steps are repeated.
Although the present invention has been described in connection with details of the preferred embodiment, many alterations and modifications may be made without departing from the invention. Accordingly, it is intended that all such alterations and modifications be considered as within the spirit and scope of the invention as defined in the appended claims.
The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
Claims (38)
1. An image recognition system for identifying a predetermined pattern of a plurality of predetermined patterns in a video image comprising:
means for storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the plurality of predetermined patterns;
means for storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures;
means for sequentially comparing a predefined series of portions of the video image with said universal pattern image signature and for identifying matching video image portions; and means for comparing each of said identified matching video image portions with said stored pattern image signatures to identify the predetermined pattern.
2. An image recognition system as recited in claim 1 wherein said means for storing a plurality of pattern image signatures includes a distinct memory space having a predetermined capacity defined for each of said pattern image signatures.
3. An image recognition system as recited in claim 2 wherein said means for storing a universal pattern image signature includes a distinct memory space having said predetermined capacity.
4. An image recognition system as recited in claim 1 further comprises means for generating said plurality of pattern image signatures.
5. An image recognition system as recited in claim 4 wherein said pattern image signature generating means includes:
means for capturing a video image of the predetermined pattern;
means for processing said captured video image to provide a digitized image signal; and means for extracting a pattern signature from said digitized image signal.
6. An image recognition system as recited in claim 5 wherein said digitized image signal comprises a digitized gray level image.
7. An image recognition system as recited in claim 5 wherein said digitized image signal comprises a thresholded binary image.
8. An image recognition system as recited in claim 5 wherein said pattern signature extracting means comprises:
means for defining a plurality of predefined feature positions from said digitized image signal;
means for identifying a feature value for each of said plurality of predefined feature positions;
and means for determining a memory location corresponding to said identified feature value for each of a plurality of predefined feature positions.
9. An image recognition system as recited in claim 8 wherein said predefined feature positions include a predetermined number L of pixels from said digitized image signal, each pixel is represented by a single binary digit.
10. An image recognition system as recited in claim 9 wherein said digitized image signal includes mxn binary digits and wherein said plurality of feature positions equals (mxn)/L.
11. An image recognition system as recited in claim 10 wherein said plurality of feature positions is defined by a pseudo random predetermined sequence of said mxn binary digits.
12. An image recognition system as recited in claim 9 wherein said feature value equals a value between 0 and (2^L-1).
13. An image recognition system as recited in claim 9 wherein a predetermined number equal to 2^L of said memory locations are defined for each of said plurality of feature positions.
14. A method of identifying a predetermined pattern of a plurality of predetermined patterns in a video image comprising the steps of:
storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the plurality of predetermined patterns;
storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures;
sequentially comparing a predefined series of portions of the video image with said universal pattern image signature and identifying matching video image portions; and comparing each of said identified matching video image portions with said stored pattern image signatures to identify the predetermined pattern.
15. A method as recited in claim 14 wherein said step of sequentially comparing a predefined series of portions of the video image with said universal pattern image signature and identifying matching video image portions includes the steps of:
processing said video image to provide a digitized image signal; and sequentially extracting and comparing a pattern signature from said predefined series of portions of said digitized image signal with said universal pattern image signature.
16. A method as recited in claim 15 wherein said step of sequentially extracting and comparing a pattern signature includes the steps of: identifying a feature value for each of a plurality of predefined feature positions from each of said digitized image signal portions;
identifying and comparing a memory location corresponding to said identified feature value for each of said plurality of predefined feature positions with said universal pattern image signature;
calculating the number of matching memory locations with said universal pattern image signature; and identifying matching video image portions responsive to said calculated number greater than a predetermined threshold value.
17. A method as recited in claim 16 wherein said step of comparing each of said identified matching video image portions with said stored pattern image signatures includes the steps of:
extracting a pattern signature from said identified matching video image portions; and sequentially comparing said extracted pattern signature with each of said pattern image signatures.
18. A method as recited in claim 17 further comprising the step of identifying a matching value of said compared signatures and identifying matching signatures for said identified matching value greater than a predetermined threshold value.
19. A method as recited in claim 17 further comprising the step of identifying a highest matching value of said identified matching signatures to identify the predetermined pattern.
20. An image recognition system for identifying predetermined individual members of a viewing audience in a monitored area:
memory means for storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the predetermined individual members;
means for storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures;
audience scanning means for locating individual members in the monitored area;
means for capturing a video image of each said located individual member in the monitored area;
means for sequentially comparing a predefined series of portions of each said captured video image with said universal pattern image signature and for identifying matching video image portions; and means for comparing each of said identified matching video image portions with said stored pattern image signatures to identify at least one of the predetermined individual members.
21. An image recognition system as recited in claim 20 wherein said audience scanning means for locating the individual members includes an infrared detector for providing a temperature representative signal of the monitored area.
22. An image recognition system as recited in claim 21 wherein said temperature representative signal is processed to provide a direction signal.
23. An image recognition system as recited in claim 20 wherein said video image capturing means includes an infrared video camera for providing a video image signal.
24. An image recognition system as recited in claim 20 wherein said comparing means includes:
means for processing said captured video image to provide a digitized image signal; and means for extracting a pattern signature from said digitized image signal.
25. An image recognition system as recited in claim 24 wherein said digitized image signal comprises a digitized gray level image.
26. An image recognition system as recited in claim 24 wherein said digitized image signal comprises a thresholded binary image.
27. An image recognition system as recited in claim 24 wherein said pattern signature extracting means includes:
means for defining a plurality of predefined feature positions from said digitized image signal;
means for identifying a feature value for each of said plurality of predefined feature positions; and means for identifying a memory location corresponding to said identified feature value for each of said plurality of predefined feature positions.
28. An image recognition system as recited in claim 27 wherein said comparing means further includes:
means for calculating a number of matching memory locations with said universal pattern image signature; and means for identifying a matching video image portion responsive to said calculated number greater than a predetermined threshold value.
29. An image recognition system as recited in claim 27 wherein said comparing means further includes:
means for calculating a number of matching memory locations with each of said pattern image signatures; and means for identifying a match value responsive to said calculated number greater than a predetermined threshold value.
30. An image recognition system as recited in claim 29 wherein said comparing means further includes:
means for identifying a highest matching value to identify the predetermined audience member.
31. A method of identifying predetermined individual members of a viewing audience in a monitored area:
storing a plurality of pattern image signatures, each of said pattern image signatures corresponding to one of the predetermined individual members;
storing a universal pattern image signature, said universal pattern image signature corresponding to a composite signature of each of said pattern image signatures;
scanning the monitored area and generating a temperature representative signal of the monitored area;
processing said generated temperature representative signal and providing individual members direction signal corresponding to located individual members; capturing a video image of each located individual member responsive to said individual members direction signal in the monitored area;
sequentially comparing a predefined series of portions of the captured video image with said universal pattern image signature and identifying matching video image portions; and comparing each of said identified matching video image portions with said stored pattern image signatures to identify at least one of the predetermined individual members.
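The two-stage search of claim 31 can be illustrated with a minimal sketch. This is not the patent's implementation: signatures are modeled here as Python sets of "memory locations", the universal signature is formed as the union (composite) of the individual signatures, and the names `make_universal`, `identify`, and `extract` are illustrative assumptions.

```python
# Hypothetical sketch of the two-stage matching in claim 31.
# A signature is a set of memory locations; the universal pattern image
# signature is the composite (set union) of all member signatures.

def make_universal(signatures):
    """Composite signature: union of every member's pattern signature."""
    universal = set()
    for sig in signatures.values():
        universal |= sig
    return universal

def identify(image_portions, extract, signatures, threshold):
    """Screen each image portion against the universal signature first,
    then compare surviving portions with each stored member signature."""
    universal = make_universal(signatures)
    best_member, best_score = None, 0
    for portion in image_portions:
        candidate = extract(portion)
        # Stage 1: cheap screen against the composite signature.
        if len(candidate & universal) <= threshold:
            continue
        # Stage 2: compare the matching portion with each member signature.
        for member, sig in signatures.items():
            score = len(candidate & sig)
            if score > threshold and score > best_score:
                best_member, best_score = member, score
    return best_member
```

The point of the composite signature is that a portion matching no member at all is rejected by a single comparison, instead of one comparison per member.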
32. A method as recited in claim 31 wherein said step of sequentially comparing a predefined series of portions of the captured video image with said universal pattern image signature and identifying matching video image portions includes the steps of:
processing said captured video image to provide a digitized image signal; and sequentially extracting and comparing a pattern signature from said predefined series of portions of said digitized image signal with said universal pattern image signature.
33. A method as recited in claim 32 wherein said step of sequentially extracting and comparing a pattern signature includes the steps of:
identifying a feature value for each of a plurality of predefined feature positions from each of said digitized image signal portions;
identifying and comparing a memory location corresponding to said identified feature value for each of said plurality of predefined feature positions with said universal pattern image signature;
calculating the number of matching memory locations with said universal pattern image signature; and identifying matching video image portions responsive to said calculated number greater than a predetermined threshold value.
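The steps of claim 33 amount to mapping each (feature position, feature value) pair to a memory location and counting how many of those locations are set in the universal signature. A hedged sketch follows; the quantization constant `VALUES_PER_POSITION` and the block layout in `memory_location` are assumptions for illustration, not details taken from the patent.

```python
# Illustrative sketch of claim 33: derive a feature value for each
# predefined feature position, map the (position, value) pair to a
# memory location, and count how many of those locations are set in
# the universal pattern image signature.

VALUES_PER_POSITION = 16  # assumed quantization of feature values

def memory_location(position, value):
    # Each feature position owns a contiguous block of locations,
    # one location per possible feature value.
    return position * VALUES_PER_POSITION + value

def match_count(feature_values, universal_bits):
    """Number of feature-derived memory locations set in the signature."""
    return sum(
        universal_bits[memory_location(pos, val)]
        for pos, val in enumerate(feature_values)
    )

def is_matching_portion(feature_values, universal_bits, threshold):
    # A portion matches when the count exceeds the threshold (claim 33).
    return match_count(feature_values, universal_bits) > threshold
```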
34. A method as recited in claim 33 wherein said step of comparing each of said identified matching video image portions with said stored pattern image signatures includes the steps of:
extracting a pattern signature from said identified matching video image portions; and sequentially comparing said extracted pattern signature with each of said pattern image signatures.
35. A method as recited in claim 34 further comprising the step of identifying a matching value of said compared signatures and identifying matching signatures for said identified matching value greater than a predetermined threshold value.
36. A method as recited in claim 35 further comprising the step of identifying a highest matching value of said identified matching signatures to identify the predetermined individual members.
37. A method as recited in claim 31 wherein said steps of storing said plurality of pattern image signatures and said universal pattern image signature include the steps of:
providing a plurality of distinct pattern image signature memory spaces for storing each of said pattern image signatures;
providing a universal pattern image memory space for storing said universal pattern image signature;
capturing a video image of the predetermined individual members;
processing said captured video image to provide a digitized image signal;
identifying a feature value from said digitized image signal for each of a plurality of predefined feature positions;
identifying a memory location corresponding to said identified feature value for each of a plurality of predefined feature positions; and storing a binary digit one in said identified memory locations in the pattern image signature memory space corresponding to the predetermined individual members and in the universal pattern image memory space.
38. A method as recited in claim 37 further comprising capturing at least one subsequent video image of the predetermined individual members and sequentially repeating the processing, identifying and storing steps for each captured video image.
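The enrollment procedure of claims 37 and 38 can be sketched as follows: each member has a dedicated bit array (a "pattern image signature memory space") alongside one shared universal space, and every captured image writes a binary one at each feature-derived memory location in both spaces. The sketch below is a simplified assumption-laden illustration; `SPACE_SIZE`, `enroll`, and `locate` are names invented here, not taken from the patent.

```python
# Hedged sketch of claims 37-38: per-member signature memory spaces plus
# one universal space. Repeated captures of the same member simply OR
# additional bits into both spaces.

SPACE_SIZE = 64  # assumed size of each signature memory space

def enroll(member_spaces, universal_space, member, captures, locate):
    """Update one member's pattern image signature and the universal
    signature from a sequence of captured (digitized) images."""
    space = member_spaces.setdefault(member, [0] * SPACE_SIZE)
    for image in captures:
        for loc in locate(image):       # memory locations for this image
            space[loc] = 1              # member's pattern image signature
            universal_space[loc] = 1    # composite (universal) signature
    return space
```

Processing several captures per member (claim 38) broadens each signature, so later recognition tolerates variation in pose and lighting between images.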
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US244,492 | 1988-09-14 | ||
US07/244,492 US5031228A (en) | 1988-09-14 | 1988-09-14 | Image recognition system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CA1325480C true CA1325480C (en) | 1993-12-21 |
Family
ID=22922996
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA000604168A Expired - Fee Related CA1325480C (en) | 1988-09-14 | 1989-06-28 | Image recognition system and method |
Country Status (8)
Country | Link |
---|---|
US (1) | US5031228A (en) |
EP (1) | EP0358910B1 (en) |
JP (1) | JPH02121070A (en) |
AT (1) | ATE111663T1 (en) |
AU (1) | AU2854189A (en) |
CA (1) | CA1325480C (en) |
DE (1) | DE68918209T2 (en) |
ES (1) | ES2058415T3 (en) |
Families Citing this family (201)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5164992A (en) * | 1990-11-01 | 1992-11-17 | Massachusetts Institute Of Technology | Face recognition system |
AU1928392A (en) * | 1991-03-29 | 1992-11-02 | Csx Transportation, Inc. | Device to uniquely recognize travelling containers |
US6400996B1 (en) | 1999-02-01 | 2002-06-04 | Steven M. Hoffberg | Adaptive pattern recognition based control system and method |
US8352400B2 (en) | 1991-12-23 | 2013-01-08 | Hoffberg Steven M | Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore |
USRE47908E1 (en) | 1991-12-23 | 2020-03-17 | Blanding Hovenweep, Llc | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
US5903454A (en) * | 1991-12-23 | 1999-05-11 | Hoffberg; Linda Irene | Human-factored interface corporating adaptive pattern recognition based controller apparatus |
USRE46310E1 (en) | 1991-12-23 | 2017-02-14 | Blanding Hovenweep, Llc | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
US6418424B1 (en) | 1991-12-23 | 2002-07-09 | Steven M. Hoffberg | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
US6081750A (en) * | 1991-12-23 | 2000-06-27 | Hoffberg; Steven Mark | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
US6850252B1 (en) | 1999-10-05 | 2005-02-01 | Steven M. Hoffberg | Intelligent electronic appliance system and method |
USRE48056E1 (en) | 1991-12-23 | 2020-06-16 | Blanding Hovenweep, Llc | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
US10361802B1 (en) | 1999-02-01 | 2019-07-23 | Blanding Hovenweep, Llc | Adaptive pattern recognition based control system and method |
US5331544A (en) * | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
US5432864A (en) * | 1992-10-05 | 1995-07-11 | Daozheng Lu | Identification card verification system |
US5550928A (en) * | 1992-12-15 | 1996-08-27 | A.C. Nielsen Company | Audience measurement system and method |
DE4312185A1 (en) * | 1993-04-14 | 1994-10-20 | Thomas Hohenacker | Device for storing image information displayed on a monitor |
US5481294A (en) * | 1993-10-27 | 1996-01-02 | A. C. Nielsen Company | Audience measurement system utilizing ancillary codes and passive signatures |
US5841978A (en) * | 1993-11-18 | 1998-11-24 | Digimarc Corporation | Network linking method using steganographically embedded data objects |
US5497314A (en) * | 1994-03-07 | 1996-03-05 | Novak; Jeffrey M. | Automated apparatus and method for object recognition at checkout counters |
JP3974946B2 (en) * | 1994-04-08 | 2007-09-12 | オリンパス株式会社 | Image classification device |
US6560349B1 (en) | 1994-10-21 | 2003-05-06 | Digimarc Corporation | Audio monitoring using steganographic information |
JPH08138053A (en) * | 1994-11-08 | 1996-05-31 | Canon Inc | Subject imformation processor and remote control device |
DE69633524T2 (en) * | 1995-04-12 | 2005-03-03 | Matsushita Electric Industrial Co., Ltd., Kadoma | Method and device for object detection |
US6760463B2 (en) | 1995-05-08 | 2004-07-06 | Digimarc Corporation | Watermarking methods and media |
US7224819B2 (en) | 1995-05-08 | 2007-05-29 | Digimarc Corporation | Integrating digital watermarks in multimedia content |
US7805500B2 (en) | 1995-05-08 | 2010-09-28 | Digimarc Corporation | Network linking methods and apparatus |
US6577746B1 (en) | 1999-12-28 | 2003-06-10 | Digimarc Corporation | Watermark-based object linking and embedding |
US6965682B1 (en) * | 1999-05-19 | 2005-11-15 | Digimarc Corp | Data transmission by watermark proxy |
US7711564B2 (en) | 1995-07-27 | 2010-05-04 | Digimarc Corporation | Connected audio and other media objects |
US6505160B1 (en) * | 1995-07-27 | 2003-01-07 | Digimarc Corporation | Connected audio and other media objects |
US6829368B2 (en) | 2000-01-26 | 2004-12-07 | Digimarc Corporation | Establishing and interacting with on-line media collections using identifiers in media signals |
US6411725B1 (en) | 1995-07-27 | 2002-06-25 | Digimarc Corporation | Watermark enabled video objects |
US7562392B1 (en) | 1999-05-19 | 2009-07-14 | Digimarc Corporation | Methods of interacting with audio and ambient music |
US6647548B1 (en) * | 1996-09-06 | 2003-11-11 | Nielsen Media Research, Inc. | Coded/non-coded program audience measurement system |
US5933502A (en) * | 1996-12-20 | 1999-08-03 | Intel Corporation | Method and apparatus for enhancing the integrity of visual authentication |
US6111517A (en) * | 1996-12-30 | 2000-08-29 | Visionics Corporation | Continuous video monitoring using face recognition for access control |
DE19708240C2 (en) * | 1997-02-28 | 1999-10-14 | Siemens Ag | Arrangement and method for detecting an object in a region illuminated by waves in the invisible spectral range |
DE19728099A1 (en) * | 1997-07-02 | 1999-01-07 | Klaus Dr Ing Schulze | Method and device for recognizing unique image sequences |
AU1613599A (en) * | 1997-12-01 | 1999-06-16 | Arsev H. Eraslan | Three-dimensional face identification system |
US5940118A (en) * | 1997-12-22 | 1999-08-17 | Nortel Networks Corporation | System and method for steering directional microphones |
US7689532B1 (en) | 2000-07-20 | 2010-03-30 | Digimarc Corporation | Using embedded data with file sharing |
US7134130B1 (en) | 1998-12-15 | 2006-11-07 | Gateway Inc. | Apparatus and method for user-based control of television content |
US7062073B1 (en) | 1999-01-19 | 2006-06-13 | Tumey David M | Animated toy utilizing artificial intelligence and facial image recognition |
US7966078B2 (en) | 1999-02-01 | 2011-06-21 | Steven Hoffberg | Network media appliance system and method |
US7039221B1 (en) | 1999-04-09 | 2006-05-02 | Tumey David M | Facial image verification utilizing smart-card with integrated video camera |
US7302574B2 (en) | 1999-05-19 | 2007-11-27 | Digimarc Corporation | Content identifiers triggering corresponding responses through collaborative processing |
US7565294B2 (en) | 1999-05-19 | 2009-07-21 | Digimarc Corporation | Methods and systems employing digital content |
US6519607B1 (en) * | 1999-10-28 | 2003-02-11 | Hewlett-Packard Company | Image driven operating system |
US8121843B2 (en) | 2000-05-02 | 2012-02-21 | Digimarc Corporation | Fingerprint methods and systems for media signals |
AU2002232817A1 (en) | 2000-12-21 | 2002-07-01 | Digimarc Corporation | Methods, apparatus and programs for generating and utilizing content signatures |
US7046819B2 (en) | 2001-04-25 | 2006-05-16 | Digimarc Corporation | Encoded reference signal for digital watermarks |
US6735329B2 (en) | 2001-05-18 | 2004-05-11 | Leonard S. Schultz | Methods and apparatus for image recognition and dictation |
US20020194586A1 (en) * | 2001-06-15 | 2002-12-19 | Srinivas Gutta | Method and system and article of manufacture for multi-user profile generation |
US20030002646A1 (en) * | 2001-06-27 | 2003-01-02 | Philips Electronics North America Corp. | Intelligent phone router |
US7113916B1 (en) * | 2001-09-07 | 2006-09-26 | Hill Daniel A | Method of facial coding monitoring for the purpose of gauging the impact and appeal of commercially-related stimuli |
US20030226968A1 (en) * | 2002-06-10 | 2003-12-11 | Steve Montellese | Apparatus and method for inputting data |
US7395062B1 (en) | 2002-09-13 | 2008-07-01 | Nielson Media Research, Inc. A Delaware Corporation | Remote sensing system |
US8154581B2 (en) | 2002-10-15 | 2012-04-10 | Revolutionary Concepts, Inc. | Audio-video communication system for receiving person at entrance |
US8139098B2 (en) * | 2002-10-15 | 2012-03-20 | Revolutionary Concepts, Inc. | Video communication method for receiving person at entrance |
US8144183B2 (en) * | 2002-10-15 | 2012-03-27 | Revolutionary Concepts, Inc. | Two-way audio-video communication method for receiving person at entrance |
US6862253B2 (en) * | 2002-10-23 | 2005-03-01 | Robert L. Blosser | Sonic identification system and method |
CA2509644A1 (en) * | 2002-12-11 | 2004-06-24 | Nielsen Media Research, Inc. | Detecting a composition of an audience |
US7203338B2 (en) * | 2002-12-11 | 2007-04-10 | Nielsen Media Research, Inc. | Methods and apparatus to count people appearing in an image |
US8235725B1 (en) * | 2005-02-20 | 2012-08-07 | Sensory Logic, Inc. | Computerized method of assessing consumer reaction to a business stimulus employing facial coding |
US7631324B2 (en) * | 2005-06-08 | 2009-12-08 | The Nielsen Company (Us), Llc | Methods and apparatus for indirect illumination in electronic media rating systems |
US8311294B2 (en) | 2009-09-08 | 2012-11-13 | Facedouble, Inc. | Image classification and information retrieval over wireless digital networks and the internet |
US7450740B2 (en) | 2005-09-28 | 2008-11-11 | Facedouble, Inc. | Image classification and information retrieval over wireless digital networks and the internet |
US7587070B2 (en) * | 2005-09-28 | 2009-09-08 | Facedouble, Inc. | Image classification and information retrieval over wireless digital networks and the internet |
US8600174B2 (en) | 2005-09-28 | 2013-12-03 | Facedouble, Inc. | Method and system for attaching a metatag to a digital image |
US7599527B2 (en) * | 2005-09-28 | 2009-10-06 | Facedouble, Inc. | Digital image search system and method |
US8369570B2 (en) | 2005-09-28 | 2013-02-05 | Facedouble, Inc. | Method and system for tagging an image of an individual in a plurality of photos |
JP4749139B2 (en) * | 2005-12-05 | 2011-08-17 | 株式会社日立製作所 | Dangerous video detection method, video difference detection method and apparatus |
US9015740B2 (en) | 2005-12-12 | 2015-04-21 | The Nielsen Company (Us), Llc | Systems and methods to wirelessly meter audio/visual devices |
US7930199B1 (en) | 2006-07-21 | 2011-04-19 | Sensory Logic, Inc. | Method and report assessing consumer reaction to a stimulus by matching eye position with facial coding |
US7826464B2 (en) * | 2007-01-10 | 2010-11-02 | Mikhail Fedorov | Communication system |
KR101378372B1 (en) * | 2007-07-12 | 2014-03-27 | 삼성전자주식회사 | Digital image processing apparatus, method for controlling the same, and recording medium storing program to implement the method |
US20100162285A1 (en) * | 2007-09-11 | 2010-06-24 | Yossef Gerard Cohen | Presence Detector and Method for Estimating an Audience |
US8108055B2 (en) * | 2007-12-28 | 2012-01-31 | Larry Wong | Method, system and apparatus for controlling an electrical device |
CA2711143C (en) * | 2007-12-31 | 2015-12-08 | Ray Ganong | Method, system, and computer program for identification and sharing of digital images with face signatures |
US9721148B2 (en) | 2007-12-31 | 2017-08-01 | Applied Recognition Inc. | Face detection and recognition |
US20090278683A1 (en) * | 2008-05-11 | 2009-11-12 | Revolutionary Concepts, Inc. | Systems, methods, and apparatus for metal detection, viewing, and communications |
US20090284578A1 (en) * | 2008-05-11 | 2009-11-19 | Revolutionary Concepts, Inc. | Real estate communications and monitoring systems and methods for use by real estate agents |
US10380603B2 (en) * | 2008-05-31 | 2019-08-13 | International Business Machines Corporation | Assessing personality and mood characteristics of a customer to enhance customer satisfaction and improve chances of a sale |
US8219438B1 (en) * | 2008-06-30 | 2012-07-10 | Videomining Corporation | Method and system for measuring shopper response to products based on behavior and facial expression |
US8411963B2 (en) * | 2008-08-08 | 2013-04-02 | The Nielsen Company (U.S.), Llc | Methods and apparatus to count persons in a monitored environment |
US9124769B2 (en) | 2008-10-31 | 2015-09-01 | The Nielsen Company (Us), Llc | Methods and apparatus to verify presentation of media content |
US8487772B1 (en) | 2008-12-14 | 2013-07-16 | Brian William Higgins | System and method for communicating information |
JP2010157119A (en) * | 2008-12-26 | 2010-07-15 | Fujitsu Ltd | Monitoring device, monitoring method, and monitoring program |
US8049643B2 (en) | 2009-02-04 | 2011-11-01 | Pete Ness | Vehicle tracking system for vehicle washing |
US20100223144A1 (en) * | 2009-02-27 | 2010-09-02 | The Go Daddy Group, Inc. | Systems for generating online advertisements offering dynamic content relevant domain names for registration |
US9195898B2 (en) | 2009-04-14 | 2015-11-24 | Qualcomm Incorporated | Systems and methods for image recognition using mobile devices |
US8600100B2 (en) * | 2009-04-16 | 2013-12-03 | Sensory Logic, Inc. | Method of assessing people's self-presentation and actions to evaluate personality type, behavioral tendencies, credibility, motivations and other insights through facial muscle activity and expressions |
US8538093B2 (en) * | 2009-04-20 | 2013-09-17 | Mark Kodesh | Method and apparatus for encouraging social networking through employment of facial feature comparison and matching |
US8351712B2 (en) | 2009-04-27 | 2013-01-08 | The Neilsen Company (US), LLC | Methods and apparatus to perform image classification based on pseudorandom features |
US20100325253A1 (en) * | 2009-06-18 | 2010-12-23 | The Go Daddy Group, Inc. | Generating and registering screen name-based domain names |
US20100325128A1 (en) * | 2009-06-18 | 2010-12-23 | The Go Daddy Group, Inc. | Generating and registering domain name-based screen names |
US8326002B2 (en) * | 2009-08-13 | 2012-12-04 | Sensory Logic, Inc. | Methods of facial coding scoring for optimally identifying consumers' responses to arrive at effective, incisive, actionable conclusions |
US8312364B2 (en) | 2009-09-17 | 2012-11-13 | Go Daddy Operating Company, LLC | Social website domain registration announcement and search engine feed |
US8276057B2 (en) | 2009-09-17 | 2012-09-25 | Go Daddy Operating Company, LLC | Announcing a domain name registration on a social website |
GB2474508B (en) * | 2009-10-16 | 2015-12-09 | Norwell Sa | Audience measurement system |
JP2011124819A (en) * | 2009-12-11 | 2011-06-23 | Sanyo Electric Co Ltd | Electronic camera |
JP2011130043A (en) * | 2009-12-16 | 2011-06-30 | Sanyo Electric Co Ltd | Electronic camera |
US11067405B2 (en) | 2010-06-07 | 2021-07-20 | Affectiva, Inc. | Cognitive state vehicle navigation based on image processing |
US11017250B2 (en) | 2010-06-07 | 2021-05-25 | Affectiva, Inc. | Vehicle manipulation using convolutional image processing |
US10204625B2 (en) | 2010-06-07 | 2019-02-12 | Affectiva, Inc. | Audio analysis learning using video data |
US9503786B2 (en) | 2010-06-07 | 2016-11-22 | Affectiva, Inc. | Video recommendation using affect |
US10474875B2 (en) | 2010-06-07 | 2019-11-12 | Affectiva, Inc. | Image analysis using a semiconductor processor for facial evaluation |
US9934425B2 (en) | 2010-06-07 | 2018-04-03 | Affectiva, Inc. | Collection of affect data from multiple mobile devices |
US11073899B2 (en) | 2010-06-07 | 2021-07-27 | Affectiva, Inc. | Multidevice multimodal emotion services monitoring |
US10108852B2 (en) | 2010-06-07 | 2018-10-23 | Affectiva, Inc. | Facial analysis to detect asymmetric expressions |
US9204836B2 (en) | 2010-06-07 | 2015-12-08 | Affectiva, Inc. | Sporadic collection of mobile affect data |
US11700420B2 (en) | 2010-06-07 | 2023-07-11 | Affectiva, Inc. | Media manipulation using cognitive state metric analysis |
US10111611B2 (en) | 2010-06-07 | 2018-10-30 | Affectiva, Inc. | Personal emotional profile generation |
US10614289B2 (en) | 2010-06-07 | 2020-04-07 | Affectiva, Inc. | Facial tracking with classifiers |
US9642536B2 (en) | 2010-06-07 | 2017-05-09 | Affectiva, Inc. | Mental state analysis using heart rate collection based on video imagery |
US10628741B2 (en) | 2010-06-07 | 2020-04-21 | Affectiva, Inc. | Multimodal machine learning for emotion metrics |
US10482333B1 (en) | 2017-01-04 | 2019-11-19 | Affectiva, Inc. | Mental state analysis using blink rate within vehicles |
US11587357B2 (en) | 2010-06-07 | 2023-02-21 | Affectiva, Inc. | Vehicular cognitive data collection with multiple devices |
US10911829B2 (en) | 2010-06-07 | 2021-02-02 | Affectiva, Inc. | Vehicle video recommendation via affect |
US11430260B2 (en) | 2010-06-07 | 2022-08-30 | Affectiva, Inc. | Electronic display viewing verification |
US10627817B2 (en) | 2010-06-07 | 2020-04-21 | Affectiva, Inc. | Vehicle manipulation using occupant image analysis |
US11056225B2 (en) | 2010-06-07 | 2021-07-06 | Affectiva, Inc. | Analytics for livestreaming based on image analysis within a shared digital environment |
US10592757B2 (en) | 2010-06-07 | 2020-03-17 | Affectiva, Inc. | Vehicular cognitive data collection using multiple devices |
US11657288B2 (en) | 2010-06-07 | 2023-05-23 | Affectiva, Inc. | Convolutional computing using multilayered analysis engine |
US9723992B2 (en) | 2010-06-07 | 2017-08-08 | Affectiva, Inc. | Mental state analysis using blink rate |
US11511757B2 (en) | 2010-06-07 | 2022-11-29 | Affectiva, Inc. | Vehicle manipulation with crowdsourcing |
US11318949B2 (en) | 2010-06-07 | 2022-05-03 | Affectiva, Inc. | In-vehicle drowsiness analysis using blink rate |
US10517521B2 (en) | 2010-06-07 | 2019-12-31 | Affectiva, Inc. | Mental state mood analysis using heart rate collection based on video imagery |
US10779761B2 (en) | 2010-06-07 | 2020-09-22 | Affectiva, Inc. | Sporadic collection of affect data within a vehicle |
US11823055B2 (en) | 2019-03-31 | 2023-11-21 | Affectiva, Inc. | Vehicular in-cabin sensing using machine learning |
US10799168B2 (en) | 2010-06-07 | 2020-10-13 | Affectiva, Inc. | Individual data sharing across a social network |
US11151610B2 (en) | 2010-06-07 | 2021-10-19 | Affectiva, Inc. | Autonomous vehicle control using heart rate collection based on video imagery |
US11410438B2 (en) | 2010-06-07 | 2022-08-09 | Affectiva, Inc. | Image analysis using a semiconductor processor for facial evaluation in vehicles |
US11465640B2 (en) | 2010-06-07 | 2022-10-11 | Affectiva, Inc. | Directed control transfer for autonomous vehicles |
US9646046B2 (en) | 2010-06-07 | 2017-05-09 | Affectiva, Inc. | Mental state data tagging for data collected from multiple sources |
US11430561B2 (en) | 2010-06-07 | 2022-08-30 | Affectiva, Inc. | Remote computing analysis for cognitive state data metrics |
US10897650B2 (en) | 2010-06-07 | 2021-01-19 | Affectiva, Inc. | Vehicle content recommendation using cognitive states |
US10143414B2 (en) | 2010-06-07 | 2018-12-04 | Affectiva, Inc. | Sporadic collection with mobile affect data |
US11292477B2 (en) | 2010-06-07 | 2022-04-05 | Affectiva, Inc. | Vehicle manipulation using cognitive state engineering |
US11704574B2 (en) | 2010-06-07 | 2023-07-18 | Affectiva, Inc. | Multimodal machine learning for vehicle manipulation |
US9247903B2 (en) | 2010-06-07 | 2016-02-02 | Affectiva, Inc. | Using affect within a gaming context |
US10074024B2 (en) | 2010-06-07 | 2018-09-11 | Affectiva, Inc. | Mental state analysis using blink rate for vehicles |
US11887352B2 (en) | 2010-06-07 | 2024-01-30 | Affectiva, Inc. | Live streaming analytics within a shared digital environment |
US10843078B2 (en) | 2010-06-07 | 2020-11-24 | Affectiva, Inc. | Affect usage within a gaming context |
US11484685B2 (en) | 2010-06-07 | 2022-11-01 | Affectiva, Inc. | Robotic control using profiles |
US10289898B2 (en) | 2010-06-07 | 2019-05-14 | Affectiva, Inc. | Video recommendation via affect |
US10796176B2 (en) | 2010-06-07 | 2020-10-06 | Affectiva, Inc. | Personal emotional profile generation for vehicle manipulation |
US11232290B2 (en) | 2010-06-07 | 2022-01-25 | Affectiva, Inc. | Image analysis using sub-sectional component evaluation to augment classifier usage |
US11393133B2 (en) | 2010-06-07 | 2022-07-19 | Affectiva, Inc. | Emoji manipulation using machine learning |
US9959549B2 (en) | 2010-06-07 | 2018-05-01 | Affectiva, Inc. | Mental state analysis for norm generation |
US10869626B2 (en) | 2010-06-07 | 2020-12-22 | Affectiva, Inc. | Image analysis for emotional metric evaluation |
US10922567B2 (en) | 2010-06-07 | 2021-02-16 | Affectiva, Inc. | Cognitive state based vehicle manipulation using near-infrared image processing |
US10401860B2 (en) | 2010-06-07 | 2019-09-03 | Affectiva, Inc. | Image analysis for two-sided data hub |
US11935281B2 (en) | 2010-06-07 | 2024-03-19 | Affectiva, Inc. | Vehicular in-cabin facial tracking using machine learning |
US8565500B2 (en) * | 2010-06-14 | 2013-10-22 | Siemens Medical Solutions Usa, Inc. | Automatic patient and device recognition and association system |
US20120116252A1 (en) * | 2010-10-13 | 2012-05-10 | The Regents Of The University Of Colorado, A Body Corporate | Systems and methods for detecting body orientation or posture |
WO2012158234A2 (en) | 2011-02-27 | 2012-11-22 | Affectiva, Inc. | Video recommendation based on affect |
US20130127620A1 (en) | 2011-06-20 | 2013-05-23 | Cerner Innovation, Inc. | Management of patient fall risk |
US9489820B1 (en) | 2011-07-12 | 2016-11-08 | Cerner Innovation, Inc. | Method for determining whether an individual leaves a prescribed virtual perimeter |
US10546481B2 (en) | 2011-07-12 | 2020-01-28 | Cerner Innovation, Inc. | Method for determining whether an individual leaves a prescribed virtual perimeter |
US9741227B1 (en) | 2011-07-12 | 2017-08-22 | Cerner Innovation, Inc. | Method and process for determining whether an individual suffers a fall requiring assistance |
US8548207B2 (en) | 2011-08-15 | 2013-10-01 | Daon Holdings Limited | Method of host-directed illumination and system for conducting host-directed illumination |
US8620088B2 (en) | 2011-08-31 | 2013-12-31 | The Nielsen Company (Us), Llc | Methods and apparatus to count people in images |
US8809788B2 (en) * | 2011-10-26 | 2014-08-19 | Redwood Systems, Inc. | Rotating sensor for occupancy detection |
US9202105B1 (en) | 2012-01-13 | 2015-12-01 | Amazon Technologies, Inc. | Image analysis for user authentication |
US20130226655A1 (en) * | 2012-02-29 | 2013-08-29 | BVI Networks, Inc. | Method and system for statistical analysis of customer movement and integration with other data |
US8769557B1 (en) | 2012-12-27 | 2014-07-01 | The Nielsen Company (Us), Llc | Methods and apparatus to determine engagement levels of audience members |
US10096223B1 (en) | 2013-12-18 | 2018-10-09 | Cerner Innovication, Inc. | Method and process for determining whether an individual suffers a fall requiring assistance |
US10078956B1 (en) | 2014-01-17 | 2018-09-18 | Cerner Innovation, Inc. | Method and system for determining whether an individual takes appropriate measures to prevent the spread of healthcare-associated infections |
US9729833B1 (en) | 2014-01-17 | 2017-08-08 | Cerner Innovation, Inc. | Method and system for determining whether an individual takes appropriate measures to prevent the spread of healthcare-associated infections along with centralized monitoring |
US10225522B1 (en) | 2014-01-17 | 2019-03-05 | Cerner Innovation, Inc. | Method and system for determining whether an individual takes appropriate measures to prevent the spread of healthcare-associated infections |
US10614204B2 (en) | 2014-08-28 | 2020-04-07 | Facetec, Inc. | Facial recognition authentication system including path parameters |
CA2902093C (en) | 2014-08-28 | 2023-03-07 | Kevin Alan Tussy | Facial recognition authentication system including path parameters |
US10698995B2 (en) | 2014-08-28 | 2020-06-30 | Facetec, Inc. | Method to verify identity using a previously collected biometric image/data |
US11256792B2 (en) | 2014-08-28 | 2022-02-22 | Facetec, Inc. | Method and apparatus for creation and use of digital identification |
US10915618B2 (en) | 2014-08-28 | 2021-02-09 | Facetec, Inc. | Method to add remotely collected biometric images / templates to a database record of personal information |
US10803160B2 (en) | 2014-08-28 | 2020-10-13 | Facetec, Inc. | Method to verify and identify blockchain with user question data |
US10090068B2 (en) | 2014-12-23 | 2018-10-02 | Cerner Innovation, Inc. | Method and system for determining whether a monitored individual's hand(s) have entered a virtual safety zone |
US10524722B2 (en) | 2014-12-26 | 2020-01-07 | Cerner Innovation, Inc. | Method and system for determining whether a caregiver takes appropriate measures to prevent patient bedsores |
CN105809655B (en) * | 2014-12-30 | 2021-06-29 | 清华大学 | Vehicle inspection method and system |
KR102306538B1 (en) * | 2015-01-20 | 2021-09-29 | 삼성전자주식회사 | Apparatus and method for editing content |
US10091463B1 (en) | 2015-02-16 | 2018-10-02 | Cerner Innovation, Inc. | Method for determining whether an individual enters a prescribed virtual zone using 3D blob detection |
US10342478B2 (en) | 2015-05-07 | 2019-07-09 | Cerner Innovation, Inc. | Method and system for determining whether a caretaker takes appropriate measures to prevent patient bedsores |
US9892611B1 (en) | 2015-06-01 | 2018-02-13 | Cerner Innovation, Inc. | Method for determining whether an individual enters a prescribed virtual zone using skeletal tracking and 3D blob detection |
GB2540562B (en) * | 2015-07-21 | 2019-09-04 | Advanced Risc Mach Ltd | Method of and apparatus for generating a signature representative of the content of an array of data |
US10614288B2 (en) | 2015-12-31 | 2020-04-07 | Cerner Innovation, Inc. | Methods and systems for detecting stroke symptoms |
US10439556B2 (en) * | 2016-04-20 | 2019-10-08 | Microchip Technology Incorporated | Hybrid RC/crystal oscillator |
USD987653S1 (en) | 2016-04-26 | 2023-05-30 | Facetec, Inc. | Display screen or portion thereof with graphical user interface |
US10147184B2 (en) | 2016-12-30 | 2018-12-04 | Cerner Innovation, Inc. | Seizure detection |
US10922566B2 (en) | 2017-05-09 | 2021-02-16 | Affectiva, Inc. | Cognitive state evaluation for vehicle navigation |
US20190172458A1 (en) | 2017-12-01 | 2019-06-06 | Affectiva, Inc. | Speech analysis for cross-language mental state identification |
US10810427B1 (en) | 2017-12-15 | 2020-10-20 | AI Incorporated | Methods for an autonomous robotic device to identify locations captured in an image |
US10643446B2 (en) | 2017-12-28 | 2020-05-05 | Cerner Innovation, Inc. | Utilizing artificial intelligence to detect objects or patient safety events in a patient room |
US10482321B2 (en) | 2017-12-29 | 2019-11-19 | Cerner Innovation, Inc. | Methods and systems for identifying the crossing of a virtual barrier |
US10922936B2 (en) | 2018-11-06 | 2021-02-16 | Cerner Innovation, Inc. | Methods and systems for detecting prohibited objects |
US11887383B2 (en) | 2019-03-31 | 2024-01-30 | Affectiva, Inc. | Vehicle interior object management |
US11560784B2 (en) | 2019-06-11 | 2023-01-24 | Noven, Inc. | Automated beam pump diagnostics using surface dynacard |
US11769056B2 (en) | 2019-12-30 | 2023-09-26 | Affectiva, Inc. | Synthetic data for neural network training using vectors |
US11711638B2 (en) | 2020-06-29 | 2023-07-25 | The Nielsen Company (Us), Llc | Audience monitoring systems and related methods |
US11860704B2 (en) | 2021-08-16 | 2024-01-02 | The Nielsen Company (Us), Llc | Methods and apparatus to determine user presence |
US11758223B2 (en) | 2021-12-23 | 2023-09-12 | The Nielsen Company (Us), Llc | Apparatus, systems, and methods for user presence detection for audience monitoring |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS57157378A (en) * | 1981-03-25 | 1982-09-28 | Hitachi Ltd | Setting method of binary-coded threshold level |
US4606069A (en) * | 1983-06-10 | 1986-08-12 | At&T Bell Laboratories | Apparatus and method for compression of facsimile information by pattern matching |
US4672678A (en) * | 1984-06-25 | 1987-06-09 | Fujitsu Limited | Pattern recognition apparatus |
US4611347A (en) * | 1984-09-24 | 1986-09-09 | At&T Bell Laboratories | Video recognition system |
US4739398A (en) * | 1986-05-02 | 1988-04-19 | Control Data Corporation | Method, apparatus and system for recognizing broadcast segments |
US4754487A (en) * | 1986-05-27 | 1988-06-28 | Image Recall Systems, Inc. | Picture storage and retrieval system for various limited storage mediums |
CA1305235C (en) * | 1986-10-31 | 1992-07-14 | Bruce A. Reynolds | Cutoff control system |
- 1988
  - 1988-09-14: US application US07/244,492, granted as US5031228A, status: Expired - Lifetime
- 1989
  - 1989-01-16: AU application AU28541/89A, published as AU2854189A, status: Abandoned
  - 1989-06-28: CA application CA000604168A, granted as CA1325480C, status: Expired - Fee Related
  - 1989-07-28: DE application DE68918209T, published as DE68918209T2, status: Expired - Fee Related
  - 1989-07-28: AT application AT89113971T, published as ATE111663T1, status: IP Right Cessation
  - 1989-07-28: ES application ES89113971T, published as ES2058415T3, status: Expired - Lifetime
  - 1989-07-28: EP application EP89113971A, granted as EP0358910B1, status: Expired - Lifetime
  - 1989-09-14: JP application JP1239813A, published as JPH02121070A, status: Pending
Also Published As
Publication number | Publication date |
---|---|
DE68918209D1 (en) | 1994-10-20 |
EP0358910B1 (en) | 1994-09-14 |
ATE111663T1 (en) | 1994-09-15 |
JPH02121070A (en) | 1990-05-08 |
US5031228A (en) | 1991-07-09 |
EP0358910A2 (en) | 1990-03-21 |
DE68918209T2 (en) | 1995-02-02 |
EP0358910A3 (en) | 1991-10-09 |
AU2854189A (en) | 1990-03-22 |
ES2058415T3 (en) | 1994-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA1325480C (en) | Image recognition system and method | |
CA1314621C (en) | Image recognition audience measurement system and method | |
US20100183227A1 (en) | Person detecting apparatus and method and privacy protection system employing the same | |
US4047154A (en) | Operator interactive pattern processing system | |
US5550928A (en) | Audience measurement system and method | |
US5729252A (en) | Multimedia program editing system and method | |
US7120278B2 (en) | Person recognition apparatus | |
CN100465985C (en) | Human eye detecting method, apparatus, system and storage medium |
WO1991006921A1 (en) | Dynamic method for recognizing objects and image processing system therefor | |
CN102542249A (en) | Face recognition in video content | |
CN105184823B (en) | The evaluation method for the moving object detection algorithm performance that view-based access control model perceives | |
US4371865A (en) | Method for analyzing stored image details | |
US20070127788A1 (en) | Image processing device, method, and program | |
KR101366776B1 (en) | Video object detection apparatus and method thereof | |
CN112312215B (en) | Startup content recommendation method based on user identification, smart television and storage medium | |
US4468807A (en) | Method for analyzing stored image details | |
US20020067856A1 (en) | Image recognition apparatus, image recognition method, and recording medium | |
JPH0723012A (en) | Audience rating survey system | |
CN101536513A (en) | Methods and apparatus for detecting on-screen media sources | |
CN114387548A (en) | Video and liveness detection method, system, device, storage medium and program product | |
US4922093A (en) | Method and a device for determining the number of people present in a determined space by processing the grey levels of points in an image | |
US6526167B1 (en) | Image processing apparatus and method and provision medium | |
US4242734A (en) | Image corner detector using Haar coefficients | |
RU2295152C1 (en) | Method for recognizing face of an individual on basis of video image | |
EP2577565A1 (en) | System and method for identifying a user through an object held in a hand |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MKLA | Lapsed | ||
MKLA | Lapsed |
Effective date: 20051221 |