CN102360421B - Face identification method and system based on video streaming - Google Patents

Publication number: CN102360421B
Authority: CN (China)
Prior art keywords: facial image, identified, face, crucial, image
Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis): Expired - Fee Related
Application number: CN201110316170.7A
Other languages: Chinese (zh)
Other versions: CN102360421A (en)
Inventor: 徐汀荣 (Xu Tingrong)
Current Assignee: Suzhou University
Original Assignee: Suzhou University
Application filed by Suzhou University
Priority to CN201110316170.7A
Publication of CN102360421A
Application granted; publication of CN102360421B
Legal status: Expired - Fee Related (anticipated expiration)

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method and system based on a video stream. The method comprises the steps of: receiving a video stream to be identified, captured by a video acquisition device; performing face detection on each frame of the video stream to determine the face images to be identified; locating the feature points of each frame of face image to be identified; determining the key face images among the face images to be identified; determining the key feature points among the feature points of each frame of key face image; pre-processing the determined key face images; and determining the recognition result for the video stream according to the weighted similarities between the key feature points of each frame of key face image and the corresponding feature points of each face in a face image database. In this scheme, color-histogram-based key frame detection, key feature point selection, and similarity weighting eliminate the recognition deviations caused by the video acquisition environment, so that face recognition is carried out quickly and effectively.

Description

Face recognition method and system based on a video stream
Technical field
The present invention relates to the field of simulation technology, and in particular to a face recognition method and system based on a video stream.
Background technology
Face recognition is a biometric identification technology that identifies a person from facial feature information. Compared with other mature biometric methods such as fingerprint or DNA identification, it is non-intrusive and unobtrusive, supports strong interaction, and is convenient for subsequent tracking; it has therefore been a research hotspot for many years. Face recognition collects images or video streams containing faces with devices such as cameras or video cameras, automatically detects and tracks the faces in the images, and then applies a series of related techniques to the detected faces, including face image acquisition, face location, recognition pre-processing, storage, and comparison, so as to identify different people.
Face recognition based on a video stream takes a face video as input and uses a still-image face database for identification or verification. A video stream usually contains more information: face images in multiple frames; the temporal and spatial continuity of the same person across frames; the possibility of estimating the three-dimensional face structure, of recovering high-resolution images from low-resolution frames through motion, and of preventing the spoofing attacks possible with still images.
Therefore, face recognition based on a video stream can exploit more of the information in the video and is more advantageous than traditional recognition based on still images; it has attracted the attention of researchers in areas such as criminal identification, immigration management, and domestic robotics.
Summary of the invention
To solve the above technical problem, embodiments of the present invention provide a face recognition method and system based on a video stream. The technical scheme is as follows:
A face recognition method based on a video stream, comprising:
receiving a video stream to be identified, captured by a video acquisition device;
performing face detection on each frame of the video stream to be identified, to determine the face images to be identified;
locating the feature points of each frame of face image to be identified;
determining the key face images among the face images to be identified according to the color histograms of the face images to be identified;
determining the key feature points among the feature points of each frame of key face image;
pre-processing the determined key face images, to reduce the influence of geometric features and illumination on the key face images;
determining the recognition result for the video stream to be identified according to the weighted similarities between the key feature points of each frame of key face image and the corresponding feature points of each face model in a face database.
Accordingly, the present invention also provides a face recognition system based on a video stream, comprising:
a video receiving module, for receiving a video stream to be identified captured by a video acquisition device;
a face detection module, for performing face detection on each frame of the video stream to be identified, to determine the face images to be identified;
a feature point location module, for locating the feature points of each frame of face image to be identified;
a key frame determination module, for determining the key face images among the face images to be identified according to the color histograms of the face images to be identified;
a key point determination module, for determining the key feature points among the feature points of each frame of key face image;
a pre-processing module, for pre-processing the determined key face images, to reduce the influence of geometric features and illumination on the key face images;
a result determination module, for determining the recognition result for the video stream to be identified according to the weighted similarities between the key feature points of each frame of key face image and the corresponding feature points of each face model in a face database.
In the technical scheme provided by the embodiments of the present invention, after face detection and feature point location are performed on the video stream captured by the video acquisition device, the color histograms of the face images are used to determine the key face images. The key face images are then pre-processed to reduce the influence of geometric features and illumination, and the key feature points are determined. Finally, the recognition result is determined from the weighted similarities between the key feature points and the corresponding points of each face model in the face database. In this scheme, color-histogram-based key frame detection, key feature point selection, and similarity weighting eliminate the recognition deviations caused by the video acquisition environment, so that face recognition is carried out quickly and effectively.
Accompanying drawing explanation
To describe the technical schemes of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings used in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a first flow chart of a face recognition method based on a video stream provided by an embodiment of the present invention;
Fig. 2 is a second flow chart of a face recognition method based on a video stream provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the vertical integral projection of a face in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the horizontal integral projection of a face in an embodiment of the present invention;
Fig. 5 is a second schematic diagram of the vertical integral projection of a face in an embodiment of the present invention;
Fig. 6 is a schematic diagram of image rotation in an embodiment of the present invention;
Fig. 7 is a schematic histogram before equalization of a key face image in an embodiment of the present invention;
Fig. 8 is a schematic histogram after equalization of a key face image in an embodiment of the present invention;
Fig. 9 is a third flow chart of a face recognition method based on a video stream provided by an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a face recognition system based on a video stream provided by an embodiment of the present invention.
Embodiment
The embodiments of the present invention provide a face recognition method and system based on a video stream, so as to perform face recognition quickly and effectively on video streams captured in general application scenarios. A face recognition method based on a video stream provided by an embodiment of the present invention is first introduced below.
A face recognition method based on a video stream, comprising:
receiving a video stream to be identified, captured by a video acquisition device;
performing face detection on each frame of the video stream to be identified, to determine the face images to be identified;
locating the feature points of each frame of face image to be identified;
determining the key face images among the face images to be identified according to the color histograms of the face images to be identified;
determining the key feature points among the feature points of each frame of key face image;
pre-processing the determined key face images, to reduce the influence of geometric features and illumination on the key face images;
determining the recognition result for the video stream to be identified according to the weighted similarities between the key feature points of each frame of key face image and the corresponding feature points of each face model in a face database.
In the technical scheme provided by the embodiments of the present invention, after face detection and feature point location are performed on the video stream captured by the video acquisition device, the color histograms of the face images are used to determine the key face images. The key face images are then pre-processed to reduce the influence of geometric features and illumination, and the key feature points are determined. Finally, the recognition result is determined from the weighted similarities between the key feature points and the corresponding points of each face model in the face database. In this scheme, color-histogram-based key frame detection, key feature point selection, and similarity weighting eliminate the recognition deviations caused by the video acquisition environment, so that face recognition is carried out quickly and effectively.
The technical schemes in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Figure 1, a face recognition method based on a video stream may comprise:
S101, receiving a video stream to be identified, captured by a video acquisition device;
Video acquisition devices generally include video cameras, video recorders, laser-disc video players, and the like. These devices are installed at specific locations in the regions where video acquisition is needed. When the captured video stream needs to be analyzed, the device is connected to the corresponding analysis and processing equipment.
S102, performing face detection on each frame of the video stream to be identified, to determine the face images to be identified;
A frame of the received video stream may contain no face at all, a face occupying too small a proportion of the frame, several unrelated faces, or an incomplete face. In other words, not all the information in every frame of the video stream is usable, so face detection must be performed on each frame, and the frames that contain a complete face of sufficient proportion are extracted for the subsequent recognition processing.
As shown in Figure 2, face detection on each frame of the video stream to be identified may specifically comprise:
S201, performing grayscale conversion and histogram equalization on each frame of the video stream to be identified;
S202, performing multi-scale, multi-feature face detection on each processed frame;
S203, performing multi-feature fusion on the detection results of each scale of each frame;
S204, performing multi-scale fusion on the multi-feature fusion results of each frame;
S205, determining the multi-scale fusion results that satisfy a preset face scale threshold as face images to be identified.
In the above process, after grayscale conversion and histogram equalization, multi-scale, multi-feature face detection is performed on each frame to obtain detection results for different scales and different features. "Multi-scale, multi-feature" means that each frame is examined at several scales, and at each scale the detection is carried out with several different features. After the multi-scale, multi-feature detection, the results are fused across features and scales to eliminate deviations between the different detections, yielding one fused result per frame. Clearly, if a fused result contains too little face information, the frame is unsuitable for recognition; therefore only the fused results that satisfy a preset face scale threshold are determined as face images to be identified. The face scale threshold can be set according to the actual situation and is not limited here.
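The fusion and filtering of steps S203-S205 can be sketched as follows. The patent does not specify the fusion rule; this sketch assumes rectangular detections given as (x, y, w, h) tuples, groups overlapping boxes by an intersection-over-union threshold, averages each group, and applies the face scale threshold. All function names and threshold values are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (aw * ah + bw * bh - inter)

def merge_detections(boxes, iou_thresh=0.5, min_size=20):
    """Fuse overlapping detections from different scales/features (S203-S204)
    by averaging each overlapping group, then keep only the fused boxes that
    satisfy the preset face scale threshold (S205)."""
    merged, used = [], [False] * len(boxes)
    for i, b in enumerate(boxes):
        if used[i]:
            continue
        group, used[i] = [b], True
        for j in range(i + 1, len(boxes)):
            if not used[j] and iou(b, boxes[j]) > iou_thresh:
                group.append(boxes[j])
                used[j] = True
        m = tuple(sum(v) / len(group) for v in zip(*group))
        if m[2] >= min_size and m[3] >= min_size:  # face scale threshold
            merged.append(m)
    return merged
```

Two near-coincident detections of the same face are averaged into one box, while a tiny spurious detection is discarded by the scale threshold.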
S103, locating the feature points of each frame of face image to be identified;
A face is composed of parts such as the eyes, nose, mouth and chin, and it is precisely the differences in the shape, size and arrangement of these parts that make each face distinct. Accurately locating these organs is therefore a vital step in the whole recognition process. Since the eyes are a comparatively decisive feature in the middle of the face, once the eyes are located accurately, the other organs, such as the eyebrows, mouth and nose, can be located fairly accurately from their typical spatial relationships.
In this embodiment, the feature points of the face images to be identified can be located from the peaks and troughs produced under different integral projection modes. Integral projection is divided into vertical projection and horizontal projection. Let f(x, y) denote the gray value of the image at (x, y); over the region [y_1, y_2] × [x_1, x_2], the horizontal integral projection M_h(y) and the vertical integral projection M_v(x) are expressed as:

M_h(y) = (1 / (x_2 − x_1)) · Σ_{x = x_1}^{x_2} f(x, y)    (1)

M_v(x) = (1 / (y_2 − y_1)) · Σ_{y = y_1}^{y_2} f(x, y)    (2)

That is, the horizontal integral projection accumulates the gray values of all the pixels in a row, and the vertical integral projection accumulates the gray values of all the pixels in a column.
Before feature point location, in order to obtain projection curves with clear gray-level changes, the contrast of the image can be increased, for example by binarizing it with an adaptive threshold method before computing the integral projections.
For example, Fig. 3(a), Fig. 3(b) and Fig. 3(c) show vertical integral projections of a face image. From the projection curves it can be seen that the gray level changes sharply at the left and right boundaries of the face, and these boundaries correspond to the two larger troughs on either side of the curve. Locating these two trough points x_1 and x_2 and cutting out the image over the region [x_1, x_2] on the horizontal axis realizes the location of the left and right boundaries of the face image to be identified. Horizontal and vertical integral projections are then computed on the binarized face image after boundary location; the results are shown in Fig. 4 and Fig. 5 respectively. Further, from prior knowledge of face images, the eyebrows and eyes are the nearest dark regions in the face image and correspond to the first two minima of the horizontal integral projection curve. As shown in Fig. 4, the first minimum corresponds to the position of the eyebrows on the vertical axis, denoted y_brow; the second to the eyes, denoted y_eye; the third to the nose, denoted y_nose; and the fourth to the mouth, denoted y_mouth. Similarly, as shown in Fig. 5, there are two minima on either side of the central symmetry axis of the face image, corresponding to the positions of the left and right eyes on the horizontal axis, denoted x_left-eye and x_right-eye; the eyebrows share the same horizontal positions as the eyes, and the horizontal position of the mouth and nose is (x_left-eye + x_right-eye)/2.
It should be understood that the feature points of the face image to be identified may also be located in ways other than the one described in this embodiment.
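The integral projections of equations (1) and (2) and the trough-based boundary search can be sketched as below. Searching for one trough in each half of the curve is a simplifying assumption, not the patent's exact procedure; the function names are illustrative.

```python
import numpy as np

def horizontal_projection(img):
    """M_h(y): mean gray value of each row, Eq. (1)."""
    return img.astype(float).mean(axis=1)

def vertical_projection(img):
    """M_v(x): mean gray value of each column, Eq. (2)."""
    return img.astype(float).mean(axis=0)

def face_boundaries(img):
    """Locate the left/right face boundaries as the deepest trough of the
    vertical projection in each half of the curve (a simplification of the
    patent's trough search)."""
    v = vertical_projection(img)
    mid = len(v) // 2
    x1 = int(np.argmin(v[:mid]))
    x2 = mid + int(np.argmin(v[mid:]))
    return x1, x2
```

On a synthetic bright image with two dark columns, the two columns are recovered as the boundary troughs.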
S104, determining the key face images among the face images to be identified according to their color histograms;
In the video to be identified, the same person may appear in many frames, and many people may appear in one frame. Because of pose, illumination changes, frontal versus profile views, scale changes and expression changes, image quality varies from frame to frame. To improve the accuracy and efficiency of recognition, a number of good-quality images must be filtered out for subsequent recognition; each such frame is determined to be a key face image. Under normal circumstances the video contains many frames usable for recognition, and processing every face image would inevitably hurt the running efficiency and real-time performance of the system. Moreover, a face video contains a large number of redundant frames, including frames with excessive rotation or uneven illumination; feature location on such frames is very difficult or even fails, which directly lowers the recognition rate of the whole system.
In this embodiment, the key frame images among the face images to be identified are determined according to the color histograms of the face images to be identified.
A color histogram reflects the probability with which each pixel color occurs in an image; it is an estimate of the pixel color distribution. Given a digital image I, its color histogram vector H can be expressed as:

H = (h[c_1], h[c_2], …, h[c_k], …, h[c_N]),  Σ_{k=1}^{N} h[c_k] = 1,  0 ≤ h[c_k] ≤ 1    (3)

where h[c_k] is the probability that the k-th color occurs in the image:

h[c_k] = ( Σ_{i=0}^{N_1} Σ_{j=0}^{N_2} [I(i, j) = c_k] ) / (N_1 · N_2)    (4)

Here N_1 and N_2 denote the numbers of rows and columns of the image, I(i, j) denotes the pixel value at point (i, j), and [·] equals 1 when its condition holds and 0 otherwise.
The color histogram reflects the global color information of an image, and every image has a corresponding color histogram. Let G and H be the color histogram vectors of the two images to be compared, N the number of color levels in the images, and g_k, h_k the frequencies of the k-th color level in the histograms of images G and H respectively. The similarity of the two images can then be expressed by the Euclidean distance between their color histograms:

d(G, H) = ( Σ_{k=1}^{N} (g_k − h_k)^2 )^{1/2}    (5)
As a very important visual feature, the color histogram can effectively distinguish objects of different structure, size and shape, and color comparison is both simple and fast, which matters particularly for systems with real-time requirements. On this basis, the present invention proposes a color-histogram-based key frame extraction algorithm for face video: before feature extraction and recognition, key frame detection is performed first, so that redundant frames are not processed and system performance is greatly improved. The detailed process is as follows:
select a good-quality frame of the face video sequence as the standard face image, and compute its color histogram H as the standard face color histogram;
obtain the face images to be identified in the face video sequence frame by frame, compute their color histograms G_i, compare them with the standard color histogram, and compute the Euclidean distance by formula (5);
set a distance threshold T (0 ≤ T ≤ 1); every face image to be identified whose Euclidean distance is less than T is taken as a key face image, and every face image whose distance is greater than the threshold is determined to be non-key. T controls the quality and quantity of the selected key face images.
It should be understood that filtering key face images out of the face images to be identified before recognition effectively improves the processing efficiency of the system and makes it more practical.
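The key frame selection just described can be sketched as follows for grayscale frames. The bin count, the default threshold T, and the function names are illustrative assumptions; the patent leaves the histogram quantization unspecified.

```python
import numpy as np

def color_histogram(img, bins=16):
    """Normalized histogram vector H of Eq. (3): entries sum to 1."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def select_key_frames(frames, standard, T=0.1, bins=16):
    """Keep the frames whose histogram lies within Euclidean distance T
    of the standard face histogram, per Eq. (5)."""
    H = color_histogram(standard, bins)
    keys = []
    for frame in frames:
        G = color_histogram(frame, bins)
        if np.sqrt(((G - H) ** 2).sum()) < T:
            keys.append(frame)
    return keys
```

A frame close in gray distribution to the standard face passes the threshold; a frame with a very different distribution is rejected as non-key.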
S105, determining the key feature points among the feature points of each frame of key face image;
It should be understood that the feature points of different facial parts differ in their value for recognition; that is, not all feature points have high discriminative value. For example, experimental results show that feature points at positions such as the eyebrows and mouth have stronger recognition capability than positions such as the eyes, nose and contour. Therefore, the key feature points among the feature points of each frame of key face image must be determined in order to perform effective recognition.
S106, pre-processing the determined key face images, to reduce the influence of geometric features and illumination on the key face images;
The determined key frames are first given a geometric normalization pre-processing; that is, the face image undergoes translation, rotation and filtering, to reduce the recognition deviations introduced by image geometry and to facilitate feature extraction and recognition. The geometric normalization of a determined key face image can proceed as follows:
Suppose the positions of the two eyes in the key face image are E_l(x_l, y_l) and E_r(x_r, y_r), the distance between the eye centers is d, and the angle between the line joining the two eyes and the x axis is θ, the angle through which to rotate:

d = ((x_r − x_l)^2 + (y_r − y_l)^2)^{1/2}    (6)

θ = arctan((y_r − y_l) / (x_r − x_l))    (7)

According to prior knowledge of facial proportions, the center of the face C(x_c, y_c) is:

x_c = (x_l + x_r)/2 + d · sin(θ)
y_c = (y_l + y_r)/2 + d · cos(θ)    (8)

With C as the center and θ as the rotation angle, the face is rotated; the rotated face coordinates (x′, y′) are computed as:

x′ = x_c + cos θ · (x − x_c) + sin θ · (y − y_c)
y′ = y_c − sin θ · (x − x_c) + cos θ · (y − y_c)    (9)
Fig. 6 shows a schematic diagram of the key frame image rotation.
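Equations (6), (7) and (9) can be sketched as a point-wise alignment, assuming eye coordinates are known from step S103; the function names are illustrative. Rotating by the eye-line angle θ makes the line joining the eyes horizontal.

```python
import math

def alignment_params(left_eye, right_eye):
    """Eye distance d, Eq. (6), and rotation angle theta, Eq. (7)."""
    xl, yl = left_eye
    xr, yr = right_eye
    d = math.hypot(xr - xl, yr - yl)
    theta = math.atan2(yr - yl, xr - xl)
    return d, theta

def rotate_point(p, center, theta):
    """Rotate one coordinate about the face center, Eq. (9)."""
    x, y = p
    xc, yc = center
    xp = xc + math.cos(theta) * (x - xc) + math.sin(theta) * (y - yc)
    yp = yc - math.sin(theta) * (x - xc) + math.cos(theta) * (y - yc)
    return xp, yp
```

For eyes at (0, 0) and (10, 10), θ is 45° and rotating the right eye about the left maps it onto the horizontal axis at distance d.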
It should be understood that the rotated images may differ in size, so the face images must also be normalized to a uniform size. Size normalization is usually realized by scaling, and the concrete process can be as follows:
transform the pixel coordinates, mapping the pixels of the input image to the output image. Let R_x be the scaling factor along the x axis of the image and R_y the scaling factor along the y axis; the transformation matrix of the image scaling is:

[x′, y′, 1]^T = [[R_x, 0, 0], [0, R_y, 0], [0, 0, 1]] · [x, y, 1]^T    (10)
When the image is enlarged, a pixel of the output image may have no corresponding pixel in the source image, so interpolation must be performed. Common interpolation methods include nearest-neighbor, bilinear and bicubic interpolation. Nearest-neighbor interpolation takes the value of a nearby pixel; the algorithm is simple and the computation very small, but after the transformation the gray levels show obvious discontinuities. Bicubic interpolation has high accuracy but a larger computational load. Bilinear interpolation is therefore commonly used as a compromise. Suppose the points (x_0, y_0), (x_1, y_0), (x_0, y_1), (x_1, y_1) are the four vertices of a rectangular region of the image and the point (x, y) lies inside this rectangle; the gray value at (x, y) is computed as:

f(x, y_0) = f(x_0, y_0) + (x − x_0)/(x_1 − x_0) · [f(x_1, y_0) − f(x_0, y_0)]
f(x, y_1) = f(x_0, y_1) + (x − x_0)/(x_1 − x_0) · [f(x_1, y_1) − f(x_0, y_1)]    (11)
f(x, y) = f(x, y_0) + (y − y_0)/(y_1 − y_0) · [f(x, y_1) − f(x, y_0)]
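Equation (11) can be sketched directly; the argument order and name of this helper are illustrative. The gray values at the four vertices are interpolated along x on the two rows, then along y between the two intermediate results.

```python
def bilinear(f00, f10, f01, f11, x0, x1, y0, y1, x, y):
    """Eq. (11): fab is the gray value at vertex (x_a, y_b); interpolate
    along x on the rows y0 and y1, then along y between them."""
    t = (x - x0) / (x1 - x0)
    fx_y0 = f00 + t * (f10 - f00)   # f(x, y0)
    fx_y1 = f01 + t * (f11 - f01)   # f(x, y1)
    return fx_y0 + (y - y0) / (y1 - y0) * (fx_y1 - fx_y0)
```

At the center of a unit cell with corner values 0, 10, 10, 20, the interpolated value is the average 10, and at a corner the corner value itself is returned.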
Further, since illumination is one of the main factors affecting recognition results, its influence on the face image must be eliminated. In this embodiment, histogram equalization can be used as the illumination pre-processing. Its principle is to change the gray level of each pixel in the image, thereby changing the histogram of the image, so that an over-bright image is darkened or an over-dark image is brightened. The gray histogram is the statistic of the number or frequency of occurrence of each pixel gray value in an image; it is a function of the gray value and is one of the simplest and most useful tools in digital image processing. It describes not only the gray-level content of an image but also the number of pixels with each gray value. The histogram of any image contains considerable information, and some types of image can even be described completely by their histograms.
The concrete processing can be as follows:
suppose the total number of pixels in the gray image is n, and the number of pixels with a certain gray level r_k is n_k; then n_k/n is the probability of occurrence of pixels with gray level r_k, and the probabilities of all gray levels together give the gray histogram of the image. Let the variable r represent the pixel gray level in the image, normalized so that 0 ≤ r ≤ 1, where r = 0 represents black and r = 1 represents white. For a given image, the gray level of each pixel in the interval [0, 1] is random; r is a random variable, and the gray distribution of the original image can be represented by the probability density function P_r(r). The gray distribution characteristics of an image can be seen from this distribution: if the gray values of most pixels lie in the region close to 0, the whole image appears dark; otherwise, the whole image appears bright. The gray values of many images are not uniformly distributed, and images whose gray values concentrate in a small interval are very common.
The purpose of histogram equalization is to make the histogram of the processed image flat, so that every gray level has the same frequency of occurrence and a uniform probability distribution, which improves the subjective quality of the image. A transformation relation S = T(r) is therefore needed that flattens the histogram of the transformed image. The cumulative distribution function of r can be used as the transformation function:

S = T(r) = ∫_0^r P_r(w) dw    (12)

where w is the integration variable, so that T(r) is the cumulative distribution function of r. Using r_k to denote the discrete gray values and P_r(r_k) to denote P_r(r), we have:

P_r(r_k) = n_k / n,  0 ≤ r_k ≤ 1,  k = 0, 1, 2, …, L−1    (13)

where n_k is the number of pixels with gray level r_k in the image, n is the total number of pixels, and L is the number of gray levels into which the image gray values are divided. The discrete form of the transformation function is:

S_k = T(r_k) = Σ_{j=0}^{k} n_j / n = Σ_{j=0}^{k} P_r(r_j),  0 ≤ r_k ≤ 1,  k = 0, 1, 2, …, L−1    (14)
Taking an arbitrary face image as an example, Fig. 7 shows the histogram before equalization and Fig. 8 the histogram after equalization. It can be seen that the illumination intensity of the image after equalization has been compensated: the dynamic range of the gray values increases, the gray distribution is more even, and the contrast of the whole image is enhanced.
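The discrete transform of equation (14) can be sketched as below for 8-bit grayscale images; mapping S_k to the output range by multiplying with L−1 is the usual convention and an assumption here, as is the function name.

```python
import numpy as np

def equalize(img, L=256):
    """Histogram equalization via the discrete transform of Eq. (14):
    S_k is the cumulative frequency of gray levels up to r_k, scaled
    back to the [0, L-1] output range."""
    n = img.size
    hist = np.bincount(img.ravel(), minlength=L)  # n_k for each gray level
    s = hist.cumsum() / n                         # S_k = sum_{j<=k} n_j / n
    return ((L - 1) * s[img]).astype(np.uint8)
```

On an image whose pixels are half pure black and half pure white, the cumulative frequencies 0.5 and 1.0 map the two levels to roughly mid-gray and white, spreading the dynamic range.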
S107: According to the weighted similarity between the key feature points in each frame of key face image and the corresponding feature points of each face model in the face database, determine the face recognition result corresponding to the video stream to be identified.
It can be understood that, before face recognition is carried out, a corresponding face database should first be built through feature training, so as to provide a recognition standard. In the feature-training process, a deformable two-dimensional grid is used as a template to represent the face. A grid is extracted over the face region; this grid can be regarded as a two-dimensional topological graph. For each node in the topological graph, its characteristic information is computed and assigned to the node, yielding a labeled graph. Specifically, points at distinctive positions on the face image (such as the eyebrows, eyes, nose, chin, etc.) are first chosen as feature points; the feature points are then filtered with Gabor filters, and the Gabor coefficients are extracted to form feature vectors. The feature points of each face and their corresponding feature vectors are represented as a face graph and stored in the database.
For example, the data in a training database of 1000 people are the raw face feature point data of those 1000 people. Points at distinctive positions on the face image (such as the eyebrows, eyes, nose, chin, etc.) are taken as feature points; the feature points are then filtered with Gabor filters, and the Gabor coefficients are extracted to form the feature vectors constituting a face graph. The feature points of each face and their corresponding feature vectors are represented as a face graph and stored in the training database.
The step of determining the face recognition result corresponding to the video stream to be identified according to the weighted similarity between the key feature points in each frame of key face image and the corresponding feature points of each face model in the face database, as shown in Fig. 9, may specifically be:
S901: Determine the key feature value corresponding to each key feature point by wavelet transform;
Feature extraction of face information is a vital step in face recognition. Whether the extracted feature information is stable, reliable and rich will affect the final recognition rate of the system, particularly when illumination, expression, pose and other variations or interference are present in the face image. Two-dimensional wavelet transforms possess locality, orientation selectivity and frequency selectivity; they can accurately extract local features of different orientations, frequencies and scales in a face image and have a certain anti-interference capability, so they are widely used in face feature extraction. The functional form of the two-dimensional wavelet can be expressed as:
\psi_j(\vec{x}) = \frac{\|\vec{k}_j\|^2}{\sigma^2} \exp\left(-\frac{\|\vec{k}_j\|^2 \|\vec{x}\|^2}{2\sigma^2}\right) \left[\exp(i\,\vec{k}_j \cdot \vec{x}) - \exp\left(-\frac{\sigma^2}{2}\right)\right]    (15)
Where the wave vector is

\vec{k}_j = \begin{pmatrix} k_{jx} \\ k_{jy} \end{pmatrix} = \begin{pmatrix} k_v \cos\varphi_u \\ k_v \sin\varphi_u \end{pmatrix}    (16)

In the formula, \vec{x} is the image coordinate of a given position; \vec{k}_j is also called the centre frequency of the filter, and \varphi_u reflects the orientation selectivity of the filter. In natural images, the factor \|\vec{k}_j\|^2/\sigma^2 compensates for the energy-spectrum decay determined by frequency; \exp\left(-\|\vec{k}_j\|^2\|\vec{x}\|^2/(2\sigma^2)\right) is the Gaussian envelope function that constrains the plane wave; \exp(i\,\vec{k}_j \cdot \vec{x}) is a complex-valued plane wave whose real part is the cosine plane wave \cos(\vec{k}_j \cdot \vec{x}) and whose imaginary part is the sine plane wave \sin(\vec{k}_j \cdot \vec{x}); \exp(-\sigma^2/2) is called the DC compensation term, which removes the influence of the image's DC component on the two-dimensional Gabor wavelet, so that the transform is unaffected by the absolute gray value of the image and is insensitive to illumination variation. The function of the two-dimensional Gabor filter is a complex function, whose real and imaginary parts can be expressed respectively as:
R(\psi_j(\vec{x})) = \frac{\|\vec{k}_j\|^2}{\sigma^2} \exp\left(-\frac{\|\vec{k}_j\|^2 \|\vec{x}\|^2}{2\sigma^2}\right) \left[\cos(\vec{k}_j \cdot \vec{x}) - \exp\left(-\frac{\sigma^2}{2}\right)\right]    (17)
I(\psi_j(\vec{x})) = \frac{\|\vec{k}_j\|^2}{\sigma^2} \exp\left(-\frac{\|\vec{k}_j\|^2 \|\vec{x}\|^2}{2\sigma^2}\right) \sin(\vec{k}_j \cdot \vec{x})    (18)
The two-dimensional Gabor wavelet transform describes the gray features of the neighbourhood of a given point in image I; the filtering of image I can be realized by convolving the Gabor function family with the image:
J_j(\vec{x}_0) = \int I(\vec{x})\, \psi_j(\vec{x}_0 - \vec{x})\, d^2x    (19)
The two-dimensional Gabor filter is a bandpass filter with good resolution in both the spatial and frequency domains; its parameters embody its sampling pattern in those two domains and determine its ability to express a signal. A Gabor filter bank with multiple centre frequencies and orientations is usually adopted to describe an image. Different choices of the parameters k_v and \varphi_u embody the sampling patterns of the two-dimensional Gabor wavelet in frequency space and orientation space respectively, and the parameter \sigma determines the bandwidth of the filter. Lades' experiments show that for images of size 128 × 128, the best experimental results are reached when the maximum centre frequency of the filter k_max is \pi/2 and \sigma = 2\pi. Because the texture of an image is randomly distributed, the value range of \varphi_u is 0 to 2\pi; considering the symmetry of the Gabor filter, the actual value range of \varphi_u is 0 to \pi. To describe the local features of an image, the present embodiment filters the image with 40 Gabor filters composed of 5 centre frequencies and 8 orientations. The values of the parameters k_v and \varphi_u are as follows:

k_v = \frac{k_{max}}{f^v}, \qquad \varphi_u = \frac{\pi u}{8}    (20)

In the formula, f is the spacing factor limiting the distance between kernel functions in the frequency domain, usually taken as \sqrt{2}; v \in \{0, 1, 2, 3, 4\}, u \in \{0, 1, 2, 3, 4, 5, 6, 7\}, and j = u + 8v.
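The filter bank above (5 centre frequencies k_v = k_max/f^v with k_max = π/2 and f = √2, and 8 orientations φ_u = πu/8) can be sketched as follows. This is an illustrative reconstruction of the real part, equation (17), not code from the patent; the kernel size is an assumption:

```python
import numpy as np

SIGMA = 2 * np.pi   # sigma = 2*pi, per Lades' experiments cited above
K_MAX = np.pi / 2   # maximum centre frequency k_max
F = np.sqrt(2)      # spacing factor f between centre frequencies

def gabor_kernel_real(v: int, u: int, size: int = 31) -> np.ndarray:
    """Real part of the 2D Gabor wavelet, equation (17):
    (||k||^2/sigma^2) exp(-||k||^2 ||x||^2 / (2 sigma^2)) [cos(k.x) - exp(-sigma^2/2)]"""
    k_v = K_MAX / F**v                  # centre frequency k_v = k_max / f^v
    phi_u = np.pi * u / 8               # orientation phi_u = pi*u/8
    kx, ky = k_v * np.cos(phi_u), k_v * np.sin(phi_u)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k_sq = k_v ** 2                     # ||k_j||^2
    envelope = np.exp(-k_sq * (x**2 + y**2) / (2 * SIGMA**2))
    # DC compensation exp(-sigma^2/2) removes sensitivity to absolute gray level
    carrier = np.cos(kx * x + ky * y) - np.exp(-SIGMA**2 / 2)
    return (k_sq / SIGMA**2) * envelope * carrier

# 40 filters: 5 centre frequencies x 8 orientations, indexed j = u + 8v
bank = [gabor_kernel_real(v, u) for v in range(5) for u in range(8)]
```

Convolving an image with each kernel in `bank` (and its imaginary counterpart, equation (18)) yields the 40 Gabor coefficients at each position, which are the feature-vector components stored in the face graph.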
S902: Calculate the similarity between each key feature point in each frame of key face image and the corresponding feature point of each face model in the face database;
S903: Weight the similarities between the key feature points in each frame of key face image and the corresponding feature points of the same face model, and determine the face model corresponding to the maximum weighted value as the alternative recognition result of that key face image;
S904: Count the number of key face images corresponding to each alternative face recognition result, and determine the alternative face recognition result whose count is the largest and exceeds a preset number threshold as the face recognition result corresponding to the video stream to be identified.
In the above processing, the similarity between each key feature point in each frame of key face image and the corresponding feature point of each preset face model is calculated; the similarities of all key feature points in each frame of key face image with respect to the same face model are weighted, and the face model corresponding to the maximum weighted value is determined as the alternative recognition result of that key face image. Then the number of key face images corresponding to each alternative face recognition result is counted, and the alternative result whose count is the largest and exceeds a preset number threshold is determined as the face recognition result corresponding to the video stream to be identified. It can be understood that, in practical applications, the collected video stream does not necessarily contain any face model in the face database. Therefore, even when the number of key face images corresponding to a certain alternative recognition result is the largest, if that number is close to the counts of the other alternative results, the collected video stream may not actually contain any face model in the database. A number threshold therefore needs to be set to improve recognition accuracy.
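Steps S902 to S904 amount to per-frame weighted matching followed by a vote across key frames. A minimal sketch, with hypothetical data structures (per-frame similarity dictionaries and per-feature-point weights are our own illustration, not the patent's):

```python
from collections import Counter

def recognize(frame_similarities, weights, count_threshold):
    """frame_similarities: one entry per key frame, each a dict mapping
    model_id -> list of per-key-feature-point similarities (S902).
    weights: one weight per key feature point (hypothetical values).
    Returns the winning model_id, or None if no model clears the threshold."""
    candidates = []
    for sims in frame_similarities:
        # S903: weighted sum of similarities for each model in this frame;
        # the model with the maximum weighted value is the frame's candidate
        scores = {m: sum(w * s for w, s in zip(weights, pts))
                  for m, pts in sims.items()}
        candidates.append(max(scores, key=scores.get))
    # S904: vote across key frames, subject to the number threshold
    votes = Counter(candidates)
    model, count = votes.most_common(1)[0]
    return model if count >= count_threshold else None
```

Returning `None` when the top count fails the threshold models the rejection case discussed above, where the video stream contains no enrolled face.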
From the description of the above method embodiments, those skilled in the art can clearly understand that the present invention may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that in essence contributes to the prior art can be embodied in the form of a software product. The computer software product is stored on a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks or optical discs.
Corresponding to the above method embodiments, an embodiment of the present invention further provides a face recognition system based on a video stream, which, as shown in Fig. 10, may comprise:
Video receiving module 110, configured to receive the video stream to be identified collected by a video collection device;
Face detection module 120, configured to perform face detection on each frame of image in the video stream to be identified, so as to determine face images to be identified;
Feature point locating module 130, configured to locate the feature points corresponding to each frame of face image to be identified;
Key frame determination module 140, configured to determine the key face images among the face images to be identified according to the color histograms corresponding to the face images to be identified;
Key point determination module 150, configured to determine the key feature points among the corresponding feature points in each frame of key face image;
Preprocessing module 160, configured to perform image preprocessing on the determined key face images, so as to reduce the influence of image geometric features and illumination on the key face images;
Result determination module 170, configured to determine the face recognition result corresponding to the video stream to be identified according to the weighted similarity between the key feature points in each frame of key face image and the corresponding feature points of each face model in the face database.
Wherein, the face detection module comprises:
Equalization processing unit, configured to perform graying and histogram equalization processing on each frame of image in the video stream to be identified;
Face detection unit, configured to perform multi-scale, multi-feature face detection on each processed frame of image;
Multi-feature merging unit, configured to perform multi-feature merging processing on the detection results corresponding to each scale of each frame of image;
Multi-scale merging unit, configured to perform multi-scale merging processing on the multi-feature merging results corresponding to each frame of image;
Result determining unit, configured to determine the multi-scale merging results that meet a preset face scale threshold as face images to be identified.
Wherein, the feature point locating module comprises:
Contrast enhancement unit, configured to perform contrast enhancement processing on the face images to be identified by an adaptive threshold method;
Feature point locating unit, configured to perform feature point locating processing on the face images to be identified by vertical and horizontal integral projection, so as to locate the feature points corresponding to the face images to be identified.
Wherein, the key frame determination module comprises:
Standard unit, configured to determine a standard face image to be identified and calculate the color histogram of the face region corresponding to the standard face image to be identified;
Color histogram determining unit, configured to calculate the color histogram of the face region corresponding to each frame of face image to be identified;
Histogram difference calculating unit, configured to calculate the Euclidean distance between the color histogram corresponding to the current face image to be identified and the color histogram corresponding to the standard face image to be identified;
Key frame determining unit, configured to judge whether the Euclidean distance is less than a preset distance threshold, and if so, determine the current face image to be identified as a key face image; if not, determine it as a non-key face image.
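The key-frame decision described by the units above (Euclidean distance between the color histogram of the current face region and that of the standard face image, compared against a preset distance threshold) can be sketched as follows; the function names, bin count and three-channel layout are illustrative assumptions:

```python
import numpy as np

def color_histogram(face_region: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized per-channel color histogram of a face region (H x W x 3)."""
    hists = [np.histogram(face_region[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def is_key_frame(current: np.ndarray, standard: np.ndarray,
                 distance_threshold: float) -> bool:
    """A frame is a key frame iff the Euclidean distance between its
    face-region color histogram and the standard one is below the threshold."""
    d = np.linalg.norm(color_histogram(current) - color_histogram(standard))
    return bool(d < distance_threshold)
```

Normalizing each histogram before taking the distance keeps the threshold independent of the face-region size, which varies from frame to frame.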
Wherein, the preprocessing module comprises:
Geometric normalization unit, configured to perform translation, rotation and/or filtering processing on the key face images, so as to realize geometric normalization;
Illumination preprocessing unit, configured to perform histogram equalization processing on the key face images after the above processing, so as to realize illumination preprocessing.
Wherein, the result determination module comprises:
Feature value calculating unit, configured to determine the key feature value corresponding to each key feature point by wavelet transform;
Similarity calculating unit, configured to calculate the similarity between each key feature point in each frame of key face image and the corresponding feature point of each face model in the face database;
Alternative result determining unit, configured to weight the similarities between the key feature points in each frame of key face image and the corresponding feature points of the same face model, and determine the face model corresponding to the maximum weighted value as the alternative recognition result of that key face image;
Recognition result determining unit, configured to count the number of key face images corresponding to each alternative face recognition result, and determine the alternative face recognition result whose count is the largest and exceeds a preset threshold as the face recognition result corresponding to the video stream to be identified.
As for the device or system embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts. The device or system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways without exceeding the spirit and scope of the application. The current embodiments are exemplary and should not be taken as limiting; the specific content given should in no way limit the purpose of the application. For example, the division of the units or subunits is merely a division by logical function, and other divisions are possible in actual implementation; for instance, multiple units or subunits may be combined. In addition, multiple units or components may be combined or integrated into another system, or some features may be ignored or not carried out.
Moreover, the schematic diagrams of the described systems, devices and methods, and of the different embodiments, may be combined or integrated with other systems, modules, techniques or methods without exceeding the scope of the application. Further, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The above are merely specific embodiments of the present invention. It should be pointed out that those skilled in the art may make several improvements and modifications without departing from the principles of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention.

Claims (8)

1. A face recognition method based on a video stream, characterized by comprising:
receiving a video stream to be identified collected by a video collection device;
performing face detection on each frame of image in the video stream to be identified, so as to determine face images to be identified;
locating the feature points corresponding to each frame of face image to be identified by means of the crests or troughs produced under different integral projection modes;
determining the key face images among the face images to be identified according to the color histograms corresponding to the face images to be identified;
determining the key feature points among the corresponding feature points in each frame of key face image;
performing geometric normalization preprocessing and illumination preprocessing on the determined key face images, so as to reduce the influence of image geometric features and illumination on the key face images;
determining the face recognition result corresponding to the video stream to be identified according to the weighted similarity between the key feature points in each frame of key face image and the corresponding feature points of each face model in a face database;
wherein,
the determining the key face images among the face images to be identified according to the color histograms corresponding to the face images to be identified specifically comprises:
determining a standard face image to be identified, and calculating the color histogram of the face region corresponding to the standard face image to be identified;
calculating the color histogram of the face region corresponding to each frame of face image to be identified;
calculating the Euclidean distance between the color histogram corresponding to the current face image to be identified and the color histogram corresponding to the standard face image to be identified;
judging whether the Euclidean distance is less than a preset distance threshold, and if so, determining the current face image to be identified as a key face image; if not, determining it as a non-key face image;
the determining the face recognition result corresponding to the video stream to be identified according to the weighted similarity between the key feature points in each frame of key face image and the corresponding feature points of each face model in the face database specifically comprises:
determining the key feature value corresponding to each key feature point by wavelet transform;
calculating the similarity between each key feature point in each frame of key face image and the corresponding feature point of each face model in the face database;
weighting the similarities between the key feature points in each frame of key face image and the corresponding feature points of the same face model, and determining the face model corresponding to the maximum weighted value as the alternative recognition result of the key face image;
counting the number of key face images corresponding to each alternative face recognition result, and determining the alternative face recognition result whose count is the largest and exceeds a preset number threshold as the face recognition result corresponding to the video stream to be identified.
2. The method according to claim 1, characterized in that the performing face detection on each frame of image in the video stream to be identified, so as to determine face images to be identified, specifically comprises:
performing graying and histogram equalization processing on each frame of image in the video stream to be identified;
performing multi-scale, multi-feature face detection on each processed frame of image;
performing multi-feature merging processing on the detection results corresponding to each scale of each frame of image;
performing multi-scale merging processing on the multi-feature merging results corresponding to each frame of image;
determining the multi-scale merging results that meet a preset face scale threshold as face images to be identified.
3. The method according to claim 1, characterized in that the locating the feature points corresponding to each frame of face image to be identified by means of the crests or troughs produced under different integral projection modes specifically comprises:
performing contrast enhancement processing on the face images to be identified by an adaptive threshold method;
performing feature point locating processing on the face images to be identified by vertical and horizontal integral projection, so as to locate the feature points corresponding to the face images to be identified.
4. The method according to claim 1, characterized in that the performing geometric normalization preprocessing and illumination preprocessing on the determined key face images specifically comprises:
performing translation, rotation and filtering processing on the key face images, so as to realize geometric normalization;
performing histogram equalization processing on the key face images after the above processing, so as to realize illumination preprocessing.
5. A face recognition system based on a video stream, characterized by comprising:
a video receiving module, configured to receive a video stream to be identified collected by a video collection device;
a face detection module, configured to perform face detection on each frame of image in the video stream to be identified, so as to determine face images to be identified;
a feature point locating module, configured to locate the feature points corresponding to each frame of face image to be identified by means of the crests or troughs produced under different integral projection modes;
a key frame determination module, configured to determine the key face images among the face images to be identified according to the color histograms corresponding to the face images to be identified;
a key point determination module, configured to determine the key feature points among the corresponding feature points in each frame of key face image;
a preprocessing module, configured to perform geometric normalization preprocessing and illumination preprocessing on the determined key face images, so as to reduce the influence of image geometric features and illumination on the key face images;
a result determination module, configured to determine the face recognition result corresponding to the video stream to be identified according to the weighted similarity between the key feature points in each frame of key face image and the corresponding feature points of each face model in a face database;
wherein,
the key frame determination module comprises:
a standard unit, configured to determine a standard face image to be identified and calculate the color histogram of the face region corresponding to the standard face image to be identified;
a color histogram determining unit, configured to calculate the color histogram of the face region corresponding to each frame of face image to be identified;
a histogram difference calculating unit, configured to calculate the Euclidean distance between the color histogram corresponding to the current face image to be identified and the color histogram corresponding to the standard face image to be identified;
a key frame determining unit, configured to judge whether the Euclidean distance is less than a preset distance threshold, and if so, determine the current face image to be identified as a key face image; if not, determine it as a non-key face image;
the result determination module comprises:
a feature value calculating unit, configured to determine the key feature value corresponding to each key feature point by wavelet transform;
a similarity calculating unit, configured to calculate the similarity between each key feature point in each frame of key face image and the corresponding feature point of each face model in the face database;
an alternative result determining unit, configured to weight the similarities between the key feature points in each frame of key face image and the corresponding feature points of the same face model, and determine the face model corresponding to the maximum weighted value as the alternative recognition result of the key face image;
a recognition result determining unit, configured to count the number of key face images corresponding to each alternative face recognition result, and determine the alternative face recognition result whose count is the largest and exceeds a preset number threshold as the face recognition result corresponding to the video stream to be identified.
6. The system according to claim 5, characterized in that the face detection module comprises:
an equalization processing unit, configured to perform graying and histogram equalization processing on each frame of image in the video stream to be identified;
a face detection unit, configured to perform multi-scale, multi-feature face detection on each processed frame of image;
a multi-feature merging unit, configured to perform multi-feature merging processing on the detection results corresponding to each scale of each frame of image;
a multi-scale merging unit, configured to perform multi-scale merging processing on the multi-feature merging results corresponding to each frame of image;
a result determining unit, configured to determine the multi-scale merging results that meet a preset face scale threshold as face images to be identified.
7. The system according to claim 5, characterized in that the feature point locating module comprises:
a contrast enhancement unit, configured to perform contrast enhancement processing on the face images to be identified by an adaptive threshold method;
a feature point locating unit, configured to perform feature point locating processing on the face images to be identified by vertical and horizontal integral projection, so as to locate the feature points corresponding to the face images to be identified.
8. The system according to claim 5, characterized in that the preprocessing module comprises:
a geometric normalization unit, configured to perform translation, rotation and filtering processing on the key face images, so as to realize geometric normalization;
an illumination preprocessing unit, configured to perform histogram equalization processing on the key face images after the above processing, so as to realize illumination preprocessing.
CN201110316170.7A 2011-10-19 2011-10-19 Face identification method and system based on video streaming Expired - Fee Related CN102360421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110316170.7A CN102360421B (en) 2011-10-19 2011-10-19 Face identification method and system based on video streaming


Publications (2)

Publication Number Publication Date
CN102360421A CN102360421A (en) 2012-02-22
CN102360421B true CN102360421B (en) 2014-05-28

Family

ID=45585748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110316170.7A Expired - Fee Related CN102360421B (en) 2011-10-19 2011-10-19 Face identification method and system based on video streaming

Country Status (1)

Country Link
CN (1) CN102360421B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279496A (en) * 2015-10-26 2016-01-27 浙江宇视科技有限公司 Human face recognition method and apparatus
RU2712417C1 (en) * 2019-02-28 2020-01-28 Публичное Акционерное Общество "Сбербанк России" (Пао Сбербанк) Method and system for recognizing faces and constructing a route using augmented reality tool

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294986B (en) * 2012-03-02 2019-04-09 汉王科技股份有限公司 A kind of recognition methods of biological characteristic and device
CN104349045B (en) * 2013-08-09 2019-01-15 联想(北京)有限公司 A kind of image-pickup method and electronic equipment
CN103809759A (en) * 2014-03-05 2014-05-21 李志英 Face input method
CN104008370B (en) * 2014-05-19 2017-06-13 清华大学 A kind of video face identification method
CN103955719A (en) * 2014-05-20 2014-07-30 中国科学院信息工程研究所 Filter bank training method and system and image key point positioning method and system
GB2528330B (en) * 2014-07-18 2021-08-04 Unifai Holdings Ltd A method of video analysis
CN105631391B (en) * 2014-11-05 2019-03-22 联芯科技有限公司 A kind of image processing method and system for realizing eyes amplification
CN104376334B (en) * 2014-11-12 2018-05-29 上海交通大学 A kind of pedestrian comparison method of multi-scale feature fusion
CN104581047A (en) * 2014-12-15 2015-04-29 苏州福丰科技有限公司 Three-dimensional face recognition method for supervisory video recording
CN104794464B (en) * 2015-05-13 2019-06-07 上海依图网络科技有限公司 A kind of biopsy method based on relative priority
CN104899575A (en) * 2015-06-19 2015-09-09 南京大学 Human body assembly dividing method based on face detection and key point positioning
CN105046227B (en) * 2015-07-24 2018-07-31 上海依图网络科技有限公司 A kind of key frame acquisition methods for portrait video system
CN106407984B (en) * 2015-07-31 2020-09-11 腾讯科技(深圳)有限公司 Target object identification method and device
CN105893922A (en) * 2015-08-11 2016-08-24 乐视体育文化产业发展(北京)有限公司 Bicycle unlocking method and device and bicycle
CN106570445B (en) * 2015-10-13 2019-02-05 腾讯科技(深圳)有限公司 A kind of characteristic detection method and device
CN105234940A (en) * 2015-10-23 2016-01-13 上海思依暄机器人科技有限公司 Robot and control method thereof
CN105426829B (en) * 2015-11-10 2018-11-16 深圳Tcl新技术有限公司 Video classification methods and device based on facial image
CN106856063A (en) * 2015-12-09 2017-06-16 朱森 A kind of new teaching platform
CN105631419B (en) * 2015-12-24 2019-06-11 浙江宇视科技有限公司 Face identification method and device
CN105809107B (en) * 2016-02-23 2019-12-03 深圳大学 Single sample face recognition method and system based on face feature point
CN105628996A (en) * 2016-03-25 2016-06-01 胡荣 Image processing-based electric energy meter
CN106022313A (en) * 2016-06-16 2016-10-12 湖南文理学院 Scene-automatically adaptable face recognition method
CN106250825A (en) * 2016-07-22 2016-12-21 厚普(北京)生物信息技术有限公司 A kind of at the medical insurance adaptive face identification system of applications fields scape
CN106326853B (en) * 2016-08-19 2020-05-15 厦门美图之家科技有限公司 Face tracking method and device
CN106326981B (en) * 2016-08-31 2019-03-26 北京光年无限科技有限公司 Robot automatically creates the method and device of individualized virtual robot
CN106326980A (en) * 2016-08-31 2017-01-11 北京光年无限科技有限公司 Robot and method for simulating human facial movements by robot
CN107066932A (en) * 2017-01-16 2017-08-18 北京龙杯信息技术有限公司 Detection and localization method for key feature points in face recognition
CN106919898A (en) * 2017-01-16 2017-07-04 北京龙杯信息技术有限公司 Feature modeling method in face recognition
CN106951866A (en) * 2017-03-21 2017-07-14 北京深度未来科技有限公司 Face authentication method and device
CN108932254A (en) * 2017-05-25 2018-12-04 中兴通讯股份有限公司 Similar-video detection method, device, system and storage medium
CN107633209B (en) * 2017-08-17 2018-12-18 平安科技(深圳)有限公司 Electronic device, method for dynamic video face recognition, and storage medium
CN107633564A (en) * 2017-08-31 2018-01-26 深圳市盛路物联通讯技术有限公司 Image-based monitoring method and Internet of Things server
CN108038422B (en) * 2017-11-21 2021-12-21 平安科技(深圳)有限公司 Camera device, face recognition method and computer-readable storage medium
CN108228742B (en) * 2017-12-15 2021-10-22 深圳市商汤科技有限公司 Face duplicate checking method and device, electronic equipment, medium and program
CN108615256B (en) * 2018-03-29 2022-04-12 西南民族大学 Human face three-dimensional reconstruction method and device
CN108898051A (en) * 2018-05-22 2018-11-27 广州洪森科技有限公司 Face recognition method and system based on video streaming
CN108629335A (en) * 2018-06-05 2018-10-09 华东理工大学 Adaptive face key feature points selection method
CN110688872A (en) * 2018-07-04 2020-01-14 北京得意音通技术有限责任公司 Lip-based person identification method, device, program, medium, and electronic apparatus
CN109190474B (en) * 2018-08-01 2021-07-20 南昌大学 Human body animation key frame extraction method based on gesture significance
CN109214157A (en) * 2018-08-16 2019-01-15 安徽超清科技股份有限公司 Embedded face recognition intelligent identity authentication system based on a robot platform
CN109190561B (en) * 2018-09-04 2022-03-22 四川长虹电器股份有限公司 Face recognition method and system in video playing
CN109350965B (en) * 2018-10-09 2019-10-29 苏州好玩友网络科技有限公司 Game control method, device and terminal applied to mobile terminals
CN109360284A (en) * 2018-10-16 2019-02-19 菏泽学院 Intelligent student attendance management system and method
CN109584276B (en) * 2018-12-04 2020-09-25 北京字节跳动网络技术有限公司 Key point detection method, device, equipment and readable medium
CN109670440B (en) * 2018-12-14 2023-08-08 央视国际网络无锡有限公司 Identification method and device for giant panda faces
CN111258406A (en) * 2019-03-19 2020-06-09 李华 Mobile terminal power consumption management device
CN110148092B (en) * 2019-04-16 2022-12-13 无锡海鸿信息技术有限公司 Method for analyzing sitting posture and emotional state of teenager based on machine vision
CN112307817B (en) * 2019-07-29 2024-03-19 中国移动通信集团浙江有限公司 Face living body detection method, device, computing equipment and computer storage medium
CN110688977B (en) * 2019-10-09 2022-09-20 浙江中控技术股份有限公司 Industrial image identification method and device, server and storage medium
CN111144326B (en) * 2019-12-28 2023-10-27 神思电子技术股份有限公司 Human face anti-re-recognition method for man-machine interaction
CN112102150B (en) * 2020-01-07 2022-03-18 杭州鸭梨互动网络科技有限公司 Adaptive short video content enhancement system
CN111783681A (en) * 2020-07-02 2020-10-16 深圳市万睿智能科技有限公司 Large-scale face library recognition method, system, computer equipment and storage medium
CN112487904A (en) * 2020-11-23 2021-03-12 成都尽知致远科技有限公司 Video image processing method and system based on big data analysis

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7706576B1 (en) * 2004-12-28 2010-04-27 Avaya Inc. Dynamic video equalization of images using face-tracking
CN102214291A (en) * 2010-04-12 2011-10-12 云南清眸科技有限公司 Method for quickly and accurately detecting and tracking human face based on video sequence

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Xintao et al., "Face recognition in video streams combining weighted similarity values and similarity voting," Journal of Hefei University of Technology (Natural Science), Feb. 2011, vol. 34, no. 2, Abstract. *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279496A (en) * 2015-10-26 2016-01-27 浙江宇视科技有限公司 Human face recognition method and apparatus
CN105279496B (en) * 2015-10-26 2019-10-18 浙江宇视科技有限公司 Face recognition method and apparatus
RU2712417C1 (en) * 2019-02-28 2020-01-28 Публичное Акционерное Общество "Сбербанк России" (Пао Сбербанк) Method and system for recognizing faces and constructing a route using augmented reality tool

Also Published As

Publication number Publication date
CN102360421A (en) 2012-02-22

Similar Documents

Publication Publication Date Title
CN102360421B (en) Face identification method and system based on video streaming
Ju et al. Depth-aware salient object detection using anisotropic center-surround difference
Zhou et al. Scale adaptive image cropping for UAV object detection
CN105956578A (en) Face verification method based on identity document information
CN103020992B (en) A kind of video image saliency detection method based on motion-color association
WO2015149534A1 (en) Gabor binary pattern-based face recognition method and device
Zheng et al. Single image cloud removal using U-Net and generative adversarial networks
CN106846339A (en) Image detection method and device
CN103914676A (en) Method and apparatus for use in face recognition
CN103455991A (en) Multi-focus image fusion method
CN110443128A (en) Finger vein identification method based on accurately matched SURF feature points
CN107633226A (en) Human action tracking and recognition method and system
CN104268520A (en) Human motion recognition method based on depth motion trajectories
CN106529441B (en) Human action recognition method using depth motion maps based on fuzzy boundary fragments
Xiao et al. Image Fusion
CN113392856A (en) Image forgery detection device and method
CN111209873A (en) High-precision face key point positioning method and system based on deep learning
Ma et al. Multiscale 2-D singular spectrum analysis and principal component analysis for spatial–spectral noise-robust feature extraction and classification of hyperspectral images
CN109101985A (en) Method for eliminating mismatched point pairs in images based on adaptive neighborhood testing
Li Face detection algorithm based on double-channel CNN with occlusion perceptron
CN106023184A (en) Depth significance detection method based on anisotropy center-surround difference
CN108038464A (en) Uyghur face image recognition algorithm based on new HOG features
CN106874843A (en) Target tracking method and device
Zheng A novel thermal face recognition approach using face pattern words
Zhang et al. Region of interest extraction via common salient feature analysis and feedback reinforcement strategy for remote sensing images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140528

Termination date: 20171019
