CN100561501C - A kind of image detecting method and device - Google Patents
- Publication number
- CN100561501C (application CN200710179786A / CNB2007101797868A)
- Authority
- CN
- China
- Prior art keywords
- candidate frame
- classifier
- integral image
- image
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention discloses an image detection method and device that provide an image detection technique with a higher processing speed. The image detection method proposed by the present invention comprises: computing the integral image and squared integral image of an input image; when the number of rows of the integral image computed so far is greater than or equal to the height of an object detector model, using the object detector, together with the integral image and squared integral image, to verify the candidate frame positions that lie within the computed portion of the integral image; and determining the object positions on the input image from the candidate frame positions that pass verification. Applied to image detection, the present invention increases the speed at which objects in an image are detected.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an image detection method and device.
Background technology
In computer vision and image processing, face information obtained from images or video has important uses in fields such as human-computer interaction, security, and entertainment. Technology that automatically obtains the number, sizes, and positions of faces in an image, that is, face detection technology, has therefore received great attention. In recent years, with the development of computer vision and pattern recognition, face detection has also developed rapidly and is gradually maturing.
Viola et al. proposed a face detection technique based on micro-structure features (Haar-like features) and a cascade of adaptive boosting (AdaBoost) classifiers. In accuracy this technique is comparable to methods based on support vector machines (SVM) and neural networks, but in speed it is far faster than either and can essentially run in real time. Since its publication the method has attracted researchers' attention, many improved techniques have been proposed, and it has been applied in many industrial products.
The face detection method proposed by Viola is fast for two main reasons. First, because the feature values are computed with the integral image method, the micro-structure feature values of the input image can be computed quickly. Second, the cascaded AdaBoost algorithm first uses computationally cheap layers to reject most of the easily excluded non-face windows, and then applies the computationally expensive layers to the small number of remaining candidates. The micro-structure features used in this method are shown in Fig. 1; each feature value is defined as the difference between the sum of the pixel intensities (pixel gray values) inside the gray rectangular regions and the sum inside the white rectangular regions.
To compute the micro-structure feature values quickly, Viola proposed the integral image shown in Fig. 2. The value of the integral image at a point (x, y) is defined as the sum of all pixel gray values in the gray rectangular region above and to the left of it, that is:
II(x, y) = Σ_{x′≤x, y′≤y} I(x′, y′)
where II(x, y) is the value of the integral image at point (x, y), and I(x′, y′) is the gray value of the input image at point (x′, y′). Viola obtains the integral image in a single scan of the image, starting from the top-left corner, using the following iteration:
s(x,y)=s(x,y-1)+I(x,y)
II(x,y)=II(x-1,y)+s(x,y)
where s(x, y) is the cumulative sum of the gray values in column x over all rows up to and including row y, with the boundary conditions s(x, -1) = 0 and II(-1, y) = 0.
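As a concrete illustration of the recurrence, the following Python sketch computes both the integral image and the squared integral image in a single top-to-bottom pass, one row at a time (function and variable names are illustrative, not from the patent):

```python
def integral_images(img):
    """One-pass integral image II and squared integral image SqII,
    computed row by row with the recurrences from the text:
        s(x, y)  = s(x, y-1) + I(x, y)
        II(x, y) = II(x-1, y) + s(x, y)
    img is a list of rows of pixel values; returns (II, SqII), where
    II[y][x] is the sum of all pixels in the rectangle from (0, 0)
    to (x, y) inclusive, and SqII is the same for squared pixels."""
    w = len(img[0])
    s = [0.0] * w          # s(x, -1) = 0 for every column x
    sq = [0.0] * w
    II, SqII = [], []
    for row in img:        # rows arrive one at a time, as in the embodiment
        acc = acc2 = 0.0   # II(-1, y) = 0
        II_row, Sq_row = [], []
        for x, v in enumerate(row):
            s[x] += v              # s(x, y) = s(x, y-1) + I(x, y)
            sq[x] += v * v
            acc += s[x]            # II(x, y) = II(x-1, y) + s(x, y)
            acc2 += sq[x]
            II_row.append(acc)
            Sq_row.append(acc2)
        II.append(II_row)
        SqII.append(Sq_row)
    return II, SqII
```

Because each output row depends only on the input rows seen so far, this is exactly the row-at-a-time availability that the embodiment later exploits.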
With the integral image, the sum of the gray values over any rectangular region can be obtained quickly. Let sum(r) denote the sum of the pixel gray values in rectangle r. As shown in Fig. 3, by the definition of the integral image, the formula:
sum(D)=II(4)-II(2)-II(3)+II(1)
gives the sum of the gray values in any rectangular region D (A, B, C, D each denote a shaded rectangular region, and points 1, 2, 3, 4 are the bottom-right vertices of regions A, B, C, D respectively).
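A minimal sketch of this four-corner lookup, under the assumption of an inclusive integral image (II[y][x] covers (0,0) through (x,y)); the build_integral helper exists only for the demonstration:

```python
def build_integral(img):
    """Inclusive integral image: II[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    II = [[0.0] * w for _ in range(h)]
    for y in range(h):
        run = 0.0
        for x in range(w):
            run += img[y][x]
            II[y][x] = run + (II[y - 1][x] if y else 0.0)
    return II

def rect_sum(II, left, top, width, height):
    """sum(D) = II(4) - II(2) - II(3) + II(1): the four numbered points
    are the bottom-right corners of regions A..D in Fig. 3.  The border
    checks handle rectangles touching the image edge, where the A/B/C
    regions are empty."""
    bottom, right = top + height - 1, left + width - 1
    total = II[bottom][right]                    # II(4)
    if top > 0:
        total -= II[top - 1][right]              # II(2)
    if left > 0:
        total -= II[bottom][left - 1]            # II(3)
    if top > 0 and left > 0:
        total += II[top - 1][left - 1]           # II(1)
    return total
```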
To exclude interference from conditions such as illumination, Viola further normalizes the above feature values by the image intensity standard deviation (which may also be called the normalization parameter). Viola defines the image intensity variance as:
σ² = (1/N) Σ_{i,j} (I(i, j) − μ)²
where
μ = (1/N) Σ_{i,j} I(i, j)
is the mean image intensity, I(i, j) is the intensity of the image at point (i, j), and N is the number of pixels in the image. The variance can be computed with the equivalent formula:
σ² = (1/N) Σ_{i,j} I(i, j)² − μ²
so that it is obtained directly from the integral image and the squared integral image. The normalized feature value is then defined as g_j = f_j / σ, where f_j is the micro-structure feature value defined above, i.e., the difference between the pixel-intensity sum of the gray rectangular region and that of the white rectangular region.
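The variance identity can be exercised with a small sketch that derives the normalization parameter σ for a window purely from the two integral images; the function names and the inclusive-indexing convention are assumptions of the sketch:

```python
import math

def build_integrals(img):
    """Inclusive integral image II and squared integral image SqII:
    II[y][x] = sum of img[0..y][0..x]; SqII likewise for squares."""
    h, w = len(img), len(img[0])
    II = [[0.0] * w for _ in range(h)]
    Sq = [[0.0] * w for _ in range(h)]
    for y in range(h):
        run = run2 = 0.0
        for x in range(w):
            v = img[y][x]
            run += v
            run2 += v * v
            II[y][x] = run + (II[y - 1][x] if y else 0.0)
            Sq[y][x] = run2 + (Sq[y - 1][x] if y else 0.0)
    return II, Sq

def window_stddev(II, Sq, left, top, w, h):
    """Normalization parameter sigma for a candidate window, from the
    two integral images via sigma^2 = E[I^2] - (E[I])^2."""
    def rsum(T):
        b, r = top + h - 1, left + w - 1
        s = T[b][r]
        if top:
            s -= T[top - 1][r]
        if left:
            s -= T[b][left - 1]
        if top and left:
            s += T[top - 1][left - 1]
        return s
    n = w * h
    mean = rsum(II) / n
    var = rsum(Sq) / n - mean * mean
    return math.sqrt(max(var, 0.0))   # clamp tiny negative round-off
```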
Viola constructs the simplest possible tree classifier, a decision stump, on each micro-structure feature as a weak classifier, as follows:
h_j(x) = 1 if p_j·g_j(x) > p_j·θ_j, and h_j(x) = 0 otherwise
where x is a fixed-size input window, g_j(x) is the value of the j-th micro-structure feature for this window, θ_j is the decision threshold for the j-th feature, and p_j takes the value 1 or −1: when p_j is 1 the decision uses a greater-than sign, and when p_j is −1 it uses a less-than sign. h_j(x) is the decision output of the j-th weak classifier. In this way, each weak classifier needs only a single threshold comparison to reach its decision.
The cascaded AdaBoost classifier structure proposed by Viola is shown in Fig. 4. The face detector consists of several strong classifiers, each strong classifier consists of multiple weak classifiers, and each weak classifier corresponds to one micro-structure feature. A candidate frame is first judged by the first-layer classifier; if it passes the first layer, the second-layer classifier judges it next, and otherwise it is rejected outright. The remaining layers are processed in the same way, and once one candidate frame has been fully processed the next candidate frame is handled. Finally, a candidate frame that passes the processing of all classifiers is taken to be a face region.
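The layer-by-layer rejection logic can be sketched as follows. The data layout (lists of stump tuples and per-layer thresholds) is invented for illustration; only the decision rule (weighted stump votes compared against a per-layer threshold, with early rejection) follows the text:

```python
def evaluate_cascade(features, layers):
    """Cascade decision for one candidate window.

    features: maps a feature id to its normalized value g_j for this
        window (assumed precomputed).
    layers: list of strong classifiers, each a dict with
        'weak': list of (feature_id, theta, p, weight) stumps, and
        'threshold': the layer's acceptance threshold.
    A window is kept only if it passes every layer; early layers
    reject most windows cheaply."""
    for layer in layers:
        score = 0.0
        for fid, theta, p, weight in layer["weak"]:
            g = features[fid]
            # stump: p = +1 compares with '>', p = -1 with '<'
            if p * g > p * theta:
                score += weight
        if score < layer["threshold"]:
            return False          # rejected at this layer, stop early
    return True                   # passed all layers: face candidate
```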
To detect faces of different sizes and positions, Viola uses feature scaling. The width and height of the face detector model are first fixed at MW and MH respectively (Viola uses MW = 24, MH = 24), and the cascaded AdaBoost face model is trained on face and non-face samples cropped and scaled to this size. With a scaling ratio SR, feature scaling then yields a series of classifiers of width ROUND(MW·SR^s) and height ROUND(MH·SR^s), where s is an integer greater than 0 and ROUND() denotes rounding the bracketed value to the nearest integer. To detect faces of different sizes, the integral image of the input image is computed once; the detectors of the different scales obtained above then each perform an exhaustive traversal search, so that faces of different sizes and positions are detected, and every candidate rectangle that passes the cascaded detector is recorded in a detection queue.
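Assuming ROUND() behaves like Python's round(), the series of detector sizes can be enumerated as in this sketch (here s = 0 gives the base MW×MH model, while the text indexes the scaled models by s > 0):

```python
def detector_scales(mw, mh, sr, max_w, max_h):
    """Enumerate detector sizes ROUND(MW*SR**s) x ROUND(MH*SR**s) for
    s = 0, 1, 2, ... until the detector no longer fits in an image of
    size max_w x max_h.  round() stands in for ROUND()."""
    sizes = []
    s = 0
    while True:
        w = round(mw * sr ** s)
        h = round(mh * sr ** s)
        if w > max_w or h > max_h:
            break
        sizes.append((w, h))
        s += 1
    return sizes
```

Note that Python's round() uses banker's rounding for exact .5 ties, which may differ by one pixel from a ROUND() that always rounds half up; for a detector size this difference is immaterial.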
Because a face may produce multiple detection results due to variations in scale and position, face detection algorithms usually apply a post-processing step that fuses the detection results so that each face position yields only one result. Merging can also eliminate some false detections and thus reduce the false-positive rate. Fig. 5 is a flow diagram of the method proposed by Viola for detecting the face regions in an image, and Fig. 6 shows the concrete sub-steps by which step S503 verifies candidate frames. A candidate frame that passes verification by the strong classifiers of all layers in step S607 is considered a face frame, and step S504 adds such candidate frames to the candidate queue.
Although the face detection method proposed by Viola has many advantages, the module that computes the integral image and squared integral image and the module that verifies candidate frames share the same integral-image and squared-integral-image memory, and in temporal order the step of verifying a candidate frame can begin only after the integral image and squared integral image have been completely computed at every point of the candidate frame. The processing speed of this method is therefore limited.
Summary of the invention
The embodiments of the invention provide an image detection method and device in order to offer an image detection technique with a higher processing speed.
The image detection method provided by an embodiment of the invention comprises:
in the course of computing the integral image and squared integral image of an input image in parallel, monitoring the number of rows of the integral image computed so far, and, when this number of rows is greater than or equal to the height of an object detector model:
performing arithmetic on the integral image and the squared integral image simultaneously to obtain a normalization parameter;
for each weak classifier of every layer of the object detector, computing the difference between the intensity sums of two rectangular regions and, using the normalization parameter, obtaining the micro-structure feature value corresponding to that weak classifier;
comparing each micro-structure feature value with a preset threshold to judge whether the feature value corresponding to each weak classifier of every layer is effective;
forming the weighted sum of the effective feature values of each layer and comparing the result with a preset threshold to judge whether the candidate frame passes that layer's verification, where, for every layer, as soon as one candidate frame has been judged the next candidate frame is judged;
taking the candidate frames that pass verification by all classifiers as verified candidate-frame positions;
determining the object positions on the input image from the verified candidate-frame positions.
The image detection device provided by an embodiment of the invention comprises:
an integral-image unit for computing the integral image and squared integral image of an input image in parallel;
a verification unit for monitoring the number of rows of the integral image computed by the integral-image unit and, when this number of rows is greater than or equal to the height of an object detector model, performing arithmetic on the integral image and squared integral image simultaneously to obtain a normalization parameter; for each weak classifier of every layer of the object detector, computing the difference between the intensity sums of two rectangular regions and, using the normalization parameter, obtaining the corresponding micro-structure feature value; comparing each feature value with a preset threshold to judge whether the feature value corresponding to each weak classifier of every layer is effective; forming the weighted sum of the effective feature values of each layer and comparing the result with a preset threshold to judge whether the candidate frame passes that layer's verification, where, for every layer, as soon as one candidate frame has been judged the next is judged; and taking the candidate frames that pass verification by all classifiers as verified candidate-frame positions;
a determining unit for determining the object positions on the input image from the candidate-frame positions that pass verification.
In the embodiments of the invention, when the number of computed rows of the input image's integral image is greater than or equal to the height of the object detector model, the object detector, using the integral image and squared integral image, verifies the candidate frame positions that lie within the computed portion of the integral image, and the object positions on the input image are determined from the candidate frame positions that pass verification. This technical scheme avoids having to wait until the integral image and squared integral image have been fully computed at every point of a candidate frame before verification can begin, so the speed of detecting objects in the image is improved.
Description of drawings
Fig. 1 is a schematic diagram of the micro-structure features used by the prior-art face detection technique of Viola et al.;
Fig. 2 is a schematic diagram of an integral image in the prior art;
Fig. 3 is a schematic diagram of using a prior-art integral image to obtain the pixel-gray-value sum of an arbitrary rectangle, where points 1, 2, 3, 4 are the bottom-right vertices of regions A, B, C, D respectively;
Fig. 4 is a schematic diagram of a prior-art cascaded face detector structure;
Fig. 5 is a flow diagram of the face detection method proposed by Viola et al. in the prior art;
Fig. 6 is a detailed flow diagram of how Viola et al. verify all possible rectangular frames to determine candidate frames in the prior art;
Fig. 7 is a flow diagram of the image detection method provided by an embodiment of the invention;
Fig. 8 is a schematic diagram, provided by an embodiment of the invention, of the parallel processing of computing the integral and squared integral images and verifying whether candidate frames pass each classifier layer;
Fig. 9 is a flow diagram, provided by an embodiment of the invention, of judging in parallel whether a candidate frame passes the current classifier layer;
Fig. 10 is a flow diagram, provided by an embodiment of the invention, of judging whether a micro-structure feature is effective.
Embodiment
The embodiments of the invention propose an image detection method and device that improve on the prior art in several respects, including the way the integral image and squared integral image are computed and the way candidate frames are verified, so as to raise the processing speed of image detection.
The embodiments of the invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 7, an image detection method provided by an embodiment of the invention comprises:
S701: compute a partial integral image and squared integral image of the input image, where the integral image and/or squared integral image are computed from the intensities of all pixels from the first row of the input image up to the current pixel, in top-to-bottom, left-to-right order, and the integral image and squared integral image are computed synchronously.
S702: judge whether, among the object detectors of all scales, there is a detector model whose height is less than or equal to the number of integral-image rows computed so far; if so, execute step S703; otherwise return to step S701 and continue computing the integral image and squared integral image of the next image row.
The object detector in the embodiments of the invention may be a face detector, or a detector for locating other objects in an image, for example a human-body or automobile detector.
S703: using the integral image and squared integral image, verify with the object detector the candidate frame positions that lie within the computed portion of the integral image.
S704: add the position information of candidate frames that pass verification to a candidate queue, and continue detection with the object detector of the next scale.
S705: merge the position information of all overlapping candidate frames in the candidate queue to determine the object positions on the input image.
In step S701, the integral image and squared integral image are computationally independent of each other and have no temporal ordering constraint, so they can be processed in parallel: for each row of the input image, the intensity of each pixel in the row is read or computed, and the integral image and squared integral image are iterated in parallel to obtain the integral-image and squared-integral-image rows corresponding to that image row.
The parallel processing referred to in the embodiments of the invention generally means that, in applications such as chip design, each part is given its own independent arithmetic unit and the computations of the parts are carried out simultaneously. How many arithmetic units to provide and how to run them simultaneously can be realized by those skilled in the art without inventive effort.
In the prior art, by contrast, there is generally only one set of arithmetic units, so the squared integral image can only be computed after the integral image has been computed, and the processing time is therefore longer than with the scheme proposed by the embodiments of the invention.
To increase processing speed, once part of the integral image and squared integral image has been computed, the embodiment of the invention immediately begins verifying whether the candidate frames that lie entirely inside the computed portion of the integral image are object frames; that is, candidate-frame verification proceeds in parallel with the computation of the integral image and squared integral image, rather than starting only after the integral image and squared integral image of the whole input image have been computed.
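The interleaving described above can be sketched schematically: each newly finished row triggers verification of every candidate frame whose bottom edge lies on that row and that fits some detector scale. The verify callback and the 2-pixel step are illustrative stand-ins for the cascade and the preferred step size; all names are invented for the sketch:

```python
def interleaved_detect(img, detectors, verify):
    """Row-interleaved detection: as soon as a row's integral-image
    data would be available, every candidate frame lying entirely in
    the rows computed so far is verified, instead of waiting for the
    full integral image.

    detectors: list of (width, height) model sizes.
    verify: callback verify(left, top, w, h) -> bool, standing in for
        the cascaded classifier.
    Returns the accepted frames as (left, top, w, h) tuples."""
    W = len(img[0])
    accepted = []
    rows_done = 0
    for _row in img:
        # (the integral / squared-integral rows would be updated here)
        rows_done += 1
        for tw, th in detectors:
            if th <= rows_done:          # THeight_n <= k: frame fits
                top = rows_done - th     # bottom edge on the current row
                for left in range(0, W - tw + 1, 2):  # 2-pixel step
                    if verify(left, top, tw, th):
                        accepted.append((left, top, tw, th))
    return accepted
```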
For example, let THeight_n denote the height of the object detector of the n-th scale. After the integral image and squared integral image of the first k rows of the input image have been computed, it is judged whether the height of the detector of any scale satisfies:
THeight_n ≤ k
If some detector height satisfies this formula, then, while the integral-image and squared-integral-image rows below are being computed, all candidate rectangular frames of that scale whose bottom-edge ordinate is k, whose left-edge abscissa runs from 0 to W − TWidth_n, and whose width and height are TWidth_n and THeight_n are verified simultaneously to decide whether each is an object frame (such as a face frame or human-body frame). Likewise, once the integral image and squared integral image of row k + delta_n have been computed, while the rows below continue to be computed, all candidate frames whose bottom-edge ordinate is k + delta_n and whose left-edge abscissa runs from 0 to W − TWidth_n are verified in the same way.
In this way, the verification of the candidate frames of the different rows proceeds in parallel with the computation of the integral image and squared integral image of the rows below.
Of course, other schemes are possible; for example, after the integral image and squared integral image of each row are computed, one judgment is made as to whether any object detector satisfies THeight_n ≤ k, and if such a detector exists, the candidate-frame verification is carried out.
When step S703 verifies whether a candidate frame position is an object frame, the cascaded classifier judges the position layer by layer. Preferably, step S703 specifically comprises:
performing arithmetic on the integral image and the squared integral image simultaneously to obtain a normalization parameter;
for each weak classifier of every layer of the object detector, computing the difference between the intensity sums of two rectangular regions and, using the normalization parameter, obtaining the micro-structure feature value corresponding to that weak classifier;
comparing each micro-structure feature value with a preset threshold to judge whether the feature value corresponding to each weak classifier of every layer is effective;
forming the weighted sum of the effective feature values of each layer and comparing the result with a preset threshold to judge whether the candidate frame passes that layer's verification;
taking the candidate frames that pass verification by all classifiers as verified candidate-frame positions.
The layers of the classifier are processed in a fixed order, but the embodiment of the invention processes the candidate frames in a pipeline structure to raise the verification speed. Specifically:
every layer of the cascaded classifier is given its own independent set of arithmetic units, so that different candidate frames can be processed in pipeline fashion.
For example, candidate frame 1 first occupies the layer-0 arithmetic unit; once the layer-0 unit has finished processing frame 1, candidate frame 2 occupies it, and once it has finished processing frame 2, candidate frame 3 occupies it. Likewise, a frame 1 that passes layer 0's judgment can occupy the layer-1 arithmetic unit, and when that processing finishes, the next frame that passed layer 0 can occupy layer 1. In all, Sn × CascNum sets of arithmetic units are needed, where Sn is the total number of object detector scales and CascNum is the total number of classifier layers. In practice, if this is thought to require too many hardware resources, the approach can be applied partially; for example, since the early layers must handle many candidate frames while the later layers handle few, more arithmetic units can be allocated to the early layers and fewer to the later ones.
Preferably, each classifier layer is also given a corresponding candidate-frame data-structure queue (FIFO) for recording and coordinating candidate-frame information, for example the left and top coordinates of the candidate frame, the index of the scale the candidate frame belongs to, and the normalization parameter (stdDev). The judgment module of each layer reads the coordinate information of a candidate frame from its FIFO, obtains the classifier parameters of the corresponding scale according to the scale index, and judges the frame.
The layer-0 classifier is handled slightly differently from the later layers: in layer 0 the normalization parameter must be computed, recorded in layer 0's FIFO, and passed on in turn through the FIFOs of the subsequent layers so that the later classifiers can use it.
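A toy software model of the per-layer FIFOs might look like the following; the record fields mirror those listed in the text, while the sequential draining loop merely approximates what the hardware layers do concurrently (class and function names are invented for the sketch):

```python
from collections import deque

class CandidateRecord:
    """What each layer's FIFO carries for one candidate frame, per the
    text: left/top coordinates, the scale index, and the normalization
    parameter stdDev computed once by layer 0."""
    def __init__(self, left, top, scale_idx, std_dev):
        self.left, self.top = left, top
        self.scale_idx = scale_idx
        self.std_dev = std_dev

def push_through_layers(records, layer_passes, num_layers):
    """A record advances from the FIFO of layer i to layer i+1 only if
    layer i accepts it (layer_passes(record, i) -> bool); records that
    clear every layer are returned.  In hardware the layers run
    concurrently on different frames; here they are drained in order."""
    fifos = [deque() for _ in range(num_layers + 1)]
    fifos[0].extend(records)
    for i in range(num_layers):
        while fifos[i]:
            rec = fifos[i].popleft()
            if layer_passes(rec, i):
                fifos[i + 1].append(rec)
    return list(fifos[num_layers])
```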
Preferably, for convenience of processing, the step size of the object detectors of all scales is restricted; for example, the step size of all detectors is 2 pixels in both the horizontal and vertical directions.
Taking a step size of 2 as an example, the parallel processing of computing the integral and squared integral images and verifying whether candidate frames pass each classifier layer is described below; see Fig. 8. Suppose the integral image and squared integral image of rows 0 through 2k+1 have been computed. It is then further judged whether the height of the object detector of some scale is less than or equal to 2k+1; if so, all candidate frames of that scale whose bottom edge lies on the current row are added to the FIFO of the layer-0 classifier. Specifically, the left-edge abscissa of each possible candidate frame (denote it i) starts at 0 and advances in steps of delta_n up to maxx, where maxx = W − TWidth_n, and the current candidate frame R(i, 2k+1 − THeight_n, TWidth_n, THeight_n) is added to the layer-0 FIFO. Here i is the left-edge abscissa of the candidate frame, 2k+1 − THeight_n is its top-edge ordinate, TWidth_n is its width, and THeight_n is its height.
In the concrete verification of a candidate frame, suppose that in the object detector of the current scale the current classifier layer contains weakNum_StageOrder micro-structure features in all, where weakNum_StageOrder is the number of weak classifiers in the layer. These features are mutually independent and share only the integral-image memory and the normalization parameter. To raise the verification speed further, they can therefore be processed in parallel: the different micro-structure feature values are computed in parallel and then summed once all have been computed, as shown in Fig. 9.
Further, when a particular micro-structure feature is computed, the intensity sums of its two rectangular regions can themselves be obtained in parallel, as shown in Fig. 10. Preferably, a dedicated hardware unit is provided for computing the intensity sum of a rectangular region.
Further, the computation of the normalization parameter can also be parallelized: the integral image and squared integral image are operated on simultaneously to obtain the normalization parameter.
Preferably, in step S704 the step of adding candidate-frame position information to the candidate queue specifically comprises:
judging, from the size and position of the candidate frame to be added and the sizes and positions of the candidate frames already in the candidate queue, whether the frame to be added is close to an already-added frame; if so, merging the close candidate frames and using the number of merged frames as the confidence of the merged frame; otherwise, adding the candidate frame to the candidate queue.
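A possible software rendering of this queue-insertion step is sketched below. The closeness tolerance and the coordinate averaging used for merging are assumptions, since the text does not fix either; only the rule that the merged count becomes the confidence comes from the text:

```python
def add_candidate(queue, frame, tol=4):
    """Queue insertion with on-the-fly merging.  frame and queue
    entries are dicts with left/top/width/height (queue entries also
    carry a 'conf' count).  'Close' is taken here to mean within tol
    pixels in all four values, and merging to be a count-weighted
    average of the geometry; both conventions are illustrative."""
    for q in queue:
        if (abs(q["left"] - frame["left"]) <= tol and
                abs(q["top"] - frame["top"]) <= tol and
                abs(q["width"] - frame["width"]) <= tol and
                abs(q["height"] - frame["height"]) <= tol):
            n = q["conf"]
            for k in ("left", "top", "width", "height"):
                q[k] = (q[k] * n + frame[k]) / (n + 1)
            q["conf"] = n + 1      # merged count acts as the confidence
            return
    frame = dict(frame)            # copy so the caller's dict is untouched
    frame["conf"] = 1
    queue.append(frame)
```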
Preferably, the step in S705 of merging the position information of all overlapping candidate frames in the candidate queue to determine the object positions on the input image specifically comprises:
when one candidate frame in the candidate queue is contained in another, deleting the candidate frame with the smaller confidence, or, when the confidences are equal, deleting the candidate frame with the smaller area;
taking the positions of the candidate frames remaining in the candidate queue after the merging and deletion processing as the object positions on the input image.
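The containment-based deletion rule can be sketched as follows, with frames represented as (left, top, width, height, confidence) tuples (a representation chosen only for the sketch):

```python
def prune_contained(frames):
    """When one frame contains another, drop the lower-confidence one;
    on a confidence tie, drop the smaller-area one.  Frames that do
    not contain each other are all kept."""
    frames = list(frames)

    def area(f):
        return f[2] * f[3]

    def contains(a, b):
        """True if rectangle a fully contains rectangle b."""
        return (a[0] <= b[0] and a[1] <= b[1] and
                a[0] + a[2] >= b[0] + b[2] and
                a[1] + a[3] >= b[1] + b[3])

    drop = set()
    for i, a in enumerate(frames):
        for j, b in enumerate(frames):
            if i == j or i in drop or j in drop:
                continue
            if contains(a, b) or contains(b, a):
                ca, cb = a[4], b[4]
                if ca < cb:
                    drop.add(i)
                elif cb < ca:
                    drop.add(j)
                else:
                    drop.add(i if area(a) < area(b) else j)
    return [f for k, f in enumerate(frames) if k not in drop]
```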
The image detection apparatus provided by the embodiment of the invention comprises:
an integral image unit, configured to compute the integral image and the squared integral image of the input image;
a verification unit, configured to, when the number of computed rows of the integral image is greater than or equal to the height of the object detector model, verify, according to the integral image and the squared integral image, the candidate frame positions lying within the already-computed range of the integral image using the object detector; and
a determining unit, configured to determine the object position on the input image according to the candidate frame positions that pass verification.
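The interaction of these units forms a pipeline: verification of a row of candidate windows can begin as soon as enough integral-image rows exist, instead of waiting for the whole image. A simplified sketch under stated assumptions — `verify_row` is a hypothetical stand-in for running the cascade over all windows with a given top edge:

```python
def detect_pipelined(img_rows, model_height, verify_row):
    """Consume image rows one at a time; once the number of computed
    integral-image rows reaches the detector model height, verify every
    candidate-window row fully covered by the computed range.
    verify_row(top) returns the accepted frames whose top edge is `top`."""
    accepted = []
    computed_rows = 0
    next_top = 0
    for _row in img_rows:
        computed_rows += 1  # integral image extended by one row (in parallel in the patent)
        # Verify all window rows that are now fully inside the computed range.
        while computed_rows >= next_top + model_height:
            accepted.extend(verify_row(next_top))
            next_top += 1
    return accepted
```

This mirrors the verification unit's trigger condition: detection work starts at row `model_height` rather than at the end of the image, which is the source of the claimed speed-up.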
In the field of detecting objects in images, face detection is a sub-field of object detection; other applications such as vehicle detection and pedestrian detection are similar to face detection technology, and all belong to the two-class classification techniques of pattern recognition. Therefore, the scheme proposed by the embodiments of the invention is applicable to detecting face regions in an image, and may also, as required, be applied to detecting the regions occupied by other types of objects in an image: for example, vehicle regions, or the regions where individual persons or animals are located.
In summary, from the perspective of improving image detection speed, the present invention implements parallel processing in several respects, including the computation of the integral image and the squared integral image and the verification of candidate frames, thereby achieving the goal of improved image detection speed.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (8)
1. An image detection method, characterized in that the method comprises:
during the parallel computation of the integral image and the squared integral image of an input image, monitoring the number of rows of the integral image computed so far; when this number of rows is greater than or equal to the height of the object detector model,
performing operations on the computed integral image and squared integral image simultaneously to obtain a normalization parameter;
for each weak classifier of each layer classifier of the object detector, computing the difference of the brightness sums of two rectangular areas and, using the normalization parameter, obtaining the micro-structure feature value corresponding to each weak classifier of each layer classifier;
comparing each micro-structure feature value with a preset threshold to judge whether the micro-structure feature value corresponding to each weak classifier of each layer classifier is valid;
computing a weighted sum of the valid micro-structure feature values of each layer classifier, and comparing the summed result with a preset threshold to judge whether the candidate frame passes the verification of that layer classifier; wherein, for each layer classifier, when one layer classifier finishes the judgment processing of one candidate frame, it proceeds to judge the next candidate frame;
determining the candidate frames that pass the verification of all the layer classifiers as candidate frame positions that pass verification; and
determining the object position on the input image according to the candidate frame positions that pass verification.
2. The method according to claim 1, characterized in that the integral image and the squared integral image are computed simultaneously.
3. The method according to claim 1, characterized in that the micro-structure feature values corresponding to the weak classifiers in each layer classifier are computed simultaneously.
4. The method according to claim 1 or 2, characterized in that the brightness sums of the two rectangular areas are computed simultaneously.
5. The method according to claim 1, characterized in that the step of determining the object position on the input image according to the candidate frame positions that pass verification comprises:
adding the candidate frame positions that pass verification to a preset candidate queue; and
determining the object position on the input image from the candidate queue.
6. The method according to claim 5, characterized in that the step of adding the candidate frame to the candidate queue comprises:
according to the size and position of the candidate frame to be added, and the sizes and positions of the candidate frames already in the candidate queue, judging whether the candidate frame to be added is close to an already-added candidate frame; if so, merging the close candidate frames and taking the number of merged candidate frames as the confidence of the merged candidate frame; otherwise, adding the candidate frame to be added to the candidate queue.
7. The method according to claim 6, characterized in that the step of determining the object position on the input image from the candidate queue comprises:
when a candidate frame in the candidate queue is contained within another candidate frame, deleting the candidate frame with the lower confidence, and when the confidences are equal, deleting the candidate frame with the smaller area; and
determining the positions of the candidate frames remaining in the candidate queue after the merging and deletion processing as the object positions on the input image.
8. An image detection apparatus, characterized in that the apparatus comprises:
an integral image unit, configured to compute the integral image and the squared integral image of an input image in parallel;
a verification unit, configured to monitor the number of rows of the integral image computed so far by the integral image unit; when this number of rows is greater than or equal to the height of the object detector model, perform operations on the integral image and the squared integral image simultaneously to obtain a normalization parameter; for each weak classifier of each layer classifier of the object detector, compute the difference of the brightness sums of two rectangular areas and, using the normalization parameter, obtain the micro-structure feature value corresponding to each weak classifier of each layer classifier; compare each micro-structure feature value with a preset threshold to judge whether the micro-structure feature value corresponding to each weak classifier of each layer classifier is valid; compute a weighted sum of the valid micro-structure feature values of each layer classifier, and compare the summed result with a preset threshold to judge whether the candidate frame passes the verification of that layer classifier; wherein, for each layer classifier, when one layer classifier finishes the judgment processing of one candidate frame, it proceeds to judge the next candidate frame; and determine the candidate frames that pass the verification of all the layer classifiers as candidate frame positions that pass verification; and
a determining unit, configured to determine the object position on the input image according to the candidate frame positions that pass verification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2007101797868A CN100561501C (en) | 2007-12-18 | 2007-12-18 | A kind of image detecting method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2007101797868A CN100561501C (en) | 2007-12-18 | 2007-12-18 | A kind of image detecting method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101183428A CN101183428A (en) | 2008-05-21 |
CN100561501C true CN100561501C (en) | 2009-11-18 |
Family
ID=39448697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2007101797868A Expired - Fee Related CN100561501C (en) | 2007-12-18 | 2007-12-18 | A kind of image detecting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100561501C (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5388835B2 (en) * | 2009-12-24 | 2014-01-15 | キヤノン株式会社 | Information processing apparatus and information processing method |
CN102147866B (en) * | 2011-04-20 | 2012-11-28 | 上海交通大学 | Target identification method based on training Adaboost and support vector machine |
CN103390151B (en) * | 2012-05-08 | 2016-09-07 | 展讯通信(上海)有限公司 | Method for detecting human face and device |
CN104091171B (en) * | 2014-07-04 | 2017-06-20 | 华南理工大学 | Vehicle-mounted far infrared pedestrian detecting system and method based on local feature |
CN106326817B (en) * | 2015-07-03 | 2021-08-03 | 佳能株式会社 | Method and apparatus for detecting object from image |
CN106874845B (en) * | 2016-12-30 | 2021-03-26 | 东软集团股份有限公司 | Image recognition method and device |
CN107273923B (en) * | 2017-06-02 | 2020-09-29 | 浙江理工大学 | Construction method of textile fabric friction sound wave discriminator |
CN111583160B (en) * | 2020-06-03 | 2023-04-18 | 浙江大华技术股份有限公司 | Method and device for evaluating noise of video picture |
CN111915640B (en) * | 2020-08-11 | 2023-06-13 | 浙江大华技术股份有限公司 | Method and device for determining candidate frame scale, storage medium and electronic device |
CN112560939B (en) * | 2020-12-11 | 2023-05-23 | 上海哔哩哔哩科技有限公司 | Model verification method and device and computer equipment |
- 2007-12-18: CN application CNB2007101797868A filed; granted as CN100561501C (status: not active, Expired - Fee Related)
Non-Patent Citations (6)
Title |
---|
Robust Real-Time Face Detection. Paul Viola, Michael J. Jones. International Journal of Computer Vision, Vol. 57, No. 2, 2004 |
Real-valued AdaBoost face detection algorithm with dynamic weight pre-partitioning. Wu Yan, Xiang Enning. Computer Engineering, Vol. 33, No. 3, 2007 |
Fast face localization algorithm based on AdaBoost and genetic algorithm. Tang Xusheng, Ou Zongying, Su Tieming, Hua Shungang. Journal of South China University of Technology (Natural Science Edition), Vol. 35, No. 1, 2007 |
Also Published As
Publication number | Publication date |
---|---|
CN101183428A (en) | 2008-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100561501C (en) | A kind of image detecting method and device | |
CN101178770B (en) | Image detection method and apparatus | |
CN108830188B (en) | Vehicle detection method based on deep learning | |
CN100561505C (en) | A kind of image detecting method and device | |
CN104680124B (en) | Detect the image processor and its method of pedestrian | |
CN102663413B (en) | Multi-gesture and cross-age oriented face image authentication method | |
CN106358444B (en) | Method and system for face verification | |
CN101350063B (en) | Method and apparatus for locating human face characteristic point | |
Gan et al. | Pedestrian detection based on HOG-LBP feature | |
US20120093420A1 (en) | Method and device for classifying image | |
CN103049733B (en) | Method for detecting human face and human-face detection equipment | |
CN106682586A (en) | Method for real-time lane line detection based on vision under complex lighting conditions | |
CN102147851B (en) | Device and method for judging specific object in multi-angles | |
US7840037B2 (en) | Adaptive scanning for performance enhancement in image detection systems | |
CN106874894A (en) | A kind of human body target detection method based on the full convolutional neural networks in region | |
CN101655914B (en) | Training device, training method and detection method | |
CN101576953A (en) | Classification method and device of human body posture | |
CN104063719A (en) | Method and device for pedestrian detection based on depth convolutional network | |
Monteiro et al. | Vision-based pedestrian detection using haar-like features | |
CN103390164A (en) | Object detection method based on depth image and implementing device thereof | |
CN102855500A (en) | Haar and HoG characteristic based preceding car detection method | |
CN106940791B (en) | A kind of pedestrian detection method based on low-dimensional histograms of oriented gradients | |
CN103455820A (en) | Method and system for detecting and tracking vehicle based on machine vision technology | |
CN106600955A (en) | Method and apparatus for detecting traffic state and electronic equipment | |
CN103020614A (en) | Human movement identification method based on spatio-temporal interest point detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2009-11-18; Termination date: 2011-12-18 |