CN103020260A - Video query method - Google Patents

Video query method

Info

Publication number
CN103020260A
CN103020260A (application CN2012105671361A / CN201210567136A)
Authority
CN
China
Prior art keywords
moving target
video
target
moving
query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012105671361A
Other languages
Chinese (zh)
Inventor
李卫军
阮晓虎
董肖莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Semiconductors of CAS
Original Assignee
Institute of Semiconductors of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Semiconductors of CAS filed Critical Institute of Semiconductors of CAS
Priority to CN2012105671361A priority Critical patent/CN103020260A/en
Publication of CN103020260A publication Critical patent/CN103020260A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a video query method comprising the steps of: detecting moving targets appearing in a video; building an index list of the detected moving targets and saving the video segment containing each target; and, when an input operation selecting a moving target from the index list is detected, retrieving the video segment of the selected target and playing it back. With a moving-target index as its basis, the method lets video segments of interest be queried quickly and efficiently, improving the utilization of video information.

Description

Video query method
Technical field
The present invention relates to the field of video surveillance, and in particular to a video query method.
Background technology
Surveillance equipment is now ubiquitous and its coverage keeps widening, so digital video has become an important information carrier in everyday life. According to estimates by the international market-research firm IMS, China had installed about 10 million surveillance cameras by 2010, and the number has kept growing rapidly. As multimedia and digital-video technology spreads, the problem we face is how to quickly find content of interest in massive volumes of video information, and this is exactly what video query addresses.
The most direct method is manual video query: an operator plays back the stored surveillance files and looks for the segments containing the object of interest. This approach has several problems. First, surveillance archives are usually very large, so locating the segment of interest in a huge video database costs enormous amounts of time and manpower for playback; the efficiency is extremely low, which seriously slows down case investigation after a security incident and wastes much of the value of the surveillance video. Second, manual query is easily affected by subjective and objective factors: viewing video for long periods causes fatigue, and if the segment containing the object of interest is short, it is easy to miss it entirely and fail to find the corresponding segment. Moreover, different people, or the same person under different conditions, describe the object of interest differently, so there is no objective, unified standard, which adds further difficulty to the query. Manual video query therefore cannot satisfy practical needs.
Summary of the invention
(1) Technical problem to be solved
To solve one or more of the problems above, the invention provides a video query method.
(2) Technical solution
According to one aspect of the invention, a video query method is provided. The method comprises: detecting the moving targets that appear in a video; building an index of the moving targets and saving the detected target information; and, when an input operation selecting a moving target from the index list is detected, retrieving the video segment of the selected target and playing it back.
(3) Beneficial effects
As can be seen from the technical solution above, the video query method of the invention has the following beneficial effects:
(1) With a video query method built on a moving-target index, video segments of interest can be queried quickly and efficiently, improving the utilization of video information;
(2) A moving-target detection algorithm quickly extracts the moving targets from the video and copes with special cases such as a target being split into pieces or two targets separating; the target index built on this basis allows targets of interest to be located rapidly in the video;
(3) By tracking each moving target in the video and analysing its motion parameters, the method computes and draws the target's bounding rectangle and continuous trajectory, so the user can quickly grasp the target's detailed motion in the video.
Description of drawings
Fig. 1 is a flowchart of a fast video query method provided by the invention;
Fig. 2 shows the bounding rectangle drawn around a moving target during moving-target detection.
Detailed description of the embodiments
To make the purpose, technical solution and advantages of the invention clearer, the invention is described in more detail below with specific embodiments and with reference to the accompanying drawings.
Note that in the drawings and the description, similar or identical parts use the same reference numbers. Implementations not shown or described in the drawings are forms known to persons of ordinary skill in the relevant technical field. In addition, although exemplary values of parameters may be given, a parameter need not equal the corresponding value exactly; it may approximate it within acceptable error margins or design constraints. Direction terms mentioned in the following embodiments, such as "up", "down", "front", "back", "left" and "right", refer only to the orientation in the drawings; they are used to illustrate, not to limit, the invention.
In one exemplary embodiment of the invention, a video query method is provided. As shown in Fig. 1, the method comprises:
Step A: detect the moving targets that appear in the video;
In this step, the frames captured from the surveillance video are first pre-processed to improve the quality of the images to be examined and thus the detection performance; a moving-target detection algorithm then extracts the moving targets from the surveillance video. The detected targets should exclude background changes in the monitored scene, such as leaves rustling in the wind or illumination changes.
A suitable detection algorithm may be chosen as needed, for example a codebook-based (possibly improved) moving-target detection algorithm, or the mixture-of-Gaussians method. The moving-target detection process is described in detail below, taking the codebook-based algorithm as the example.
The Codebook-based detection algorithm builds a background-model codebook and matches each pixel of the image under test against that pixel's codewords in the background codebook, thereby finding the moving targets in the video. Step A may then comprise:
Substep A1: take the first T frames of the surveillance video and, using codebook background modelling with real-time updating, generate a background codebook from each pixel's successive sample values according to colour similarity and brightness range;
The stage that builds the background-model codebook is also called the training stage; the first T frames of the surveillance video (e.g. T = 25 or 50) are used to build it, and moving targets are allowed to appear in these training frames.
Depending on how a pixel's sample values vary, different pixels' codebooks may contain different numbers of codewords. Let $X = \{x_1, x_2, \ldots, x_N\}$ be the sample sequence of one pixel, where each $x_t$ ($t = 1, \ldots, N$) is an RGB vector, and let $C = \{c_1, c_2, \ldots, c_L\}$ be the codebook of this pixel. Each codeword $c_i$ ($i = 1, \ldots, L$) is defined as a two-part structure

$v_i = (\bar R_i, \bar G_i, \bar B_i)$, $\mu_i = \langle \check I_i, \hat I_i, f_i, \lambda_i, p_i, q_i \rangle$

where $\check I_i$ and $\hat I_i$ are the minimum and maximum brightness of the pixels assigned to the codeword; $f$ is the number of times the codeword has occurred; $\lambda$ is the longest interval during training in which the codeword did not recur; and $p$ and $q$ are the times of the codeword's first and most recent match, which may simply be stored as frame numbers.
During codebook construction, the sample $x_t$ at time $t$ is compared with the current codebook; if some codeword $c_m$ ($m$ being the index of the matching codeword) matches it, $c_m$ is taken as the approximate encoding of this sample and is marked as having occurred once more. Several codewords may match; the algorithm decides which match is best according to colour similarity and brightness range. The detailed codebook extraction algorithm is as follows:
Substep A1a: set the codebook of every pixel to empty, L = 0, where L is the number of codewords in the codebook.
Substep A1b: for the sample sequence $X = \{x_1, x_2, \ldots, x_N\}$ of each pixel of the training video, with $x_t = (R_t, G_t, B_t)$, $t = 1, \ldots, N$:

1) If the codebook is empty ($L = 0$), create a codeword:

$L \leftarrow L + 1$, $I = \sqrt{R_t^2 + G_t^2 + B_t^2}$

$v_L = (R_t, G_t, B_t)$, $\mu_L = \langle I, I, 1, t-1, t, t \rangle$

where $R_t$, $G_t$, $B_t$ are the pixel's red, green and blue components, $I$ is the pixel's brightness, $v_L$ is the vector formed by the pixel's colour components at time $t$, and $\mu_L$ is the 6-dimensional vector recording the codeword's bookkeeping information.
2) If the codebook is not empty, find a codeword $c_m$ matching $x_t$ according to the following two conditions:

(a) $\mathrm{colordist}(x_t, v_m) \le \varepsilon_1$

(b) $\mathrm{brightness}(I, \langle \check I_m, \hat I_m \rangle) = \mathrm{true}$

Here $\mathrm{colordist}(x_t, v_m)$ is the colour distortion between pixel $x_t$ and the codeword's colour vector $v_m$, defined as $\mathrm{colordist}(x_t, v_m) = \delta = \sqrt{\|x_t\|^2 - p^2}$, where $p^2 = \|x_t\|^2 \cos^2\theta = \langle x_t, v_m \rangle^2 / \|v_m\|^2$, $\langle x_t, v_m \rangle = \bar R_m R_t + \bar G_m G_t + \bar B_m B_t$, $\|x_t\|^2 = R_t^2 + G_t^2 + B_t^2$ and $\|v_m\|^2 = \bar R_m^2 + \bar G_m^2 + \bar B_m^2$. The global threshold variable $\varepsilon_1$ must be tuned suitably for the specific application.

Condition (b) asks whether the brightness $I$ of pixel $x_t$ lies in the codeword's brightness range $[I_{\mathrm{low}}, I_{\mathrm{hi}}]$, computed from $\langle \check I_m, \hat I_m \rangle$ with $\alpha < 1$ and $\beta > 1$ as defined below, where $\check I$ and $\hat I$ are the codeword's minimum and maximum brightness values.

When both conditions are satisfied, $x_t$ and $c_m$ are very close in colour and the brightness of $x_t$ lies within the acceptable brightness range of $c_m$.
Colour similarity and brightness range are determined as follows. For a pixel $x_t = (R_t, G_t, B_t)$ and a codeword $c_i$ with $v_i = (\bar R_i, \bar G_i, \bar B_i)$:

$\|x_t\|^2 = R_t^2 + G_t^2 + B_t^2$, $\|v_i\|^2 = \bar R_i^2 + \bar G_i^2 + \bar B_i^2$, $\langle x_t, v_i \rangle^2 = (\bar R_i R_t + \bar G_i G_t + \bar B_i B_t)^2$

and the colour similarity is computed as

$\cos^2\theta = \dfrac{\langle x_t, v_i \rangle^2}{\|x_t\|^2 \, \|v_i\|^2}$, $\mathrm{colorsim}(x_t, v_i) = \cos^2\theta$.

Brightness range: the brightness in moving-target detection varies within a range; each codeword is given an acceptable brightness interval $[I_{\mathrm{low}}, I_{\mathrm{hi}}]$:

$I_{\mathrm{low}} = \alpha \hat I$, $I_{\mathrm{hi}} = \min\{\beta \hat I, \check I / \alpha\}$, with $\alpha < 1$, $\beta > 1$.

Typically $0.4 < \alpha < 0.7$ and $1.1 < \beta < 1.5$, which keeps the range stable while the codebook is updated. The brightness function is then defined as

$\mathrm{brightness}(I, \langle \check I, \hat I \rangle) = \mathrm{true}$ if $I_{\mathrm{low}} \le I \le I_{\mathrm{hi}}$, $\mathrm{false}$ otherwise.
3) If no codeword in the codebook satisfies the conditions above, create a new codeword for this pixel using 1) and 2);
4) If a codeword $c_m$ does satisfy the conditions, with content

$v_m = (\bar R_m, \bar G_m, \bar B_m)$, $\mu_m = \langle \check I_m, \hat I_m, f_m, \lambda_m, p_m, q_m \rangle$,

update the codeword as follows:

$v_m = \left( \dfrac{f_m \bar R_m + R_t}{f_m + 1}, \dfrac{f_m \bar G_m + G_t}{f_m + 1}, \dfrac{f_m \bar B_m + B_t}{f_m + 1} \right)$

$\mu_m = \langle \min(I, \check I_m), \max(I, \hat I_m), f_m + 1, \max(\lambda_m, t - q_m), p_m, t \rangle$
Substep A1c: after training ends, compute for each codeword of the pixel the maximum interval in which it did not recur, i.e. for $c_i$, $i = 1, \ldots, L$:

$\lambda_i = \max(\lambda_i, \, N - q_i + p_i - 1)$
The time criterion $\lambda$ is introduced because the codebook obtained during training contains redundancy: some codewords may represent foreground moving targets or noise. The filter $M = \{c_k \mid c_k \in C, \lambda_k \le T_M\}$ separates such codewords out in a probabilistic sense, which is what allows moving targets to be present during the initial training process.
Substep A1d: use $\lambda$ to eliminate the redundant codewords and obtain the initial codebook $M$ representing the true background ($k$ being the codeword index):

$M = \{c_k \mid c_k \in C, \lambda_k \le T_M\}$

The threshold $T_M$ is usually set to half the number of training frames, i.e. $N/2$, meaning that every codeword representing background must occur in at least $N/2$ of the frames.
The background-modelling process ends here.
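The training stage above (substeps A1a-A1d) can be sketched for a single pixel as follows. This is a minimal illustrative sketch, not the patent's implementation: the threshold value for $\varepsilon_1$ and the $\alpha$, $\beta$ brightness bounds are assumed, and codewords are stored as plain dictionaries.

```python
import math

EPS1 = 10.0          # colour-distortion threshold epsilon_1 (assumed value)
ALPHA, BETA = 0.5, 1.2   # brightness-bound factors alpha < 1, beta > 1 (assumed)

def colordist(x, v):
    # distance of sample x from the line through the origin and codeword colour v
    xx = sum(c * c for c in x)
    vv = sum(c * c for c in v)
    dot = sum(a * b for a, b in zip(x, v))
    p2 = (dot * dot) / vv if vv else 0.0
    return math.sqrt(max(xx - p2, 0.0))

def brightness_ok(i, i_min, i_max):
    lo = ALPHA * i_max
    hi = min(BETA * i_max, i_min / ALPHA)
    return lo <= i <= hi

def train_codebook(samples):
    """samples: list of (R, G, B) tuples for one pixel over the training frames."""
    book = []  # each codeword: dict with v, Imin, Imax, f, lam, p, q
    for t, x in enumerate(samples, start=1):
        i = math.sqrt(sum(c * c for c in x))
        for cw in book:
            if colordist(x, cw["v"]) <= EPS1 and brightness_ok(i, cw["Imin"], cw["Imax"]):
                f = cw["f"]                      # matched: update codeword (step 4)
                cw["v"] = tuple((f * vc + xc) / (f + 1) for vc, xc in zip(cw["v"], x))
                cw["Imin"] = min(cw["Imin"], i)
                cw["Imax"] = max(cw["Imax"], i)
                cw["lam"] = max(cw["lam"], t - cw["q"])
                cw["f"], cw["q"] = f + 1, t
                break
        else:                                     # no match: create codeword (step 1)
            book.append(dict(v=x, Imin=i, Imax=i, f=1, lam=t - 1, p=t, q=t))
    n = len(samples)
    for cw in book:                               # substep A1c: wrap-around interval
        cw["lam"] = max(cw["lam"], n - cw["q"] + cw["p"] - 1)
    # substep A1d: keep codewords recurring at least every T_M = n/2 frames
    return [cw for cw in book if cw["lam"] <= n / 2]
```

A steady background colour survives the filter while a brief foreground intrusion does not, which is the behaviour the time criterion $\lambda$ is meant to produce.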
At the same time, to suit a video-surveillance system, the codebook must also be updated in real time during moving-target detection, to absorb background changes caused, for example, by illumination (lights switched on or off, passing clouds) or by moving targets themselves (a car that parks or drives off). To achieve this adaptive, real-time codebook updating, step A further comprises:
Substep A2: for the frames after the first T frames of the surveillance video, subtract the background model and decide whether each pixel's sample value matches its codebook: if a newly input pixel value matches the background codebook, it is classified as background, otherwise as a moving target. This detection process is very fast.
This substep is divided into:
Substep A2a: for a new input pixel $x = (R, G, B)$ with corresponding codebook $C$, compute the brightness $I = \sqrt{R^2 + G^2 + B^2}$, define the Boolean variable Match = 0, and assign the threshold variable $\varepsilon_2$;
Substep A2b: find a codeword $c_m$ in its codebook $C$ matching $x$ according to the two conditions below; if one is found, set Match = 1:

$\mathrm{colordist}(x, v_m) \le \varepsilon_2$

$\mathrm{brightness}(I, \langle \check I_m, \hat I_m \rangle) = \mathrm{true}$
where the parameters have the same meaning as the corresponding parameters in substep A1b of substep A1.
Substep A2c: classify the foreground moving-target pixels:

$\mathrm{BGS}(x) = \mathrm{foreground}$ if Match = 0, $\mathrm{background}$ if Match = 1.
That is, a pixel is classified as background when both of the following hold: (1) its colour distortion from some codeword is below the detection threshold; and (2) its brightness lies within that codeword's brightness range. Otherwise, no codeword matches it and it is considered a foreground pixel. The detection threshold $\varepsilon_2$ must be tuned appropriately for the specific application.
The algorithm above not only detects moving targets but also updates the background codebook in real time. The real-time codebook update works as follows:
During substep A2, let $M$ be the background codebook obtained in the training stage and $M_D$ the refined cache codebook. If a pixel's sample value matches no codeword in $M$, a new codeword is created for it in $M_D$. The new codewords are filtered by a time threshold $T_H$: codewords whose non-recurrence period exceeds $T_H$ are deleted, and those below it are kept. Codewords in $M_D$ that recur more than $T_{\mathrm{add}}$ times are then promoted into the background model $M$, and codewords in $M_D$ that have not recurred within $T_{\mathrm{del}}$ are deleted from $M_D$. Here $T_H$ is the time threshold (which may be expressed as a frame count) for filtering new codewords; $T_{\mathrm{add}}$ is the occurrence-count threshold for moving a codeword from $M_D$ into the background model $M$: if a codeword in $M_D$ occurs more than $T_{\mathrm{add}}$ times, it is added to $M$; and $T_{\mathrm{del}}$ is the threshold on the interval between two occurrences for deleting a codeword from $M_D$: if that interval exceeds $T_{\mathrm{del}}$, the codeword is removed from $M_D$.
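The detection stage with the layered $M$ / $M_D$ update can be sketched as below. This is an illustrative sketch under assumed parameter values ($\varepsilon_2$, $T_{\mathrm{add}}$, $T_{\mathrm{del}}$, $\alpha$, $\beta$); the dictionary layout of codewords is invented for the example, not taken from the patent.

```python
import math

ALPHA, BETA = 0.5, 1.2  # brightness-bound factors (assumed values)

def colordist(x, v):
    xx = sum(c * c for c in x)
    vv = sum(c * c for c in v)
    dot = sum(a * b for a, b in zip(x, v))
    p2 = (dot * dot) / vv if vv else 0.0
    return math.sqrt(max(xx - p2, 0.0))

def matches(x, cw, eps):
    i = math.sqrt(sum(c * c for c in x))
    lo = ALPHA * cw["Imax"]
    hi = min(BETA * cw["Imax"], cw["Imin"] / ALPHA)
    return colordist(x, cw["v"]) <= eps and lo <= i <= hi

def detect_and_update(x, M, MD, t, eps2=10.0, t_add=5, t_del=30):
    """Classify pixel x at frame t; M is the background book, MD the cache."""
    for cw in M:
        if matches(x, cw, eps2):
            cw["q"] = t
            return "background"
    for cw in MD:                       # no match in M: record x in the cache
        if matches(x, cw, eps2):
            cw["f"] += 1
            cw["q"] = t
            break
    else:
        i = math.sqrt(sum(c * c for c in x))
        MD.append(dict(v=x, Imin=i, Imax=i, f=1, p=t, q=t))
    for cw in list(MD):                 # promote recurring, drop stale codewords
        if cw["f"] > t_add:
            M.append(cw)
            MD.remove(cw)
        elif t - cw["q"] > t_del:
            MD.remove(cw)
    return "foreground"
```

A colour that keeps reappearing (e.g. a car that has parked) is flagged as foreground only until it recurs more than `t_add` times, after which its codeword is absorbed into the background model.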
The moving-target detection process has now been described in detail, taking the codebook-based algorithm as the example. When the mixture-of-Gaussians method is used for detection instead, step A may comprise:
Substep A1': build the background model by constructing a Gaussian mixture model for each pixel of the first N frames of the video sequence (e.g. N = 25), as follows:
Substep A1'a: represent the time series of each pixel by K Gaussians, denoted $\eta(x, \mu_{t,i}, \Sigma_{t,i})$, $i = 1, 2, \ldots, K$;
Substep A1'b: assign each Gaussian a distinct weight $w_{t,i}$ and a priority $p_i = w_{t,i}\,|\Sigma_{t,i}|^{-1/2}$;
Substep A1'c: arrange the K Gaussians in descending order of priority $p_i$;
Substep A1'd: take as background distributions the first $B$ Gaussians whose cumulative weight reaches a threshold $T$, i.e.

$B = \arg\min_b \left\{ \sum_{i=1}^{b} w_i \ge T \right\}$;
Substep A2': detect the foreground target regions. Each input pixel $X_t$ of each video frame is regarded as a sample from a normal distribution and judged as follows:
Check whether $X_t$ matches the $j$-th Gaussian of the background model, i.e. whether it falls within the acceptance region of $\eta(x, \mu_{t,j}, \Sigma_{t,j})$ (in practice, within a fixed number of standard deviations of its mean); if it matches, the pixel is classified as background, otherwise it is judged to be a target point. Moving targets are detected in this way;
During substep A2', the Gaussian background-model parameters are updated. Updating a mixture-of-Gaussians background model is relatively involved: the parameters of the Gaussians themselves must be updated, and so must each Gaussian's weight, priority and so on.
At the detection stage, if no distribution matches $X_t$, the lowest-priority Gaussian among the current K is removed and a new Gaussian is introduced from $X_t$, given a small weight and a large variance; all Gaussian weights are then renormalized. If the $m$-th Gaussian matches $X_t$, the Gaussian parameters are updated as follows:
$\mu_{t+1} = (1 - \alpha)\,\mu_t + \alpha\, d_t$

$\Sigma_{t+1} = (1 - \alpha)\,\Sigma_t + \alpha\, d_t d_t^{\mathsf T}$

where $d_t$ is the current observation and $\alpha$ is the Gaussian parameter update rate, expressing how fast the Gaussian parameters adapt;
The corresponding Gaussian weights are updated as:

$w_{t+1,i} = w_{t,i}$ if $i = m$, and $w_{t+1,i} = (1 - \beta)\, w_{t,i}$ otherwise,

where $\beta$ is another constant expressing the background update speed, i.e. the weight update rate. Only the weight of the Gaussian matching $X_t$ is maintained, while the weights of all the other distributions are lowered.
After the Gaussian parameters and the weights of each distribution have been updated, the priority and ordering of the distributions are recomputed; at the same time, the selection of background distributions may change the number of background distributions.
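A scalar (grayscale) sketch of one mixture-of-Gaussians detection-and-update step follows, under stated simplifications: a 2.5-standard-deviation match test, and a plain weight threshold standing in for the cumulative-weight background criterion of substep A1'd. All parameter values are assumed, not taken from the patent.

```python
import math

ALPHA = 0.05        # Gaussian parameter update rate (alpha in the text, assumed)
BETA = 0.05         # weight update rate (beta in the text, assumed)
MATCH_SIGMA = 2.5   # match if within 2.5 standard deviations (assumed)

def update_pixel(gaussians, x):
    """One detection+update step for a single grayscale pixel value x.
    gaussians: list of dicts {w, mu, var}. Returns (is_background, gaussians)."""
    matched = None
    for g in gaussians:
        if abs(x - g["mu"]) <= MATCH_SIGMA * math.sqrt(g["var"]):
            matched = g
            break
    if matched is None:
        # no distribution matches: replace the lowest-priority Gaussian
        # (priority = w / sigma) with a new wide, low-weight one
        gaussians.sort(key=lambda g: g["w"] / math.sqrt(g["var"]), reverse=True)
        gaussians[-1] = dict(w=0.05, mu=float(x), var=900.0)
        is_bg = False
    else:
        d = x - matched["mu"]
        matched["mu"] += ALPHA * d                      # mu <- (1-a)*mu + a*x
        matched["var"] = (1 - ALPHA) * matched["var"] + ALPHA * d * d
        for g in gaussians:                             # only the match gains weight
            g["w"] = (g["w"] + BETA * (1 - g["w"])) if g is matched else (1 - BETA) * g["w"]
        is_bg = matched["w"] >= 0.2                     # simplified background test
    total = sum(g["w"] for g in gaussians)
    for g in gaussians:                                 # renormalize the weights
        g["w"] /= total
    return is_bg, gaussians
```

A pixel near a high-weight mode is labelled background; an outlying value displaces the weakest Gaussian and is labelled foreground until its mode accumulates weight.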
Substep A3: because each moving target's continuous trajectory is also analysed during target detection, the pieces of a split target are re-identified as the same moving target according to continuity rules of the motion process, such as object displacement and posture change.
Special cases can occur while detecting moving targets. For example, in the embodiment video, part of a person's clothing is close in colour to the floor, so detection may split the person in two; if this is not handled, the two pieces would be identified as two moving targets, affecting the subsequent video query. Conversely, in cases such as a car pausing during its motion and a person getting out, the overlapping person and car should be recognised as two moving targets to avoid missed detections. After substep A2, this substep therefore uses the continuity of the motion process to decide whether detections belong to the same target, comprising:
Substep A3a: for each detected moving target, construct its bounding rectangle in the video frame;
Substep A3b: set the maximum acceptable inter-frame displacement D, and compute the distances d between the four corners of the current frame's target bounding rectangle and the four corners of the previous frame's target bounding rectangle;
Substep A3c: compare d with D; if d is less than D, the two moving targets in the two frames are judged to be the same moving target.
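Substeps A3a-A3c can be sketched as follows, interpreting "d less than D" as every corresponding corner of the two bounding rectangles moving less than D pixels (one plausible reading; the patent does not fix how the four corner distances are aggregated):

```python
import math

def corners(box):
    """box: (left, top, right, bottom) -> the four corner points."""
    left, top, right, bottom = box
    return [(left, top), (right, top), (left, bottom), (right, bottom)]

def same_target(box_prev, box_cur, d_max):
    """True if every corresponding corner moved less than d_max pixels."""
    return max(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(corners(box_prev), corners(box_cur))) < d_max
```

Two detections of a split target in consecutive frames have nearly coincident rectangles and pass the test; a genuinely different target appearing elsewhere fails it.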
Step A3 above describes the specific practice of identifying the same target from the inter-frame displacement relation; when the inter-frame displacement relation cannot decide, the SIFT features of the targets are used to distinguish different targets, as follows:
SIFT is an image feature-extraction algorithm that yields features robust to changes in scale, rotation and brightness, and it performs well for video-object recognition and tracking. SIFT-based identification of a target region can proceed as follows:
Substep A3a': crop out the detected target region and extract its SIFT features;
The SIFT features of an image can be obtained by the following steps:
A3a'1: convolve the image with Gaussian functions of different scale factors to obtain the image's Gaussian scale space;
A3a'2: difference adjacent levels of the Gaussian scale space to obtain the difference-of-Gaussian scale space;
A3a'3: detect the maxima and minima in the difference-of-Gaussian scale space to determine the keypoints;
A3a'4: divide the 16×16 window centred on each keypoint into 4×4 blocks, compute the gradient direction of every pixel in each block, and accumulate a gradient-orientation histogram with 8 direction bins per block; this yields a 4×4×8-dimensional feature vector around the keypoint, which, after weighting with a 16×16 Gaussian window, forms the final 128-dimensional feature vector.
Substep A3b': match the current target's SIFT features against the features of each target in the feature database. Concretely, for every keypoint of the current target, find the two keypoints of the database target with the smallest Euclidean distances; if the ratio of the nearest distance to the second-nearest distance is below a threshold, accept the pair as a match;
Substep A3c': identify the current target as the database target with the largest number of matched keypoints; if the current target matches no target in the database, treat it as a newly appeared target and add its features to the feature database.
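The ratio test of substep A3b' and the voting of substep A3c' can be sketched as below. Real SIFT descriptors are 128-dimensional; here they are short float lists purely for illustration, and the ratio threshold 0.8 is an assumed value.

```python
import math

def ratio_match(desc, db, ratio=0.8):
    """Index of the database descriptor matching `desc`, or None if ambiguous."""
    dists = sorted((math.dist(desc, d), i) for i, d in enumerate(db))
    if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
        return dists[0][1]
    return None

def identify(target_descs, db_targets, ratio=0.8):
    """Vote each descriptor into the pooled database; the target collecting
    the most accepted matches wins (None if nothing matches)."""
    pool = [(ti, d) for ti, descs in enumerate(db_targets) for d in descs]
    votes = {}
    for desc in target_descs:
        m = ratio_match(desc, [d for _, d in pool], ratio)
        if m is not None:
            ti = pool[m][0]
            votes[ti] = votes.get(ti, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

When the nearest and second-nearest database descriptors are equally close, the ratio test rejects the match, which is what makes it robust against ambiguous keypoints.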
Preferably, in step A, for each frame in which a detected moving target appears, a bounding rectangle containing the target is drawn and the image inside it is saved. That is, in each video frame, the ordinates of the target's topmost and bottommost pixels give top and bottom, and the abscissas of its leftmost and rightmost pixels give left and right; the two diagonal corners of the bounding rectangle are then the upper-left corner (left, top) and the lower-right corner (right, bottom). This pair of corners determines the target's bounding rectangle and hence the target's track, as shown in Fig. 2.
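Deriving the (left, top)-(right, bottom) rectangle from a binary foreground mask, as described above, might look like this minimal sketch:

```python
def bounding_rect(mask):
    """mask: 2-D list of 0/1 foreground flags.
    Returns (left, top, right, bottom) or None if the mask is empty."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not ys:
        return None
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return (min(xs), min(ys), max(xs), max(ys))
```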
Step B: for each moving target detected in step A, analyse and save its continuous trajectory, and compute and record the start and end times of the video segment spanning its appearance-to-disappearance process;
In this step, the start and end times of a moving target's appearance in the video are derived from the continuity of the target's motion in the surveillance video: the method computes and records the interval during which the target is present in the monitored field of view, i.e. from its appearance in the field of view until it disappears. If a moving target never disappears after appearing, its presence is taken to continue until the end of the surveillance video.
The trajectory-analysis method adopted in this step computes the centroid of each detected moving target and takes the motion of the centroid as the target's trajectory. Using the continuity rules present in natural motion (object displacement, deformation, posture changes), it judges which detections in adjacent consecutive frames belong to the same moving target, computes the centroid coordinates, draws the trajectory, and overlays the track on the original surveillance video. When, for example, a target is partially occluded and deformed so that its centroid would shift greatly, the centroid is suitably corrected using the continuity of the object's motion, so that the target's trajectory does not change too abruptly.
Based on the continuity of the detected target's motion, and because the person in the embodiment video stays in the monitored field of view from his appearance until the end of playback, the start and end times saved for this target are the time of his first appearance and the end time of the surveillance video.
Step B above draws the continuous trajectory of each detected moving target; Fig. 2 shows a detection result for a target.
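A sketch of the centroid-trajectory analysis of step B follows, with the occlusion correction reduced to clamping implausible centroid jumps: holding the previous centroid is one simple correction consistent with the continuity rule described above, and the jump threshold is an assumed parameter.

```python
import math

def centroid(mask):
    """Centroid (x, y) of a binary foreground mask (2-D list of 0/1)."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def smooth_track(centroids, max_jump=15.0):
    """Suppress occlusion-like jumps: if a centroid leaps farther than
    max_jump from the previous track point, keep the previous point instead."""
    track = [centroids[0]]
    for cx, cy in centroids[1:]:
        px, py = track[-1]
        if math.hypot(cx - px, cy - py) > max_jump:
            track.append((px, py))
        else:
            track.append((cx, cy))
    return track
```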
Step C: build the index of the moving targets in a preset order and save the detected moving-target information, as in Table 1;
Table 1 Moving-target index
[Table 1 is reproduced as an image in the original publication.]
In this step, the "preset order" may be the chronological order in which the moving targets appear, the class the moving target belongs to, or the size of the moving target, but is not limited to these three orderings.
The moving-target index also contains moving-target information. Besides the start-stop information of the target's presence in the surveillance video (start/end times or start/end frame numbers), this "moving-target information" may include an image of the target, a textual description, time information, or any combination of the above, but is not limited to these.
In this step, the moving-target index may be shown on the display device as a partial list or a full list, according to the user's needs. For example, if the user wants to query the moving targets appearing within a certain period, only the index entries within that specified period need be displayed.
In this step, the moving targets are stored in order of appearance; since the targets' image information is stored, when the index is built the stored target images are offered to the user as a list in storage order.
The detected moving target in this embodiment is a person who first moves away from the camera and then towards it, is partially occluded during the motion, and is split during detection. Therefore, to provide the user with one image that comprehensively characterises the whole target when the index is built, the frame is chosen in which the target moves frontally and is closest to the lens, and the image inside the bounding rectangle of that frame is saved. Most of that image consists of target information, which guarantees good visibility of the moving target and avoids the problem of the selected image being too small or too blurred for the naked eye to recognise the target.
After all moving targets in the surveillance video have been detected, the saved images of all the targets are indexed and offered to the user in list form, and the user selects the target object of interest from the image list to perform the video query.
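The moving-target index of step C, ordered by appearance time, can be sketched as a simple list of records; the field names here are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class TargetRecord:
    target_id: int
    start_frame: int      # first frame the target appears in
    end_frame: int        # last frame before it disappears (or video end)
    thumbnail: str = ""   # path to the saved bounding-rectangle image

def build_index(records):
    """Order records by appearance time, one of the 'preset orders' above."""
    return sorted(records, key=lambda r: r.start_frame)
```

Sorting by class or by target size would simply substitute a different key function, matching the alternative orderings the text mentions.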
Step D: when an input operation selecting a moving target from the index list is detected, retrieve the video segment of this target according to the moving-target index and play the segment back.
In this step, playback of a video segment means playing the clip spanning the whole process from the moment the specified moving target first appears in the surveillance video until it disappears; or, since the target may well remain in the monitored field of view after appearing, from its first appearance until the end of the surveillance video.
In this step, quickly retrieving the video segment that contains a given target means that the user selects a specific target from the moving-target index, and the video segment covering the whole process from that target's appearance to its disappearance is located according to the selection.
At query time, once the user has selected a moving target from the target-image list, the saved start and stop times of that target's segment (from appearance to disappearance) are used to locate, in the original surveillance video, the segment that contains the selected target, which is then played.
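The retrieval step can be sketched as a lookup of the saved start/stop frame numbers followed by a slice of the original video. This is a minimal sketch under the assumption that frames are addressable by index and that segment bounds are stored per target ID:

```python
def retrieve_segment(segments, target_id, frames):
    """Locate the sub-sequence of `frames` containing the selected target,
    using the saved (start, stop) frame numbers for that target."""
    start, stop = segments[target_id]
    # The slice includes both the appearance and the disappearance frame.
    return frames[start : stop + 1]

# Toy example: frames are just numbered placeholders.
frames = list(range(10))
segments = {7: (2, 5)}  # target 7 appears at frame 2, disappears at frame 5
print(retrieve_segment(segments, 7, frames))  # [2, 3, 4, 5]
```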
The video query method of the present invention overcomes the shortcomings of manual video query, which requires a great deal of time and manpower and lacks a unified standard description of targets of interest. The method is simple, fast, and robust, and is applicable to video queries involving complex backgrounds and complex moving-target conditions.
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. A video query method, characterized by comprising:
detecting moving targets appearing in a video;
establishing an index of the moving targets, and saving the video segment content containing each detected moving target; and
upon detecting an input operation in which a user selects a moving target from the index list, retrieving the video segment content of the selected moving target and playing back that video segment.
2. The video query method according to claim 1, characterized in that, in the step of detecting moving targets appearing in the video, for each detected moving target:
drawing the bounding rectangle of the moving target in each of the previous and current video frames;
setting a maximum acceptable inter-frame displacement D for the target, and calculating the distances d between the four corners of the current frame's target bounding rectangle and the four corners of the previous frame's target bounding rectangle; and
comparing d with D; if d is less than D, judging the two moving targets in these two frames to be the same moving target.
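The association test of claim 2 can be sketched as follows. This is an illustrative sketch, not the patented implementation: rectangles are assumed to be (x, y, w, h) tuples, and treating d as the maximum of the four corner distances is an assumption, since the claim only states that d is compared with D:

```python
import math

def same_target(rect_prev, rect_curr, D):
    """Return True when the two detections are judged to be the same
    moving target: every corner of the bounding rectangle has moved by
    less than the acceptable inter-frame displacement D."""
    def corners(r):
        x, y, w, h = r
        return [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    d = max(math.dist(p, q)
            for p, q in zip(corners(rect_prev), corners(rect_curr)))
    return d < D

# A small translation (~3.2 px) is accepted; a 20 px jump is rejected.
print(same_target((10, 10, 40, 80), (13, 11, 40, 80), D=5.0))  # True
print(same_target((10, 10, 40, 80), (30, 10, 40, 80), D=5.0))  # False
```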
3. The video query method according to claim 1, characterized in that, in the step of detecting moving targets appearing in the video, for each detected moving target:
identifying the same moving target across frames by means of SIFT features of the target region.
4. The video query method according to claim 1, characterized in that, in the step of saving the video segment content of the detected moving targets, for each detected moving target:
for each frame of the video image, drawing the target bounding rectangle containing the moving target and saving only the image inside that rectangle.
5. The video query method according to claim 1, characterized in that:
after the step of detecting moving targets appearing in the video, the method further comprises: for each detected moving target, calculating and recording the start and stop times of the video segment from the target's appearance to its disappearance;
the step of establishing the index of the moving targets comprises saving, in the index, the start and stop times of each moving target's video segment from appearance to disappearance.
6. The video query method according to claim 5, characterized in that:
the step of establishing the index of the moving targets further comprises: for each detected moving target, recording and saving the continuous motion trajectory of the moving target;
the step of playing back the video of the selected moving target further comprises: superimposing the continuous motion trajectory of the moving target on the video picture.
7. The video query method according to any one of claims 1 to 6, characterized in that the moving targets appearing in the video are detected by a codebook-based moving target detection method or by a Gaussian mixture model method.
8. The video query method according to claim 7, characterized in that the step of detecting moving targets in the video by the codebook-based moving target detection method comprises:
taking the first T frames of the surveillance video and generating a background codebook from the successive sample values of each pixel according to their color similarity and brightness range; and
for the frames after the first T frames, subtracting them from the background model and judging whether each pixel's sample value matches its codebook; if a newly input pixel value matches the background codebook, judging the pixel to be background, otherwise judging it to be a moving target.
9. The video query method according to claim 7, characterized in that, after the step of judging whether a pixel's sample value matches its codebook, the method further comprises:
if the sample value of a pixel does not match the existing codebook M, creating a new codeword for it in a cache codebook M_D, the new codeword being filtered by a time-limit criterion (given by formula image FDA00002637960900021 of the original application, not reproduced here); and
promoting codewords in M_D whose recurrence count exceeds T_add into the background model M, and deleting from M_D codewords that have not been matched for longer than T_del.
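The two-layer codebook update of claims 8 and 9 can be sketched for a single pixel as follows. This is a simplified sketch, not the patented implementation: a codeword is reduced to a dict with assumed fields, and matching uses plain grayscale distance, whereas the claimed method matches on color similarity and a brightness range; the claimed time-limit filter on new codewords is also omitted:

```python
def codebook_classify(sample, M, M_D, frame, eps=10, T_add=3, T_del=50):
    """Classify one pixel sample as background/foreground and update the
    background codebook M and the cache codebook M_D in place."""
    for cw in M:                       # try the background codebook first
        if abs(sample - cw['value']) <= eps:
            cw['last_access'] = frame
            return 'background'
    for cw in M_D:                     # then try the cache codebook M_D
        if abs(sample - cw['value']) <= eps:
            cw['count'] += 1
            cw['last_access'] = frame
            if cw['count'] > T_add:    # recurred often enough: promote to M
                M.append(cw)
                M_D.remove(cw)
            break
    else:                              # unmatched: create a new cache codeword
        M_D.append({'value': sample, 'count': 1, 'last_access': frame})
    # Delete cache codewords not matched for longer than T_del frames.
    M_D[:] = [c for c in M_D if frame - c['last_access'] <= T_del]
    return 'foreground'

# A pixel that keeps the same value is foreground at first, then its
# codeword is promoted into M and the pixel is classified as background.
M, M_D = [], []
results = [codebook_classify(100, M, M_D, f) for f in range(5)]
print(results[-1], len(M))  # background 1
```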
10. The video query method according to any one of claims 1 to 6, characterized in that the detected moving target information is listed in a preset order, wherein the preset order is a chronological order, an order by moving target category, or an order by moving target size.
CN2012105671361A 2012-12-24 2012-12-24 Video query method Pending CN103020260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012105671361A CN103020260A (en) 2012-12-24 2012-12-24 Video query method


Publications (1)

Publication Number Publication Date
CN103020260A true CN103020260A (en) 2013-04-03

Family

ID=47968863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012105671361A Pending CN103020260A (en) 2012-12-24 2012-12-24 Video query method

Country Status (1)

Country Link
CN (1) CN103020260A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6424370B1 (en) * 1999-10-08 2002-07-23 Texas Instruments Incorporated Motion based event detection system and method
US20050185823A1 (en) * 2004-02-24 2005-08-25 International Business Machines Corporation System and method for generating a viewable video index for low bandwidth applications
US20100011297A1 (en) * 2008-07-09 2010-01-14 National Taiwan University Method and system for generating index pictures for video streams
CN102156707A (en) * 2011-02-01 2011-08-17 刘中华 Video abstract forming and searching method and system
CN102222111A (en) * 2011-06-30 2011-10-19 山东神戎电子股份有限公司 Method and system for retrieving high-definition video content
CN102609548A (en) * 2012-04-19 2012-07-25 李俊 Video content retrieval method and system based on moving objects


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104902229A (en) * 2015-05-19 2015-09-09 吴晗 Video monitoring method, system and camera shooting monitoring system
US10701301B2 (en) 2016-05-16 2020-06-30 Hangzhou Hikvision Digital Technology Co., Ltd. Video playing method and device
CN107396165A (en) * 2016-05-16 2017-11-24 杭州海康威视数字技术股份有限公司 A kind of video broadcasting method and device
CN107396165B (en) * 2016-05-16 2019-11-22 杭州海康威视数字技术股份有限公司 A kind of video broadcasting method and device
CN108174132A (en) * 2016-12-07 2018-06-15 杭州海康威视数字技术股份有限公司 The back method and device of video file
CN110533795A (en) * 2018-05-23 2019-12-03 丰田自动车株式会社 Data recording equipment
CN112019789A (en) * 2019-05-31 2020-12-01 杭州海康威视数字技术股份有限公司 Video playback method and device
CN110413801A (en) * 2019-07-31 2019-11-05 关振宇 Wisdom security system data sharing method based on big data
CN111159476A (en) * 2019-12-11 2020-05-15 智慧眼科技股份有限公司 Target object searching method and device, computer equipment and storage medium
CN111816279A (en) * 2019-12-23 2020-10-23 谷歌有限责任公司 Identifying physical activity performed by a user of a computing device based on media consumption
CN111816279B (en) * 2019-12-23 2024-04-05 谷歌有限责任公司 Identifying physical activities performed by a user of a computing device based on media consumption
CN112040325A (en) * 2020-11-02 2020-12-04 成都睿沿科技有限公司 Video playing method and device, electronic equipment and storage medium
CN112040325B (en) * 2020-11-02 2021-01-29 成都睿沿科技有限公司 Video playing method and device, electronic equipment and storage medium
CN112866817A (en) * 2021-01-06 2021-05-28 浙江大华技术股份有限公司 Video playback method, device, electronic device and storage medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130403