WO1999059116A1 - Method and apparatus for detecting motion across a surveillance area - Google Patents

Publication number: WO1999059116A1
Authority: WIPO (PCT)
Application number: PCT/GB1999/001457
Other languages: French (fr)
Inventors: Geoffrey Lawrence Thiel, Jonathan Hedley Fernyhough, Andrew Evripedes Kyricaou Georgiou, Peter Richard Seed
Original Assignee: Primary Image Limited
Application filed by Primary Image Limited.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19604 Image analysis to detect motion of the intruder involving reference image or background adaptation with time to compensate for changing conditions, e.g. reference image update on detection of light level change
    • G08B13/19608 Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • G08B13/1961 Movement detection not involving frame subtraction, e.g. motion detection on the basis of luminance changes in the image
    • G08B13/19639 Details of the system layout
    • G08B13/19652 Systems using zones in a single scene defined for different treatment, e.g. outer zone gives pre-alarm, inner zone gives alarm

Definitions

  • The invention relates to a method and apparatus for detecting motion of an object across a surveillance area, and in particular for raising an alarm in the event of detecting motion of an object across a surveillance area in a predetermined direction in the presence of objects traversing the area in the opposite direction.
  • The invention finds particular application, for example, in using a video camera to monitor people passing through a departure gate in an airport, processing images captured by the video camera to detect individuals passing through the gate in the wrong direction, and raising an alarm if such illegal motion occurs.
  • The simplest approach may be frame-to-frame differencing, whereby the differences between pixel intensities in adjacent frames can be used to identify objects moving through a scene.
  • This technique highlights the leading and trailing edge of each moving object rather than the actual objects.
  • The contrast polarity of the leading and trailing edges can then be used to obtain the direction of motion, assuming the background has a constant contrast and that the leading and trailing edges can be accurately grouped.
  • In practice, however, it is rare that the background has a constant contrast, and it is not a trivial matter to group leading and trailing edges correctly.
  • Overlapping or closely-spaced objects cause additional complications when identifying matching edges.
  • This technique is also sensitive to light conditions, contrast and camera motion/shaking. Should the scene be densely populated with objects moving close together and in various directions, it is likely that only a single moving region would be identified.
  • A similar approach uses background subtraction, followed by thresholding and segmentation to identify objects in the scene. This technique assumes that a good background image is available or can be generated. From frame to frame, each identified region can then be tracked and labelled through proximity measurements, and the direction of motion obtained by considering the location of the labelled regions in each frame. As above, this technique is sensitive to light conditions, contrast and camera motion/shake. Additionally, as thresholding is used prior to segmentation, overlapping or closely-spaced objects may be grouped together and identified as a single amalgamated region, including objects moving in different directions. With objects being lost in this way, it is not possible to accurately track objects moving in the wrong direction; as with frame differencing, tracking objects moving in different directions is not possible.
  • Model-based tracking techniques using a spatio-temporal filter (for example, a Kalman filter) could be used to accurately track individual objects through the scene.
  • These techniques require a large amount of training footage to accurately model the typical kinds of objects which move through the scene. If unusually-shaped objects then travel through the scene, the tracker is likely to become confused and not track those objects successfully.
  • The application is highly specific to the direction of motion and the camera angle; if the camera is moved to a different location the application is likely to require further training data.
  • This kind of tracking application is highly processor-intensive and may require specialised hardware to perform at full frame rate.
  • Gates can detect motion based on the order, and time, in which the gates are activated. Motion within a gate can be detected using any of a number of simple techniques; for example, background subtraction and histogram analysis. These techniques are prone to aliasing problems, whereby a gate may be triggered by an object moving in the correct direction, followed shortly afterwards by a second object, also moving in the correct direction, triggering an adjacent gate. By adding more gates, along with a combination of logical and timing parameters, more complex scenarios can be catered for, but this tends to lead to overly-complex configuration requirements.
  • The invention provides in its various aspects a motion detection method and apparatus and a method and apparatus for raising an alarm as defined in the appended independent claims. Preferred or advantageous features of the invention are defined in dependent subclaims.
  • The system of the invention advantageously does not attempt to reliably identify and track individual objects through a field of view. Rather, the invention is intended to identify the motion of individual or grouped objects and to track the motion in order to recognise illegal movement through the field of view or surveillance area.
  • The invention advantageously allows an alarm to be raised if an object traverses a surveillance area in an illegal direction, when many objects may simultaneously be traversing the surveillance area in the opposite, legal direction.
  • The invention is therefore particularly applicable to monitoring gates through which people may only pass in one direction, such as gates at airports, ports or railway stations, or entrance or exit gates at sporting events or other public events.
  • A significant advantage of the invention is its capacity to recognise objects moving in different directions. It may therefore find application for, for example, counting objects traversing a surveillance area when those objects may not always move in one direction. An example would be monitoring motor traffic.
  • The invention provides a robust motion detector which can accommodate motion of objects in different directions, and discontinuous motion of objects, using an image processing technique which is preferably sufficiently compact to be executed on a suitably-programmed personal computer (PC), even at full video frame rate.
  • The robust motion detection advantageously derives from the combination of several features as set out in the independent claims.
  • The invention does not attempt to track individual objects, but looks for regions of similar motion and then tracks those regions through the scene. If a tracked region is lost, a fallback advantageously determines if the region is still valid from a background mask. If the region still cannot be identified, a preliminary state may be set to identify the object as it proceeds across the surveillance area.
  • The invention may advantageously provide the following benefits: robust motion detection which performs well with crowd scenes; a system having only a simple configuration, which may require no learning or training and so be able to cope with unexpected changes in data (for example, the system may track a person in a wheelchair even though this type of object may not have been seen before); a system which may advantageously not suffer from (aliasing) false alarms caused by several objects moving in a legal direction; a system having good rejection characteristics to lighting changes and camera movements; and a system which need not require specialised hardware but may function at full frame rate performance on currently available hardware (an off-the-shelf PC).
  • The system of the invention may alternatively be implemented on a dedicated microprocessor or similar circuitry for mass production to reduce costs.
  • The system of the invention may be self-configuring and may only require further external configuration for a delay period used in determining the Alarm State.
  • An aspect of the invention may advantageously provide the following system.
  • A camera captures, in successive frames, images of an object moving across a surveillance area. Successive frames are correlated to identify moving regions within each frame and to evaluate motion vectors for each region. Each moving region is thus tracked across the surveillance area. If no moving region is found in a frame to correspond to a moving region in a previous frame, a check is made for a corresponding region in a background mask; the background mask is updated frame-by-frame. If no corresponding region is found in the background mask, the moving region is considered lost and a preliminary alarm state is set.
  • A full alarm state is then set if either a moving region is tracked traversing the surveillance area in an illegal direction, or a moving region is tracked entering the surveillance area, is lost, and then another moving region (which may, or is expected to, correspond to the same object) is tracked leaving or about to leave the surveillance area in an illegal direction while a preliminary alarm state is set.
  • Figure 1 shows a field of view of a video camera for capturing images for motion detection;
  • Figure 2 shows a correlator grid overlaid on a portion of a frame captured by the camera;
  • Figure 3 shows a search area and a correlator in both its correlator position and a correlator match position;
  • Figure 4 shows a flow chart of the overall operation of an embodiment of the invention;
  • Figure 5 shows a flow chart of the procedure for labelling regions in successive frames according to the embodiment;
  • Figure 6 shows a flow chart of the alarm-state-determining routine of the embodiment.
  • A preferred embodiment of the invention provides a system for identifying objects that illegally traverse a surveillance area in which legal motion is limited to a single direction.
  • The system uses live video images from a static camera, received at full frame rate (25 fps (frames per second) with PAL, and 30 fps with NTSC).
  • Motion through the area can be extremely dense, such that automatically identifying individual objects may not be possible by simple segmentation techniques based on frame differences.
  • The system therefore tracks regions of motion through the field of view using localised optical flow.
  • The embodiment of the invention is implemented at, for example, a departure gate at an airport, where people are permitted to pass in one direction through the gate and may do so in large numbers, closely spaced. It is important, however, that no-one passes through the gate in the wrong direction, and the system of the embodiment is for raising an alarm if anyone does so.
  • An overhead camera provides live data from the surveillance area (the departure gate), with an alarm state being activated when illegal motion is detected. At that time, relays may be activated to sound an alarm and/or to flash a warning lamp, and/or a video printer may be triggered to print an image of the object performing the illegal motion.
  • The video printer does not have to share the same view as the surveillance camera. Instead, a separate camera, which has a clearer view of the object illegally leaving the surveillance area, can be used.
  • The field of view of the camera is split into four zones or areas, as shown in figure 1. These are an entrance zone 2, an exit zone 4, a travel zone 6, and the remaining area 8.
  • The entrance zone, exit zone and travel zone combine to form a surveillance area 10.
  • The surveillance area must span the full width of the departure gate and is transversely divided by the entrance and exit zones, separated by the travel zone.
  • The sizes of these different zones are completely configurable and may be predetermined for different applications.
  • For maximum alarm protection, the entrance zone and exit zone would meet, giving a travel zone of zero width. This means that any object just crossing the line between the zones will raise the alarm, which may increase the number of false alarms triggered.
  • Even if the travel zone is of zero width, it is still functionally present, in that illegal traversal of the travel zone is the criterion for raising an alarm, even if the travel zone is merely a line.
  • The entrance and exit zones do not have to be large; a reasonable width for each zone is about the distance travelled by a typical object in about half a second. The width of the travel zone is less significant and can be predetermined depending on the application of the system.
  • Although the surveillance area in figure 1 is rectangular and is illustrated as being divided by transverse, linear boundaries into rectangular entrance, travel and exit zones, the invention is not limited to such a geometry.
  • The surveillance area may be of any shape as determined by particular applications, as may the entrance, travel and exit zones.
  • The setting of an alarm if an object moves in an illegal direction across the surveillance area may be modelled in terms of a state machine. If an object is first detected by the system near the exit, it may be assigned a first state (state 1, or illegal entry state). Once it has been tracked for a predetermined time, such as a number of frames, or for a predetermined distance, so that the system can be reasonably certain that it is tracking a real object, the state of the object may be changed to a second state (state 2, or traversing state).
  • If the object is then tracked into the entrance zone, the state of the object may be changed to a third state (state 3, or illegal exit state).
  • An alarm is raised if an object changes from state 1 to state 2 to state 3.
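The state progression just described can be sketched as a small state machine. This is a minimal sketch, not part of the patent text: the state numbers follow the description, but the zone labels and the five-frame tracking requirement are illustrative assumptions.

```python
# Minimal sketch of the three-state alarm logic described above.
# Zone labels ("exit", "travel", "entrance") and min_track_frames
# are illustrative assumptions.
ILLEGAL_ENTRY, TRAVERSING, ILLEGAL_EXIT = 1, 2, 3

def next_state(state, zone, frames_tracked, min_track_frames=5):
    """Advance an object's state given the zone it currently occupies."""
    if state is None and zone == "exit":
        return ILLEGAL_ENTRY            # state 1: first detected near the exit
    if state == ILLEGAL_ENTRY and (zone == "travel"
                                   or frames_tracked >= min_track_frames):
        return TRAVERSING               # state 2: securely tracked
    if state == TRAVERSING and zone == "entrance":
        return ILLEGAL_EXIT             # state 3: illegal exit
    return state

def alarm(state):
    # An alarm is raised once an object has changed from state 1 to 2 to 3.
    return state == ILLEGAL_EXIT
```

An object entering legally from the entrance zone never acquires state 1, so it can never progress to the alarm-raising state.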
  • The entrance and exit zones may be defined in terms of both space and time.
  • A spatial boundary between the exit zone and the travel zone may be predetermined to provide a limit to the motion of an object before its state is changed from state 1 to state 2. If an object can be tracked reliably enough (for example over a predetermined number of frames sufficient to ensure that the moving region is not a noise artifact) before it reaches the boundary, it may be assigned state 2 even though it is still inside the spatial boundaries of the exit zone. If, however, an object crosses the boundary of the exit zone before it can be securely tracked, it may be assigned state 2 as it crosses the boundary even though the tracking requirement has not yet been achieved.
  • For each frame captured by the camera, the system performs the following basic steps, as described in detail below and as illustrated in figure 4: update background mask (step 52), calculate motion vectors (step 54), identify moving regions (step 56), label regions (step 58), and determine alarm state (step 60).
  • A background image b is updated dynamically (step 52) using weighted averaging W of the current frame f with the existing background image.
  • The background mask provides a template highlighting all areas in the field of view that are not part of the perceived background, in particular highlighting all objects that have "recently" entered the surveillance area or that are moving within the surveillance area.
  • This technique is known for use as a first step in many vision and tracking applications.
  • The technique used by itself may be disadvantageously sensitive to sudden changes in light conditions, contrast and camera movement.
  • The specific embodiment of the invention does not use this technique to initially identify objects. Rather, this technique is used later in the image processing procedure of the embodiment to locate objects from the previous frame which have not yet been identified in the current frame using the motion vector image processing method detailed below.
  • The updating of the background mask need not be the first step of the image processing procedure. It only needs to be carried out before the background mask is used in the step of labelling moving regions, as described below. Updating the background mask has been described first herein only for convenience.
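The weighted-average update and the resulting background mask can be sketched as follows. This is a minimal sketch assuming a simple exponentially weighted average; the weight W and the difference threshold are illustrative values not specified in the text.

```python
import numpy as np

# Sketch of the dynamic background update (step 52) as an exponentially
# weighted average of the current frame into the background image.
# W = 0.05 and threshold = 25 grey levels are illustrative assumptions.
def update_background(background, frame, W=0.05):
    """Blend the current frame f into the background image b."""
    return (1.0 - W) * background + W * frame

def background_mask(background, frame, threshold=25):
    """Template highlighting pixels that differ from the perceived
    background, i.e. objects that have recently entered or are moving."""
    return np.abs(frame - background) > threshold
```

A small W makes the background adapt slowly, so a person who stops moving remains in the mask for many frames, which is what the fallback location step below relies on.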
  • Motion vectors are calculated (step 54) in the embodiment by correlating portions of successive frames. The calculation is based on the frames as captured by the camera (full grey scale).
  • A fixed two-dimensional grid overlay is defined covering the surveillance area.
  • Each intersection 12 in the grid identifies the central point of an area to be correlated between successive frames, obtaining a localised optical flow or motion vector.
  • Each such area is known as a correlator 14.
  • Correlators can be any size and may overlap other correlators. In the present embodiment, as shown in figure 2, the correlators match the grid spacing so that the correlators do not overlap.
  • The grid spacing and correlator sizes are preferably small enough (for example, 16x16 pixels) that a moving object will cover more than one correlator.
  • A single correlator h, located at grid position (u, v) relative to an origin (i, j) in frame f, with dimensions w and h (measured in pixels), is defined as:
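The defining formula is not reproduced in this text. Informally, the correlator is the w-by-h block of pixels at grid position (u, v) offset by the origin (i, j); the following minimal sketch uses an assumed top-left-corner convention and illustrative default sizes.

```python
import numpy as np

# Sketch of extracting a correlator: the w-by-h block of pixels whose
# top-left corner sits at grid position (u, v) relative to origin (i, j)
# in frame f. The corner convention and defaults are assumptions.
def correlator(f, u, v, i=0, j=0, w=16, h=16):
    y, x = j + v, i + u
    return f[y:y + h, x:x + w]
```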
  • Each frame is compared with the previous frame in order to assign a motion vector to each correlator, where possible.
  • A search area 16 is defined around each correlator position 12, and a search is made within that area for a pattern of pixel intensities similar to the pattern of pixel intensities in the corresponding correlator 14 in the previous frame.
  • The best match position 18 for each correlator in the previous frame is identified within a search area around the position of that correlator.
  • The offset between the best match position and the correlator position is the motion vector 20 of that correlator.
  • Pixels outside the search area are not used in any correlation attempt. This limits the size of the maximum motion vector (dx_max, dy_max) to an offset within the search area as follows:
(dx_max, dy_max) = ((S_x - C_x)/2, (S_y - C_y)/2), where S_x and S_y are the search area width and height and C_x and C_y are the width and height of each correlator.

  • A correlation area 22 can therefore be defined as the area within the search area which contains pixels on which a correlator area can be centred without extending outside the search area.
  • The size of the search area is predetermined depending on the maximum expected travel speed within the surveillance area. For example, if the width of the surveillance area is 4 metres, the image width is 400 pixels, the correlator mask size is 20x20 pixels and the search area is 40x40 pixels, the maximum detectable object travel speed in the image is 10 pixels/frame, which corresponds to a real object speed of about 2.5 metres per second (at 25 frames per second). Limiting the size of the search area advantageously limits the amount of calculation required to perform a correlation operation and, in practice, does not significantly restrict the motion detection capabilities of the system. This is because an object is unlikely to exceed the maximum expected travel speed, and because if an object is travelling very fast across the surveillance area, it is unlikely that a good correlator match will be identified. Real-time image processing is thus made easier without significant performance reduction.
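The worked example above can be checked with a few lines of arithmetic; all values are taken directly from the passage.

```python
# Checking the worked example: a 4 m wide area imaged at 400 px gives
# 1 cm per pixel; a 40x40 search area with a 20x20 correlator allows a
# maximum offset of (40 - 20) / 2 = 10 px/frame, i.e. 2.5 m/s at 25 fps.
area_width_m, image_width_px = 4.0, 400
m_per_px = area_width_m / image_width_px        # 0.01 m per pixel
search_px, corr_px = 40, 20
max_offset_px = (search_px - corr_px) // 2      # 10 px per frame
fps = 25
max_speed_m_s = max_offset_px * m_per_px * fps  # 2.5 m/s
```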
  • A correlation value C is calculated from the sum of the absolute grey scale differences between each pixel in the correlator h (from the previous frame) and the corresponding search pixels (in the present frame).
  • An exact match would give a correlation value of zero.
  • In practice, however, an exact match is unlikely to occur even if there is no movement from one frame to the next.
  • High-frequency noise from the camera and slight digitiser errors introduce differences such that a perfect match is unlikely to occur.
  • The lowest correlation value for each pixel position within the correlation area indicates the best match; i.e. if f is the current frame image and h(u, v) is the correlator mask from the previous frame at position (u, v), then:
  • A threshold value is applied to the match, allowing an early exit from the correlation search as soon as a correlation value below the threshold is found.
  • The threshold value for each correlator may advantageously be based on the match value C for that correlator in the previous frame. This dynamic update improves performance from one frame to the next when the image changes, for example, due to light changes or automatic gain.
  • The initial correlator search position can be taken from the previous motion vector. If there was no motion, then the initial search position will be the centre of the search area (the previous motion vector is of zero magnitude).
  • If the threshold value technique is also applied, there is then a reasonable possibility that an immediate match will be found, thus further reducing the time required for the correlation and motion vector calculations.
  • A further performance improvement that may be applied to reduce the correlation search time is to check, early in the search, the same offset as the motion vectors calculated in adjacent correlators.
  • The motion vector (dx, dy) for correlator position (u, v) is then calculated from the point where, during the search procedure for that correlator, the correlation value C(x, y) first falls below the threshold value.
  • Ideally, the search should be continued to find the point that has the minimum correlation value C_min(x, y), but the search would then require a correlation value to be calculated for all correlator positions within the search area.
  • The image processing method of the embodiment avoids this requirement and thus advantageously enables an effective search procedure to be performed at full, real-time frame rate with a reduced demand for data processing power, and therefore at reduced cost.
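The correlation search described above can be sketched as follows. This is a minimal sketch using a sum-of-absolute-differences match with the early-exit threshold; the function names, raster scan order and top-left coordinate convention are illustrative assumptions (the text suggests starting from the previous motion vector instead).

```python
import numpy as np

# Sketch of the SAD correlation search with early exit below a threshold.
def sad(a, b):
    """Sum of absolute grey-scale differences between two equal blocks."""
    return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def find_motion_vector(prev_corr, frame, cx, cy, search=5, threshold=0.0):
    """Search around top-left corner (cx, cy) for the block in `frame`
    best matching `prev_corr` (taken from the previous frame).

    Returns (dx, dy): the first offset whose SAD falls below
    `threshold` (early exit), else the best match in the search area.
    """
    h, w = prev_corr.shape
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            c = sad(prev_corr, frame[y:y + h, x:x + w])
            if c < threshold:
                return dx, dy            # early exit: good enough match
            if best is None or c < best:
                best, best_vec = c, (dx, dy)
    return best_vec
```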
  • The step of calculating motion vectors (step 54) generates a 2-dimensional (2-D) grid of motion vectors, one for each correlator. By grouping together adjacent motion vectors that have similar properties (i.e. approximately the same direction and magnitude, as shown in figure 2), moving regions within the surveillance area can be identified.
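The grouping of adjacent, similar motion vectors can be sketched as a flood fill over the correlator grid. This is a minimal sketch: the 45-degree direction tolerance, the magnitude tolerance and the dictionary data layout are illustrative assumptions.

```python
import math

# Sketch of grouping adjacent motion vectors into moving regions:
# a flood fill over the correlator grid, merging 4-connected neighbours
# whose vectors agree in direction and roughly in magnitude.
def similar(v1, v2, angle_tol=math.radians(45), mag_tol=2.0):
    a1, a2 = math.atan2(v1[1], v1[0]), math.atan2(v2[1], v2[0])
    da = abs((a1 - a2 + math.pi) % (2 * math.pi) - math.pi)  # wrapped angle
    dm = abs(math.hypot(*v1) - math.hypot(*v2))
    return da <= angle_tol and dm <= mag_tol

def label_grid(vectors):
    """vectors: dict {(gx, gy): (dx, dy)} for correlators showing motion.
    Returns {(gx, gy): region_id} grouping similar adjacent vectors."""
    labels, next_id = {}, 0
    for cell in vectors:
        if cell in labels:
            continue
        next_id += 1
        labels[cell] = next_id
        stack = [cell]
        while stack:
            gx, gy = stack.pop()
            for n in ((gx + 1, gy), (gx - 1, gy), (gx, gy + 1), (gx, gy - 1)):
                if (n in vectors and n not in labels
                        and similar(vectors[(gx, gy)], vectors[n])):
                    labels[n] = next_id
                    stack.append(n)
    return labels
```

Two people walking in opposite directions thus yield two regions even when their correlators are adjacent, since their vectors fail the direction test.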
  • For the direction comparison, the range between the extremes should be no more than 30-45 degrees. It is difficult to provide a value for the magnitude comparison; it depends on the surveillance area and the speeds to be distinguished. For example, if it is desired to identify separate regions for objects travelling faster or slower than 10 metres per second across a surveillance area of 40 metres, then, assuming a frame rate of 25 frames/second, an object moving at 10 m/s can travel 40 cm each frame. If each pixel represents 10 cm, the magnitude of the motion vectors at 10 m/s is 4 pixels. Any larger magnitude indicates travel faster than 10 m/s and anything smaller, slower.

Label Regions
  • It is assumed that each region in the previous frame represents one or more objects moving through the surveillance area, and that those objects typically follow a fixed course through that area.
  • A single object may be represented by a number of regions. This is partially dealt with when labelling regions in this step.
  • An attempt is made (step 62) to match each region (already labelled) in the previous frame (steps 64, 66) to a region identified in the current frame, on the basis of close proximity of the regions.
  • The current spatial extent of a region normally overlaps the spatial extent of the region in the previous frame.
  • Motion of a region may, however, be perceived as a set of discrete jumps to adjacent correlators, providing no overlap between one frame and the next.
  • To allow for this, the spatial extent of a region in the previous frame is extended by the width of a correlator in the perceived direction of motion.
  • If a match is found, the region in the new frame is given the same label as the region in the previous frame (steps 68, 70).
  • If no match is found, a second location process is attempted to determine if the object has stopped moving. If it is determined that a moving region is a noise artifact, its loss is ignored (step 75) and the region is no longer tracked.
  • Noise (for example, digitiser errors and line interference) will typically only appear as motion artifacts for a frame or two. Additionally, movement of a person's limbs can show brief motion in directions other than that of the main body. Experimentation by the inventor has demonstrated that a sustained motion of at least five frames appears sufficient to remove most forms of "noise".

Abstract

A camera captures, in successive frames, images of an object moving across a surveillance area. Successive frames are correlated to identify moving regions within each frame and to evaluate motion vectors for each region. Each moving region is thus tracked across the surveillance area. If no moving region is found in a frame to correspond to a moving region in a previous frame, a check is made for a corresponding region in a background mask; the background mask is updated frame-by-frame. If no corresponding region is found in the background mask, the moving region is considered lost and a preliminary alarm state is set. A full alarm state is then set if either a moving region is tracked traversing the surveillance area in an illegal direction or a moving region is tracked entering the surveillance area, is lost, and then another moving region (which may, or is expected to, correspond to the same object) is tracked leaving or about to leave the surveillance area in an illegal direction while a preliminary alarm state is set.

Description

Method and Apparatus for Detecting Motion Across a Surveillance Area
The invention relates to a method and apparatus for detecting motion of an object across a surveillance area, and in particular for raising an alarm in the event of detecting motion of an object across a surveillance area in a predetermined direction in the presence of objects traversing the area in the opposite direction.
The invention finds particular application, for example, in using a video camera to monitor people passing through a departure gate in an airport, processing images captured by the video camera to detect individuals passing through the gate in the wrong direction, and raising an alarm if such illegal motion occurs.
A number of image processing methods for detecting the motion of objects are known, but give rise to disadvantages in applications such as described above. Examples are as follows:
The simplest approach may be frame-to-frame differencing, whereby the differences between pixel intensities in adjacent frames can be used to identify objects moving through a scene. Typically, this technique highlights the leading and trailing edge of each moving object rather than the actual objects. The contrast polarity of the leading and trailing edges can then be used to obtain the direction of motion, assuming the background has a constant contrast and that the leading and trailing edges can be accurately grouped. However, it is not practical to assume that the background has a constant contrast and it is not a trivial matter to group leading and trailing edges correctly. Overlapping or closely-spaced objects cause additional complications when identifying matching edges. This technique is also sensitive to light conditions, contrast and camera motion/shaking. Should the scene be densely populated with objects moving close together and in various directions it is likely that only a single moving region would be identified.
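As an illustration only, this prior-art differencing approach might be sketched as follows (a hedged sketch, not the patent's method; the use of NumPy and the threshold value are assumptions):

```python
import numpy as np

def frame_difference(prev, curr, threshold=20):
    # Signed difference between consecutive frames; with a bright object
    # on a dark background, strongly positive pixels mark where the object
    # has arrived (leading edge) and strongly negative pixels where it has
    # left (trailing edge).
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    arrived = diff > threshold
    departed = diff < -threshold
    return arrived, departed

# A bright single-pixel "object" moving one pixel to the right:
prev = np.zeros((1, 5), dtype=np.uint8); prev[0, 1] = 200
curr = np.zeros((1, 5), dtype=np.uint8); curr[0, 2] = 200
arrived, departed = frame_difference(prev, curr)
```

As the surrounding text notes, pairing the `arrived` and `departed` edges back into whole objects is the hard part, and fails when objects overlap.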
A similar approach uses background subtraction, followed by thresholding and segmentation to identify objects in the scene. This technique assumes that a good background image is available or can be generated. From frame to frame, each identified region can then be tracked and labelled through proximity measurements, and the direction of motion obtained by considering the location of the labelled regions in each frame. As above, this technique is sensitive to light conditions, contrast and camera motion/shake. Additionally, as thresholding is used prior to segmentation, overlapping or closely-spaced objects may be grouped together and identified as a single amalgamated region, including objects moving in different directions. With objects being lost in this way, it is not possible to accurately track objects moving in the wrong direction. Additionally, as with frame differencing, tracking objects moving in different directions is not possible.
Model-based tracking techniques, using a spatio-temporal filter (for example, a Kalman filter), could be used to accurately track individual objects through the scene. Typically these techniques require a large amount of training footage to accurately model the typical kinds of objects which move through the scene. If unusually-shaped objects then travel through the scene, the tracker is likely to become confused and not track those objects successfully. Once trained, the application is highly specific to the direction of motion and the camera angle; if the camera is moved to a different location the application is likely to require further training data. Typically, this kind of tracking application is highly processor-intensive and may require specialised hardware to perform at full frame rate. Techniques based on a number of surveillance "gates" can detect motion based on the order, and time, in which the gates are activated. Motion within a gate can be detected using any of a number of simple techniques; for example, background subtraction and histogram analysis. These techniques are prone to aliasing problems, whereby a gate may be triggered by an object moving in the correct direction followed shortly afterwards by a second object, also moving in the correct direction, triggering an adjacent gate. By adding more gates, along with a combination of logical and timing parameters, more complex scenarios can be catered for, but this tends to lead to overly-complex configuration requirements.
The invention provides in its various aspects a motion detection method and apparatus and a method and apparatus for raising an alarm as defined in the appended independent claims. Preferred or advantageous features of the invention are defined in dependent subclaims.
Circumstances where objects are not all moving in one direction, and may reverse their direction of motion, present difficulties for motion detection systems. These difficulties are increased where large numbers of non-identical objects may be moving close together. The invention seeks to solve these problems while requiring as little processing power as possible to carry out the necessary image processing.
The system of the invention advantageously does not attempt to reliably identify and track individual objects through a field of view. Rather, the invention is intended to identify the motion of individual or grouped objects and to track the motion in order to recognise illegal movement through the field of view or surveillance area. The invention advantageously allows an alarm to be raised if an object traverses a surveillance area in an illegal direction, when many objects may simultaneously be traversing the surveillance area in the opposite, legal direction. The invention is therefore particularly applicable to monitoring gates through which people may only pass in one direction, such as gates at airports, ports or railway stations, or entrance or exit gates at sporting events or other public events.
A significant advantage of the invention is its capacity to recognise objects moving in different directions. It may therefore find application for, for example, counting objects traversing a surveillance area when those objects may not always move in one direction. An example would be monitoring motor traffic.
The invention provides a robust motion detector which can accommodate motion of objects in different directions, and discontinuous motion of objects, using an image processing technique which is preferably sufficiently compact to be executed on a suitably-programmed personal computer (PC) , even at full video frame rate. This gives very significant advantages of cost and ease of manufacture and implementation .
The robust motion detection advantageously derives from the combination of several features as set out in the independent claims. Thus, the invention does not attempt to track individual objects, but looks for regions of similar motion and then tracks those regions through the scene. If a tracked region is lost, a fallback advantageously determines if the region is still valid from a background mask. If the region still cannot be identified, a preliminary state may be set to identify the object as it proceeds across the surveillance area. To summarise, the invention may advantageously provide the following benefits: robust motion detection which performs well with crowd scenes; a system having only a simple configuration and which may require no learning or training and so be able to cope with unexpected changes in data: for example, the system may track a person in a wheelchair even though this type of object may not have been seen before; a system which may advantageously not suffer from (aliasing) false alarms caused by several objects moving in a legal direction; a system having good rejection characteristics to lighting changes and camera movements; and a system which need not require specialised hardware but may function at full frame rate performance on currently available hardware (off-the-shelf PC).
Advantageously, the system of the invention may alternatively be implemented on a dedicated microprocessor or similar circuitry for mass production to reduce costs.
Advantageously, apart from external configuration of the surveillance area within the field of view, the system of the invention may be self-configuring and may only require further external configuration for a delay period used in determining the Alarm State.
Thus, an aspect of the invention may advantageously provide the following system. A camera captures, in successive frames, images of an object moving across a surveillance area. Successive frames are correlated to identify moving regions within each frame and to evaluate motion vectors for each region. Each moving region is thus tracked across the surveillance area. If no moving region is found in a frame to correspond to a moving region in a previous frame, a check is made for a corresponding region in a background mask; the background mask is updated frame-by-frame. If no corresponding region is found in the background mask, the moving region is considered lost and a preliminary alarm state is set. A full alarm state is then set if either a moving region is tracked traversing the surveillance area in an illegal direction or a moving region is tracked entering the surveillance area, is lost, and then another moving region (which may, or is expected to, correspond to the same object) is tracked leaving or about to leave the surveillance area in an illegal direction while a preliminary alarm state is set.
Detailed Description of Specific Embodiment

Specific embodiments of the invention will now be described by way of example, with reference to the drawings, in which:
Figure 1 shows a field of view of a video camera for capturing images for motion detection;
Figure 2 shows a correlator grid overlaid on a portion of a frame captured by the camera;
Figure 3 shows a search area and a correlator in both its correlator position and a correlator match position;
Figure 4 shows a flow chart of the overall operation of an embodiment of the invention;
Figure 5 shows a flow chart of the procedure for labelling regions in successive frames according to the embodiment, and
Figure 6 shows a flow chart of the alarm-state-determining routine of the embodiment.
A preferred embodiment of the invention provides a system for identifying objects that illegally traverse a surveillance area in which legal motion is limited to a single direction. The system uses live video images from a static camera, received at full frame rate (25 fps (frames per second) with PAL and 30 fps with NTSC). Motion through the area can be extremely dense such that automatically identifying individual objects may not be possible by simple segmentation techniques based on frame differences. Rather than attempting to track individual objects, the system therefore tracks regions of motion through the field of view using localised optical flow.
The only alarm situation is when an object illegally traverses the surveillance area from the normal exit to the normal entrance. Objects entering the surveillance area correctly, reversing direction and then leaving through the entrance should not raise the alarm. Similarly, objects which illegally enter through the exit but reverse direction and leave through the exit should not raise the alarm.
The embodiment of the invention is implemented at, for example, a departure gate at an airport, where people are permitted to pass in one direction through the gate and may do so in large numbers, closely-spaced. It is important, however, that no-one passes through the gate in the wrong direction and the system of the embodiment is for raising an alarm if anyone does so.
In the embodiment, an overhead camera provides live data from the surveillance area (the departure gate) with an alarm state being activated when illegal motion is detected. At that time, relays may be activated to sound an alarm and/or to flash a warning lamp and/or a video printer may be triggered to print an image of the object performing the illegal motion. The video printer does not have to share the same view as the surveillance camera. Instead, a separate camera, which has a clearer view of the object illegally leaving the surveillance area, can be used. In the embodiment, the field of view of the camera is split into four zones or areas as shown in figure 1. These are an entrance zone 2, an exit zone 4, a travel zone 6, and the remaining area 8.
The entrance zone, exit zone and travel zone combine to form a surveillance area 10. The surveillance area must span the full width of the departure gate and is transversely divided by the entrance and exit zones, separated by the travel zone. The sizes of these different zones are completely configurable and may be predetermined for different applications. For maximum alarm protection the entrance zone and exit zone would meet, giving a travel zone of zero width. This means that any object just crossing the line between the zones will raise the alarm, which may increase the number of false alarms triggered. Even if the travel zone is of zero width it is still functionally present in that illegal traversal of the travel zone is the criterion for raising an alarm, even if the travel zone is merely a line. The entrance and exit zones do not have to be large; a reasonable width for each zone is about the distance travelled by a typical object in about half a second. The width of the travel zone is less significant and can be predetermined depending on the application of the system.
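The suggested entrance/exit zone width (the distance a typical object travels in about half a second) could be computed as follows (a sketch; the function and parameter names, and the example speed and scale, are illustrative):

```python
def zone_width_pixels(typical_speed_mps, metres_per_pixel, seconds=0.5):
    # Zone width in image pixels: distance covered in `seconds`
    # at the typical object speed, converted via the image scale.
    return round(typical_speed_mps * seconds / metres_per_pixel)

# People walking at roughly 1.5 m/s, with each pixel covering 1 cm:
width = zone_width_pixels(1.5, 0.01)
```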
Alternative Surveillance Area Implementations
Although the surveillance area in figure 1 is rectangular and is illustrated as being divided by transverse, linear boundaries into rectangular entrance, travel and exit zones, the invention is not limited to such a geometry. The surveillance area may be of any shape as determined by particular applications, as may the entrance, travel and exit zones.
In more general terms, the setting of an alarm if an object moves in an illegal direction across the surveillance area may be modelled in terms of a state machine. If an object is first detected by the system near the exit, it may be assigned a first state (state 1, or illegal entry state). Once it has been tracked for a predetermined time, such as a number of frames, or for a predetermined distance, so that the system can be reasonably certain that it is tracking a real object, the state of the object may be changed to a second state (state 2, or traversing state). If the object is then tracked until it is near the entrance (or the system otherwise determines that the object has moved to the entrance as described in more detail below in relation to the specific embodiments) then the state of the object may be changed to a third state (state 3, or illegal exit state). An alarm is raised if an object changes from state 1 to state 2 to state 3.
Other state change sequences are possible. For example if an object moves from the exit (state 1) into the centre of the surveillance area (state 2) and back to the exit (state 1) no alarm should be raised.
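The state sequences described above can be sketched as a small state machine (an illustrative paraphrase of the text, not the patent's implementation; the zone names and class structure are assumptions):

```python
STATE_ILLEGAL_ENTRY = 1   # first detected near the exit
STATE_TRAVERSING = 2      # tracked reliably into the travel zone
STATE_ILLEGAL_EXIT = 3    # reached the entrance while in state 2

class TrackedObject:
    def __init__(self):
        self.state = None
        self.alarm = False

    def observe(self, zone):
        # zone is 'exit', 'travel' or 'entrance'
        if zone == 'exit':
            self.state = STATE_ILLEGAL_ENTRY        # (re)detected near the exit
        elif zone == 'travel' and self.state == STATE_ILLEGAL_ENTRY:
            self.state = STATE_TRAVERSING
        elif zone == 'entrance' and self.state == STATE_TRAVERSING:
            self.state = STATE_ILLEGAL_EXIT
            self.alarm = True                       # 1 -> 2 -> 3 raises the alarm

# Illegal traversal: exit -> travel -> entrance raises the alarm.
bad = TrackedObject()
for z in ('exit', 'travel', 'entrance'):
    bad.observe(z)

# Reversal: exit -> travel -> exit should not raise the alarm.
ok = TrackedObject()
for z in ('exit', 'travel', 'exit'):
    ok.observe(z)
```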
Viewed in terms of states in this way, it is apparent that the entrance and exit zones described above may not be defined in terms of space, but in terms of time and in terms of the states assigned to objects. For example an object which is moving slowly may be assigned state 1 and then changed to state 2 while it is still very near the exit, while an object which is fast moving may have moved further from the exit before it can be changed from state 1 to state 2. In the latter case the exit (illegal entry) zone is effectively larger than in the former.
Alternatively, the entrance and exit zones may be defined in terms of both space and time. For example a spatial boundary between the exit zone and the travel zone may be predetermined to provide a limit to the motion of an object before its state is changed from state 1 to state 2. If an object can be tracked reliably enough (for example over a predetermined number of frames sufficient to ensure that the moving region is not a noise artifact) before it reaches the boundary, it may be assigned state 2 even though it is still inside the spatial boundaries of the exit zone. If, however, an object crosses the boundary of the exit zone before it can be securely tracked, it may be assigned state 2 as it crosses the boundary even though the tracking requirement has not yet been achieved.
The specific description set out below refers to the entrance, travel and exit zones in terms of spatial regions only but the invention should not, in the light of the foregoing description, be construed as being so limited.
Image Frame Handling
For each frame captured by the camera, the system performs the following basic steps, as described in detail below and as illustrated in figure 4: update background mask (step 52), calculate motion vectors (step 54), identify moving regions (step 56), label regions (step 58), and determine alarm state (step 60).
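The five per-frame steps can be pictured as a processing loop; in this sketch the step functions are placeholder stubs standing in for the procedures described in the following sections (all names are illustrative):

```python
class State:
    # Persistent state carried from frame to frame.
    def __init__(self):
        self.prev_frame = None
        self.background_mask = None
        self.regions = {}
        self.alarm = False

# Stubs for the five steps detailed in the sections below:
def update_background_mask(frame, state): return state.background_mask  # step 52
def calculate_motion_vectors(frame, prev): return []                    # step 54
def identify_moving_regions(vectors): return []                         # step 56
def label_regions(regions, state): state.regions = {}                   # step 58
def determine_alarm_state(state): pass                                  # step 60

def process_frame(frame, state):
    state.background_mask = update_background_mask(frame, state)
    vectors = calculate_motion_vectors(frame, state.prev_frame)
    regions = identify_moving_regions(vectors)
    label_regions(regions, state)
    determine_alarm_state(state)
    state.prev_frame = frame

s = State()
process_frame("frame-0", s)
```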
Update Background Mask
When each new frame is captured by the camera (which captures monochrome frames, comprising pixels each having only a grey-scale intensity), a background image b is updated dynamically (step 52) using a weighted average, with weight W, of the current frame f with the existing background image. A rounding coefficient (either the floor or ceiling) is applied to reduce rounding errors in the calculation for the weighted average. So, at each point (x, y) within the surveillance area, the background is updated using:

b(x, y) = floor((f(x, y) - b(x, y))/W + b(x, y))   for (f(x, y) - b(x, y)) >= 0
b(x, y) = ceiling((f(x, y) - b(x, y))/W + b(x, y)) otherwise
Without the floor and ceiling coefficients, high intensity pixels in the background image can build up over time and not be representative of the "true" background. The dynamic updating of the background image has the effect that any object which stops moving and remains stationary is slowly merged into the background over a number of frames.
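A minimal sketch of the per-pixel update might look as follows, assuming the floor/ceiling rule switches at a zero difference (the condition in the published formula is garbled, so zero is an assumption here; representing images as nested lists is also purely illustrative):

```python
import math

def update_background(b, f, W):
    # Per-pixel weighted-average background update with floor/ceiling
    # rounding; b and f are equal-size 2-D lists of grey-scale
    # intensities, W is the averaging weight.
    for y in range(len(b)):
        for x in range(len(b[0])):
            d = f[y][x] - b[y][x]
            if d >= 0:
                b[y][x] = math.floor(d / W + b[y][x])   # brightening pixel
            else:
                b[y][x] = math.ceil(d / W + b[y][x])    # darkening pixel
    return b

bg = [[100, 100]]
frame = [[108, 92]]
bg = update_background(bg, frame, 4)
```

With W = 4, each pixel moves a quarter of the way towards the new frame per update, so a stationary object is gradually absorbed into the background.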
A "differenced" image is then obtained by applying image subtraction between the updated background image and the current frame. Image subtraction is used to subtract the image intensities at each pixel in the background image from the current frame image. The resulting differenced image is then thresholded by flagging all pixels having image intensity greater than a predetermined threshold to obtain a background mask in the form of a binary image where the flagged pixels represent significant changes in intensity such as objects moving in the scene. A background mask corresponding to each successive frame is generated. Thus, the background mask provides a template highlighting all areas in the field of view that are not part of the perceived background - in particular, highlighting all objects that have "recently" entered the surveillance area or that are moving within the surveillance area.
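The differencing and thresholding steps might be sketched as follows (a hedged sketch; the threshold value and use of NumPy are illustrative, not from the patent):

```python
import numpy as np

def background_mask(background, frame, threshold=25):
    # Absolute per-pixel difference between frame and background,
    # thresholded to a binary mask flagging significantly changed pixels.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

bg = np.full((2, 2), 50, dtype=np.uint8)
fr = bg.copy()
fr[0, 0] = 200   # one pixel changed by an "object"
mask = background_mask(bg, fr)
```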
This technique is known for use as a first step in many vision and tracking applications. The background mask is used to segment and identify moving objects in the scene by clustering connected groups of flagged pixels. However, if several moving objects overlap in the image (or are travelling too close together to be separately identified in the background mask) a single amalgamated region may be obtained rather than individual regions for each object. Additionally, the technique used by itself may be disadvantageously sensitive to sudden changes in light conditions, contrast and camera movement.
The specific embodiment of the invention does not use this technique to initially identify objects. Rather, this technique is used later in the image processing procedure of the embodiment to locate objects from the previous frame which have not yet been identified in the current frame using the motion vector image processing method detailed below.
It should therefore be noted that the updating of the background mask need not be the first step of the image processing procedure. It only needs to be carried out before the background mask is used in the step of labelling moving regions, as described below. Updating the background mask has been described first herein only for convenience.
Calculate Motion Vectors

Motion vectors are calculated (step 54) in the embodiment by correlating portions of successive frames. The calculation is based on the frames as captured by the camera (full grey scale).
As illustrated in figure 2, a fixed two-dimensional grid overlay is defined covering the surveillance area. Each intersection 12 in the grid identifies the central point of an area to be correlated between successive frames, obtaining a localised optical flow or motion vector. Each such area is known as a correlator 14.
Correlators can be any size and may overlap other correlators. In the present embodiment, as shown in figure 2, the correlators match the grid spacing so that the correlators do not overlap. The grid spacing and correlator sizes are preferably small enough (for example, 16x16 pixels) such that a moving object will cover more than one correlator.
A single correlator h located at grid position (u, v) relative to an origin (i, j) in frame f with dimensions w and h (measured in pixels) is defined as:

h(u, v) = f(i + u, j + v)   for i = -w/2, ..., w/2 and j = -h/2, ..., h/2
As illustrated in figure 3, each frame is compared with the previous frame in order to assign a motion vector to each correlator, where possible. Thus, after each new frame is captured, a search area 16 is defined around each correlator position 12 and a search made within that area for a pattern of pixel intensities similar to the pattern of pixel intensities in the corresponding correlator 14 in the previous frame. Thus, in each new frame, the best match position 18 for each correlator in the previous frame is identified within a search area around the position of that correlator. The offset between the best match position and the correlator position is the motion vector 20 of that correlator.
Pixels outside the search area are not used in any correlation attempt. This limits the size of the maximum motion vector (dx_max, dy_max) to an offset within the search area as follows:

dx_max = (Sx - Cx)/2
dy_max = (Sy - Cy)/2

where Sx and Sy are the search area width and height and Cx and Cy are the width and height of each correlator. A correlation area 22 can therefore be defined as the area within the search area which contains pixels on which a correlator area can be centred and not extend outside the search area.
The size of the search area is predetermined depending on the maximum expected travel speed within the surveillance area. For example, if the width of the surveillance area is 4 metres, the image width is 400 pixels, the correlator mask size is 20x20 pixels and the search area is 40x40 pixels, the maximum detectable object travel speed in the image is 10 pixels/frame which corresponds to a real object speed of about 2.5 metres per second (at 25 frames per second). Limiting the size of the search area advantageously limits the amount of calculation required to perform a correlation operation and, in practice, does not significantly restrict the motion detection capabilities of the system. This is because an object is unlikely to exceed the maximum expected travel speed and because if an object is travelling very fast across the surveillance area, it is unlikely that a good correlator match will be identified. Real-time image processing is thus made easier without significant performance reduction.
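The worked example above can be checked with a small helper (function and parameter names are illustrative):

```python
def max_detectable_speed(search_px, correlator_px, metres_per_pixel,
                         frames_per_second):
    # Maximum detectable travel speed implied by the search-area size:
    # the largest motion-vector offset, converted to metres per second.
    max_offset = (search_px - correlator_px) // 2   # pixels per frame
    return max_offset * metres_per_pixel * frames_per_second

# 400-pixel image of a 4 m wide area => 0.01 m per pixel;
# 40x40 search area, 20x20 correlator, 25 frames per second:
speed = max_detectable_speed(40, 20, 0.01, 25)
```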
With a correlator area centred at each pixel position (x, y) in the correlation area (i.e. each possible offset for the motion vector) a correlation value C is calculated from the sum of the absolute grey scale differences between each pixel in the correlator h (from the previous frame) and the corresponding search pixels (in the present frame). As such, an exact match would be a correlation value of zero. However, an exact match is unlikely to occur even if there is no movement from one frame to the next. High frequency noise from the camera and slight digitiser errors introduce differences such that a perfect match is unlikely to occur. Hence, the lowest correlation value for each pixel position within the correlation area indicates the best match, i.e. if f is the current frame image, and h(u, v) is the correlator mask from the previous frame at position (u, v), then:
C(x, y) = sum over j = -h/2, ..., h/2 and i = -w/2, ..., w/2 of |f(x + i, y + j) - h(u + i, v + j)|,
for x = u - dx_max, ..., u + dx_max and y = v - dy_max, ..., v + dy_max
To improve performance, a threshold value is applied to the match allowing an early exit from the correlation search as soon as a correlation value below the threshold is found. The threshold value for each correlator may advantageously be based on the match value C for that correlator in the previous frame. This dynamic update improves performance from one frame to the next when the image changes, for example, due to light changes or automatic gain.
As motion tends to continue in the same direction from one frame to the next the initial correlator search position can be taken from the previous motion vector. If there was no motion then the initial search position will be the centre of the search area (previous motion vector is of zero magnitude). When the threshold value technique is also applied, there is then a reasonable possibility that an immediate match will be found, thus further reducing the time required for the correlation and motion vector calculations.
A further performance improvement that may be applied to reduce the correlation search time is to check, early in the search, the same offset as the motion vectors calculated in adjacent correlators.
The motion vector (dx, dy) for correlator position (u, v) is then calculated from the point where, during the search procedure for that correlator, the correlation value C(x, y) first falls below the threshold value.
To obtain the most accurate possible motion vector, the search should be continued to find the point that has the minimum correlation value Cmin(x, y), but the search would then require a correlation value to be calculated for all correlator positions within the search area. The image processing method of the embodiment avoids this requirement and thus advantageously enables an effective search procedure to be performed at full, real-time frame rate with a reduced demand for data processing power and therefore at reduced cost.
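The correlation search with its early-exit threshold could be sketched as follows. This is an illustrative sum-of-absolute-differences (SAD) search over a square search area, omitting the previous-vector and neighbour-vector seeding optimisations described above; all names are assumptions:

```python
import numpy as np

def best_match(prev, curr, u, v, half_c, half_s, threshold):
    # Template: the correlator area from the previous frame, centred at (u, v).
    template = prev[v-half_c:v+half_c+1, u-half_c:u+half_c+1].astype(np.int32)
    r = half_s - half_c                    # maximum offset (dx_max = dy_max here)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            y, x = v + dy, u + dx
            patch = curr[y-half_c:y+half_c+1, x-half_c:x+half_c+1].astype(np.int32)
            sad = int(np.abs(patch - template).sum())
            if sad < threshold:
                return (dx, dy), sad       # early exit on a good-enough match
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dx, dy)
    return best_vec, best_sad              # fall back to the minimum found

# A 3x3 pattern that moves one pixel to the right between frames:
prev = np.zeros((11, 11), dtype=np.uint8)
prev[4:7, 4:7] = np.arange(1, 10).reshape(3, 3) * 10
curr = np.zeros((11, 11), dtype=np.uint8)
curr[4:7, 5:8] = prev[4:7, 4:7]
vec, sad = best_match(prev, curr, 5, 5, half_c=1, half_s=3, threshold=1)
```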
Identify Moving Regions
The step of calculating motion vectors (step 54) generates a two-dimensional (2-d) grid of motion vectors, one for each correlator. By grouping together adjacent motion vectors that have similar properties (i.e. approximately the same direction and magnitude, as shown in figure 2), moving regions within the surveillance area can be identified.
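The grouping of adjacent, similar motion vectors might be sketched as a flood fill over the correlator grid (a hedged sketch; the tolerance values are illustrative, following the 30-45 degree direction range suggested in the text):

```python
import math

def group_vectors(grid, angle_tol_deg=40, mag_ratio=2.0):
    # Cluster 4-connected grid cells whose motion vectors have similar
    # direction and magnitude into labelled regions. `grid` maps
    # (col, row) to a (dx, dy) vector; zero vectors are ignored.
    labels, next_label = {}, 0

    def similar(a, b):
        ang = abs(math.degrees(math.atan2(a[1], a[0]) - math.atan2(b[1], b[0])))
        ang = min(ang, 360 - ang)
        ma, mb = math.hypot(*a), math.hypot(*b)
        return ang <= angle_tol_deg and max(ma, mb) <= mag_ratio * min(ma, mb)

    for cell, vec in grid.items():
        if cell in labels or vec == (0, 0):
            continue
        next_label += 1
        stack = [cell]
        while stack:
            cx, cy = stack.pop()
            labels[(cx, cy)] = next_label
            for n in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                if (n in grid and n not in labels and grid[n] != (0, 0)
                        and similar(vec, grid[n])):
                    stack.append(n)
    return labels

grid = {(0, 0): (3, 0), (1, 0): (3, 1),   # two adjacent cells moving right
        (3, 0): (-3, 0)}                  # an isolated cell moving left
labels = group_vectors(grid)
```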
For matching directions between two motion vectors the range between the extremes should be no more than 30-45 degrees. It is difficult to provide a value for the magnitude comparison and depends on the surveillance area and the speeds to be distinguished. For example, if it is desired to identify separate regions for objects travelling faster or slower than 10 metres per second across a surveillance area of 40 metres, assuming a frame rate of 25 frames/second an object moving at 10 m/s can travel 40 cm each frame. If each pixel represents 10 cm the magnitude of the motion vectors at 10 m/s is 4 pixels. Any larger magnitude is travelling faster than 10 m/s and anything smaller is slower.

Label Regions
This procedure is illustrated in figure 5 as well as in part of figure 4.
The steps of identifying moving regions and then labelling those regions are used to "track" regions from one frame to the next. It is assumed that each region in the previous frame represents one or more objects moving through the surveillance area and that those objects, typically, follow a fixed course through that area. As mentioned above, a single object may be represented by a number of regions. This is partially dealt with when labelling regions in this step.
An attempt is made (step 62) to match each region (already labelled) in the previous frame (steps 64, 66) to a region identified in the current frame on the basis of close proximity of the regions. At full frame rate, it is assumed that the current spatial extent of a region normally overlaps the spatial extent of the region in the previous frame. However, if the size of a region is only a single correlator width or height, the motion of a region may be perceived as a set of discrete jumps to adjacent correlators, providing no overlap between one frame and the next. To compensate for this perceived motion, for the purposes of matching regions in successive frames the spatial extent of a region in the previous frame is extended by the width of a correlator in the perceived direction of motion.
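The overlap test with the directional extension can be sketched as below, assuming regions are represented by axis-aligned bounding boxes (x0, y0, x1, y1); the function name and box representation are assumptions for illustration:

```python
def regions_overlap(prev_box, motion, curr_box, corr_w, corr_h):
    """Decide whether a region from the previous frame matches a region
    in the current frame.  The previous region's bounding box is first
    extended by one correlator width/height in its perceived direction
    of motion, so that thin regions that 'jump' a whole correlator
    between frames still overlap their successor."""
    x0, y0, x1, y1 = prev_box
    dx, dy = motion
    if dx > 0:
        x1 += corr_w            # extend forwards along x
    elif dx < 0:
        x0 -= corr_w            # extend backwards along x
    if dy > 0:
        y1 += corr_h
    elif dy < 0:
        y0 -= corr_h
    cx0, cy0, cx1, cy1 = curr_box
    # Standard axis-aligned bounding-box intersection test.
    return x0 < cx1 and cx0 < x1 and y0 < cy1 and cy0 < y1
```

For instance, a one-correlator-wide region moving right that lands exactly on the adjacent correlator would fail a plain intersection test, but matches once its box is extended by one correlator width in the direction of motion.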
If one region in the new frame matches one in the previous frame, the region in the new frame is given the same label as the region in the previous frame (steps 68, 70).
It is possible that more than one region in the current frame will match a region in the previous frame (for example, objects moving close together, or large objects "breaking" up). When this happens, those matching regions are combined into a single region and labelled with the same label accordingly (step 72).
When a region in the previous frame is lost (i.e. cannot be located in the current frame) (step 73) and has demonstrated sufficient sustained motion over a number of previous frames to be considered more than just "noise" (step 74), a second location process is attempted to determine if the object has stopped moving. If it is determined that a moving region is a noise artifact, its loss is ignored (step 75) and the region is no longer tracked. (Although there are different forms of "noise", for example digitizer errors and line interference, these will typically only appear as motion artifacts for a frame or two. Additionally, movement of a person's limbs can show brief motion in directions other than that of the main body. Experimentation by the inventor has demonstrated that a sustained motion of at least five frames appears sufficient to remove most forms of "noise".)
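The decision taken when a tracked region disappears can be sketched as a simple threshold on the region's track age; the constant reflects the five-frame figure mentioned above, while the function name and the string return values are illustrative assumptions:

```python
MIN_TRACK_FRAMES = 5   # sustained-motion threshold found by experiment

def handle_lost_region(region_age):
    """When a tracked region disappears, decide what to do next.
    Regions that have not shown sustained motion are treated as noise
    (digitizer errors, interference, brief limb movement) and dropped;
    regions with a sufficient track history trigger the search for a
    stationary object in the background mask."""
    if region_age < MIN_TRACK_FRAMES:
        return "discard-as-noise"        # step 75: ignore the loss
    return "search-background-mask"      # step 76: look for a stopped object
```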
When an object has stopped moving, the motion vectors which previously identified that object no longer exist. However, the background mask (generated as described above), which shows the locations of all objects which have recently entered the field of view, will still contain a mask of the now stationary object. By examining the location of the missing region in the background mask, it is usually possible to find and identify a stationary region corresponding to the lost moving region (step 76). The region is then labelled with the same label as the region in the previous frame (step 78). If the region remains stationary for several frames it can be "tracked" in the background mask.
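A minimal sketch of the background-mask look-up: check how much of the lost region's last known bounding box is still flagged in the mask. The coverage threshold, the binary mask representation and the function name are assumptions, not details taken from the patent:

```python
def find_in_background_mask(mask, last_box, min_coverage=0.5):
    """Look for a lost region as a stationary object: count how much of
    its last known bounding box (x0, y0, x1, y1) is still flagged in the
    background mask, which records everything that has recently entered
    the field of view.  min_coverage is an illustrative acceptance
    threshold."""
    x0, y0, x1, y1 = last_box
    area = (x1 - x0) * (y1 - y0)
    covered = sum(mask[y][x] for y in range(y0, y1) for x in range(x0, x1))
    return covered >= min_coverage * area
```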
In the case of a region corresponding to an object entering the travel area (zone) via the exit area (zone), if the lost region still cannot be identified after inspection of the background mask and the last known position was in the travel zone, a preliminary alarm state will be activated (step 80). An expected arrival time into the entrance area is calculated from the known motion and distance travelled (step 82). This information is used when determining the Alarm State in the next step. In terms of the state machine model described above, the object has in this case changed from state 1 (illegal entry state) to state 2 (travel state) and has then been lost. State 2 is then retained during the period of the preliminary alarm state.
Finally, any region in the current frame that has not been labelled is assigned a new label (step 84).
Determine Alarm State
The Alarm State is set for either of two reasons, as shown in figure 6, which expands step 60 of figure 4, starting at the step of entering the "determine-alarm-state routine" (step 100).
The first reason for an alarm state occurs (step 106) when a region is clearly tracked from the exit zone (step 102) (state 1), through the travel zone (state 2) and into the entrance zone (step 104) (state 3).
An alarm state may also be set after a region has been tracked into the travel zone from the exit zone. As discussed above, a preliminary alarm state is set when an object cannot be located from one frame to the next. When the preliminary alarm state has been set (step 108), a full alarm state will be set if a region is subsequently tracked from the travel zone into the entrance zone (step 104) (state 3) at about the expected arrival time (ETA) of the first region in the exit zone, or within a pre-defined delay period. When the delay period has elapsed, or after a delay based on the ETA, the preliminary alarm state is reset. The pre-defined delay period is operator-configurable and determines the time for which an object may be lost before the preliminary alarm status is cancelled. While the preliminary alarm status is in effect, false alarms may be raised due to an aliasing effect (for example, a person entering the surveillance area through the entrance, reversing and then leaving). As such, the benefit of this delay period will be lost should its magnitude be of the order of minutes or hours rather than seconds.
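The alarm bookkeeping described above can be sketched as a small state holder. This is an illustration under stated assumptions: the class, its method names and the frame-number timing are invented for the sketch, and only the behaviour (preliminary alarm on loss, full alarm on a tracked or a timely subsequent arrival, reset after the delay/ETA window) follows the text:

```python
from typing import Optional

# States of the model described in the text:
ILLEGAL_ENTRY, TRAVEL, ARRIVED = 1, 2, 3

class AlarmLogic:
    """Preliminary/full alarm bookkeeping; times are frame numbers."""

    def __init__(self, delay_frames: int):
        self.delay_frames = delay_frames          # operator-configured period
        self.prelim_set_at: Optional[int] = None  # frame at which region was lost
        self.eta: Optional[int] = None            # expected arrival frame

    def region_lost_in_travel_zone(self, now: int, eta: int) -> None:
        # A state-2 region vanished and was not found in the background
        # mask: raise the preliminary alarm and record when the object
        # should reach the entrance zone (step 80 / step 82).
        self.prelim_set_at, self.eta = now, eta

    def region_entered_entrance_zone(self, now: int, tracked_from_exit: bool) -> bool:
        if tracked_from_exit:
            return True        # clean state 1 -> 2 -> 3 track: full alarm
        if self.prelim_set_at is None:
            return False       # no preliminary alarm outstanding
        deadline = max(self.prelim_set_at, self.eta) + self.delay_frames
        return now <= deadline  # arrival near the ETA / within the delay

    def tick(self, now: int) -> None:
        # Reset the preliminary alarm once the delay period has elapsed.
        if self.prelim_set_at is not None:
            if now > max(self.prelim_set_at, self.eta) + self.delay_frames:
                self.prelim_set_at = self.eta = None
```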
When an alarm state is set (step 106) , an alarm procedure is initiated (step 110) and then the system returns to capture and process frames from the video camera (step 112) . When the alarm procedure is initiated, any appropriate action may be triggered automatically, such as sounding an alarm, triggering a recording camera (to record an image of a person passing the wrong way through an airport departure gate, for example) , alerting security personnel or automatically closing a door to block the exit of a person who has passed the wrong way through the surveillance area (step 114).
The system of the embodiment thus does not attempt to track individual objects, but looks for regions of similar motion and then tracks those regions through the scene. If a tracked region is lost, a fallback determines whether the region is still valid from the background mask. If the region still cannot be identified, a preliminary alarm state is set to identify the object if it enters the next zone.

Claims

1. A method for detecting motion of an object across a surveillance area from a first zone to a second zone, comprising the steps of: capturing in successive video frames a series of images of the surveillance area; correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in a background mask, the background mask being updated frame-by-frame; if no corresponding region is found in the background mask, setting a preliminary alarm state; and setting an alarm state if: either a moving region is tracked leaving the first zone and subsequently entering the second zone; or a moving region is tracked leaving the first zone, is lost, and another moving region subsequently enters the second zone while a preliminary alarm state is set.
2. A method according to claim 1, in which the first and second zones are separated by a travel zone and an alarm state is set if either a moving region is tracked entering the travel zone from the first zone and subsequently leaving the travel zone by entering the second zone or a moving region is tracked entering the travel zone from the first zone, is lost while it is within the travel zone, and another moving region subsequently leaves the travel zone by entering the second zone while a preliminary alarm state is set.
3. A method according to claim 2, in which the surveillance area is transversely divided into the first zone, the travel zone and the second zone.
4. A method according to claim 1, in which when an alarm state is set, an alarm is raised and/or an image of the surveillance area, or at least of the second zone, is recorded and/or a door is closed or other barrier set to block an exit from the second zone.
5. A method according to claim 1, in which successive frames are correlated by defining a plurality of correlators within one frame, calculating a correlation value for a plurality of possible positions of each correlator within a respective search area in the next frame, and if a correlator match is found in the search area assigning a motion vector to the correlator based on the offset between the correlator position and the correlator match position.
6. A method according to claim 5, in which the size of each search area is predetermined depending on the expected maximum speed of objects moving across the surveillance area and/or on the reliability of correlator matching for fast-moving objects.
7. A method according to claim 5, in which the calculation of correlation values for a particular correlator at different positions within the respective search area is ended when a correlation value below a predetermined correlation value threshold is found.
8. A method according to claim 7, in which the correlation value threshold varies depending on the magnitude of a correlation value calculated for the same correlator in the previous frame.
9. A method according to claim 5, in which the calculation of correlation values for a correlator is started at or near a correlator position within the respective search area corresponding to the motion vector for the same correlator in the previous frame.
10. A method according to claim 1, in which, when a moving region is lost during the tracking of that moving region, no attempt is made to look for a corresponding region in the background mask if the moving region has not demonstrated sustained motion over a plurality of frames.
11. A method according to claim 1, in which a preliminary alarm state is cancelled after a predetermined time.
12. A method according to claim 1, in which the preliminary alarm state is cancelled after a time evaluated depending on the position and motion of the moving region which was lost and thus triggered the preliminary alarm state, by calculating the expected time of arrival of that moving region in the second zone.
13. A method according to claim 1, which can be implemented by a suitably-programmed personal computer (PC) to process real-time frames captured by a conventional monochrome video camera.
14. A method of raising an alarm when an object traverses a surveillance area in a predetermined direction from a first zone to a second zone, the method comprising the steps of: capturing in successive video frames a series of images of the surveillance area; correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in a background mask, the background mask being updated frame-by-frame; if no corresponding region is found in the background mask, setting a preliminary alarm state; and setting an alarm state if: either a moving region is tracked leaving the first zone and subsequently entering the second zone; or a moving region is tracked leaving the first zone, is lost, and another moving region subsequently enters the second zone while a preliminary alarm state is set.
15. A method for detecting motion of an object across a surveillance area in a predetermined direction comprising the steps of: capturing in successive video frames a series of images of the surveillance area; correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in a background mask, the background mask being updated frame-by-frame; if no corresponding region is found in the background mask, setting a preliminary alarm state; assigning states to moving regions such that a first state is assigned to a region as it is first detected on or after entry into the surveillance area in the predetermined direction, a second state is assigned after the region has been tracked for a predetermined time and/or distance and/or when the moving region passes a predetermined position and/or when a tracking reliability threshold has been exceeded and a third state is assigned either as the region is tracked until it moves near to leaving the surveillance area in the predetermined direction or, if the moving region is lost while in the second state, as another moving region moves near to leaving the surveillance area in the predetermined direction while a preliminary alarm state is set; and setting an alarm state if the assigned state changes sequentially from the first state to the second state to the third state.
16. A method according to claim 15, in which a preliminary alarm state is cancelled after a predetermined time.
17. A method according to claim 15, in which the preliminary alarm state is cancelled after a time evaluated depending on the position and motion of the moving region which was lost and thus triggered the preliminary alarm state, by calculating the expected time of assignment of the third state to that moving region.
18. A method of raising an alarm when an object traverses a surveillance area in a predetermined direction, the method comprising the steps of: capturing in successive video frames a series of images of the surveillance area; correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in a background mask, the background mask being updated frame-by-frame; if no corresponding region is found in the background mask, setting a preliminary alarm state; assigning states to moving regions such that a first state is assigned to a region as it is first detected on or after entry into the surveillance area in the predetermined direction, a second state is assigned after the region has been tracked for a predetermined time and/or distance and/or when the moving region passes a predetermined position and/or when a tracking reliability threshold has been exceeded and a third state is assigned either as the region is tracked until it moves near to leaving the surveillance area in the predetermined direction or, if the moving region is lost while in the second state, as another moving region moves near to leaving the surveillance area in the predetermined direction while a preliminary alarm state is set; and setting an alarm state if the assigned state changes sequentially from the first state to the second state to the third state.
19. An apparatus for detecting motion of an object across a surveillance area from a first zone to a second zone, comprising: a video camera for capturing in successive video frames a series of images of the surveillance area; a correlator coupled to an output of the camera for correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; a tracking means coupled to an output of the correlator for tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; a background mask updating means coupled to an output of the camera for generating an updated background mask corresponding to each successive frame; a background mask inspection means for, if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in the background mask; a preliminary alarm state setting means for setting a preliminary alarm state if no corresponding region is found in the background mask by the background mask inspection means; and an alarm state setting means for setting an alarm if: either a moving region is tracked by the tracking means leaving the first zone and subsequently entering the second zone; or a moving region is tracked by the tracking means leaving the first zone, is lost, and another moving region subsequently enters the second zone while a preliminary alarm state is set.
20. An apparatus according to claim 19, in which the first and second zones are separated by a travel zone and an alarm state is set if either a moving region is tracked entering the travel zone from the first zone and subsequently leaving the travel zone by entering the second zone or a moving region is tracked entering the travel zone from the first zone, is lost while it is within the travel zone, and another moving region subsequently leaves the travel zone by entering the second zone while a preliminary alarm state is set.
21. An apparatus according to claim 20, in which the surveillance area is transversely divided into the first zone, the travel zone and the second zone.
22. An apparatus according to claim 19, comprising an alarm means for, when an alarm state is set, raising an alarm, and/or recording an image of the surveillance area, or at least of the second zone, and/or closing a door or setting a barrier to block an exit from the second zone.
23. An apparatus according to claim 19, in which the video camera is a conventional monochrome video camera and the correlator, tracking means, background mask updating means and inspection means, preliminary alarm state setting means and alarm state setting means are implemented on a suitably-programmed personal computer (PC) operating in real time.
24. An apparatus for raising an alarm when an object traverses a surveillance area in a predetermined direction from a first zone to a second zone, the apparatus comprising; a video camera for capturing in successive video frames a series of images of the surveillance area; a correlator coupled to an output of the camera for correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; a tracking means coupled to an output of the correlator for tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; a background mask updating means coupled to an output of the camera for generating an updated background mask corresponding to each successive frame; a background mask inspection means for, if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in the background mask; a preliminary alarm state setting means for setting a preliminary alarm state if no corresponding region is found in the background mask by the background mask inspection means; an alarm state setting means for setting an alarm if; either a moving region is tracked by the tracking means leaving the first zone and subsequently entering the second zone; or a moving region is tracked by the tracking means leaving the first zone, is lost, and another moving region subsequently enters the second zone while a preliminary alarm state is set; and, an alarm raising means for raising an alarm and/or recording an image of the surveillance area and/or closing a door or setting a barrier if an alarm state is set.
25. An apparatus for detecting motion of an object across a surveillance area in a predetermined direction, comprising; a video camera for capturing in successive video frames a series of images of the surveillance area; a correlator coupled to an output of the camera for correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; a tracking means coupled to an output of the correlator for tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; a background mask updating means coupled to an output of the camera for generating an updated background mask corresponding to each successive frame; a background mask inspection means for, if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in the background mask; a preliminary alarm state setting means for setting a preliminary alarm state if no corresponding region is found in the background mask by the background mask inspection means; a state assigning means for assigning states to moving regions such that a first state is assigned to a region as it is first detected on or after entry into the surveillance area in the predetermined direction, a second state is assigned after the region has been tracked for a predetermined time and/or distance and/or when the moving region passes a predetermined position and/or after a tracking reliability threshold has been exceeded and a third state is assigned either as the region is tracked until it moves near to leaving the surveillance area in the predetermined direction or, if the moving region is lost while in the second state, as another moving region moves near to leaving the surveillance area in the predetermined direction while a preliminary alarm state is set; and an alarm state setting means for 
setting an alarm state if the assigned state changes sequentially from the first state to the second state to the third state.
26. An apparatus for raising an alarm when an object traverses a surveillance area in a predetermined direction, comprising: a video camera for capturing in successive video frames a series of images of the surveillance area; a correlator coupled to an output of the camera for correlating successive frames to identify moving regions within each frame and to evaluate motion vectors describing the motion of each moving region; a tracking means coupled to an output of the correlator for tracking moving regions in successive frames on the basis of their position in each frame and their motion to identify corresponding moving regions in successive frames; a background mask updating means coupled to an output of the camera for generating an updated background mask corresponding to each successive frame; a background mask inspection means for, if no moving region is found in a frame corresponding to a moving region in the previous frame, looking for a corresponding region in the background mask; a preliminary alarm state setting means for setting a preliminary alarm state if no corresponding region is found in the background mask by the background mask inspection means; a state assigning means for assigning states to moving regions such that a first state is assigned to a region as it is first detected on or after entry into the surveillance area in the predetermined direction, a second state is assigned after the region has been tracked for a predetermined time and/or distance and/or when the moving region passes a predetermined position and/or after a tracking reliability threshold has been exceeded and a third state is assigned either as the region is tracked until it moves near to leaving the surveillance area in the predetermined direction or, if the moving region is lost while in the second state, as another moving region moves near to leaving the surveillance area in the predetermined direction while a preliminary alarm state is set; an alarm state setting means for
setting an alarm state if the assigned state changes sequentially from the first state to the second state to the third state; and, an alarm raising means for raising an alarm and/or recording an image of the surveillance area and/or closing a door or setting a barrier if an alarm state is set.
27. A method for detecting motion of an object and raising an alarm substantially as described herein, with reference to the drawings.
28. An apparatus for detecting motion of an object and raising an alarm substantially as described herein with reference to the drawings.
PCT/GB1999/001457 1998-05-08 1999-05-10 Method and apparatus for detecting motion across a surveillance area WO1999059116A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9809807A GB2337146B (en) 1998-05-08 1998-05-08 Method and apparatus for detecting motion across a surveillance area
GB9809807.2 1998-05-08

Publications (1)

Publication Number Publication Date
WO1999059116A1 true WO1999059116A1 (en) 1999-11-18

Family

ID=10831643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1999/001457 WO1999059116A1 (en) 1998-05-08 1999-05-10 Method and apparatus for detecting motion across a surveillance area

Country Status (2)

Country Link
GB (1) GB2337146B (en)
WO (1) WO1999059116A1 (en)


Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
US6766098B1 (en) * 1999-12-30 2004-07-20 Koninklijke Philip Electronics N.V. Method and apparatus for detecting fast motion scenes
DE10042935B4 (en) * 2000-08-31 2005-07-21 Industrie Technik Ips Gmbh Method for monitoring a predetermined area and system
US9892606B2 (en) 2001-11-15 2018-02-13 Avigilon Fortress Corporation Video surveillance system employing video primitives
US8711217B2 (en) 2000-10-24 2014-04-29 Objectvideo, Inc. Video surveillance system employing video primitives
US8564661B2 (en) 2000-10-24 2013-10-22 Objectvideo, Inc. Video analytic rule detection system and method
US7424175B2 (en) 2001-03-23 2008-09-09 Objectvideo, Inc. Video segmentation using statistical pixel modeling
AUPR899401A0 (en) 2001-11-21 2001-12-13 Cea Technologies Pty Limited Method and apparatus for non-motion detection
GB0218982D0 (en) * 2002-08-15 2002-09-25 Roke Manor Research Video motion anomaly detector
GB0219665D0 (en) * 2002-08-23 2002-10-02 Scyron Ltd Interactive apparatus and method for monitoring performance of a physical task
US20040233282A1 (en) * 2003-05-22 2004-11-25 Stavely Donald J. Systems, apparatus, and methods for surveillance of an area
GB2409029A (en) * 2003-12-11 2005-06-15 Sony Uk Ltd Face detection
WO2005058668A2 (en) * 2003-12-19 2005-06-30 Core E & S. Co., Ltd Image processing alarm system and method for automatically sensing unexpected accident related to train
GB2416943A (en) 2004-08-06 2006-02-08 Qinetiq Ltd Target detection
TW200643824A (en) * 2005-04-12 2006-12-16 Koninkl Philips Electronics Nv Pattern based occupancy sensing system and method
CA2649389A1 (en) 2006-04-17 2007-11-08 Objectvideo, Inc. Video segmentation using statistical pixel modeling
RU2509355C2 (en) 2008-07-08 2014-03-10 Нортек Интернэшнл (Пти) Лимитед Apparatus and method of classifying movement of objects in monitoring zone
DE102009000173A1 (en) 2009-01-13 2010-07-15 Robert Bosch Gmbh Device for counting objects, methods and computer program

Citations (2)

Publication number Priority date Publication date Assignee Title
US4249207A (en) * 1979-02-20 1981-02-03 Computing Devices Company Perimeter surveillance system
WO1996006368A1 (en) * 1994-08-24 1996-02-29 Sq Services Ag Image-evaluation system and process

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
GB8925298D0 (en) * 1989-11-09 1990-08-08 Marconi Gec Ltd Object tracking
DE4314483A1 (en) * 1993-05-03 1994-11-10 Philips Patentverwaltung Surveillance system
US5574762A (en) * 1994-08-31 1996-11-12 Nippon Telegraph And Telephone Corp. Method and apparatus for directional counting of moving objects
US5666157A (en) * 1995-01-03 1997-09-09 Arc Incorporated Abnormality detection and surveillance system


Non-Patent Citations (1)

Title
DUANE ARLOWE H ET AL: "THE MOBILE INTRUSION DETECTION AND ASSESSMENT SYSTEM (MIDAS)", PROCEEDINGS OF THE INTERNATIONAL CARNAHAN CONFERENCE ON SECURITY TECHNOLOGY: CRIME COUNTERMEASURES, LEXINGTON, OCT. 10 - 12, 1990, 10 October 1990 (1990-10-10), JACKSON J S, pages 54 - 61, XP000222754 *

Cited By (17)

Publication number Priority date Publication date Assignee Title
WO2007072370A2 (en) * 2005-12-20 2007-06-28 Koninklijke Philips Electronics, N.V. Method and apparatus for estimating object speed
WO2007072370A3 (en) * 2005-12-20 2007-12-13 Koninkl Philips Electronics Nv Method and apparatus for estimating object speed
US8212669B2 (en) 2007-06-08 2012-07-03 Bas Strategic Solutions, Inc. Remote area monitoring system
US8948903B2 (en) 2007-12-21 2015-02-03 Robert Bosch Gmbh Machine tool device having a computing unit adapted to distinguish at least two motions
US10816658B2 (en) 2016-09-07 2020-10-27 OmniPreSense Corporation Radar enabled weapon detection system
US10789472B1 (en) * 2017-06-14 2020-09-29 Amazon Technologies, Inc. Multiple image processing and sensor targeting for object detection
CN109859149A (en) * 2019-01-25 2019-06-07 成都泰盟软件有限公司 A kind of setting target lookup region toy motion tracking method
CN109859149B (en) * 2019-01-25 2023-08-08 成都泰盟软件有限公司 Small animal motion tracking method for setting target searching area
WO2021015672A1 (en) * 2019-07-24 2021-01-28 Hitachi, Ltd. Surveillance system, object tracking system and method of operating the same
CN113379984A (en) * 2020-02-25 2021-09-10 北京君正集成电路股份有限公司 Electronic nursing fence system
CN113379984B (en) * 2020-02-25 2022-09-23 北京君正集成电路股份有限公司 Electronic nursing fence system
CN113379985A (en) * 2020-02-25 2021-09-10 北京君正集成电路股份有限公司 Nursing electronic fence alarm device
CN113379985B (en) * 2020-02-25 2022-09-27 北京君正集成电路股份有限公司 Nursing electronic fence alarm device
CN113379987A (en) * 2020-02-25 2021-09-10 北京君正集成电路股份有限公司 Design method of nursing electronic fence module
CN113379987B (en) * 2020-02-25 2022-10-21 北京君正集成电路股份有限公司 Design method of nursing electronic fence module
CN112435440A (en) * 2020-10-30 2021-03-02 成都蓉众和智能科技有限公司 Non-contact type indoor personnel falling identification method based on Internet of things platform
CN112435440B (en) * 2020-10-30 2022-08-09 成都蓉众和智能科技有限公司 Non-contact type indoor personnel falling identification method based on Internet of things platform

Also Published As

Publication number Publication date
GB9809807D0 (en) 1998-07-08
GB2337146A (en) 1999-11-10
GB2337146B (en) 2000-07-19

Similar Documents

Publication Publication Date Title
WO1999059116A1 (en) Method and apparatus for detecting motion across a surveillance area
Cucchiara et al. The Sakbot system for moving object detection and tracking
Mukojima et al. Moving camera background-subtraction for obstacle detection on railway tracks
Yang et al. Real-time multiple objects tracking with occlusion handling in dynamic scenes
JP2002536646A (en) Object recognition and tracking system
US20080212099A1 (en) Method for counting people passing through a gate
Liu et al. Moving object detection and tracking based on background subtraction
Bartolini et al. Counting people getting in and out of a bus by real-time image-sequence processing
Xu et al. Segmentation and tracking of multiple moving objects for intelligent video analysis
CN108932464A (en) Passenger flow volume statistical method and device
Koller et al. Towards realtime visual based tracking in cluttered traffic scenes
JP4025007B2 (en) Railroad crossing obstacle detection device
Ekinci et al. Background estimation based people detection and tracking for video surveillance
Black et al. A real time surveillance system for metropolitan railways
JP2001043383A (en) Image monitoring system
Landabaso et al. Robust tracking and object classification towards automated video surveillance
Kim et al. Real-time system for counting the number of passing people using a single camera
Akhloufi Pan and tilt real-time target tracking
Corvee et al. Occlusion tolerant tracking using hybrid prediction schemes
AU760298B2 (en) Object recognition and tracking system
Li et al. Real time tracking of moving pedestrians
Fuentes et al. Tracking people for automatic surveillance applications
Shao et al. Eleview: an active elevator video surveillance system
Rotaru et al. Captive Animal Stress Study by Video Analysis
Draganjac et al. Dual camera surveillance system for control and alarm generation in security applications

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): IL JP KR US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

NENP Non-entry into the national phase

Ref country code: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WA Withdrawal of international application