WO2008088409A2 - Real-time dynamic content based vehicle tracking, traffic monitoring, and classification system - Google Patents

Real-time dynamic content based vehicle tracking, traffic monitoring, and classification system

Info

Publication number
WO2008088409A2
WO2008088409A2 (PCT application PCT/US2007/021316)
Authority
WO
WIPO (PCT)
Prior art keywords
regions
region
image
vehicle
single vehicle
Prior art date
Application number
PCT/US2007/021316
Other languages
French (fr)
Other versions
WO2008088409A3 (en)
Inventor
Yingzi Du
Francis Bowen
Original Assignee
Indiana University Research & Technology Corporation
Priority date
Filing date
Publication date
Application filed by Indiana University Research & Technology Corporation
Publication of WO2008088409A2
Publication of WO2008088409A3

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/015 - Detecting movement of traffic to be counted or controlled with provision for distinguishing between two or more types of vehicles, e.g. between motor-cars and cycles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/215 - Motion-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/254 - Analysis of motion involving subtraction of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30236 - Traffic on road, railway or crossing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior


Abstract

A method of monitoring and classifying vehicle traffic (figure 14) comprises receiving a plurality of video frames of traffic data (item 100); subtracting a background image from each video frame of the plurality to form a segmented image having at least one object region (item 108); connecting adjacent object regions in each segmented image into a combined region in response to the adjacent object regions having centers in a common lane region of the video frame, distances between the centers less than a threshold distance, and a similar characteristic in proximate boundary portions (item 110); separating object regions in each segmented image that are located in multiple lane regions of the segmented image into sub-regions corresponding to the lane regions (item 114); subtracting sub-regions from the segmented image that are smaller than a threshold size (item 118); and labeling the combined regions and the remaining sub-regions as single vehicle regions (item 124).

Description

Real-Time Dynamic Content Based Vehicle Tracking, Traffic Monitoring, and Classification System
Technical Field
[0001] The present invention relates generally to an object monitoring system and, in particular, to a system for monitoring vehicles.
Background
[0002] In the past decades, traffic data has been widely used in transportation planning, highway operations, traffic analysis, and performance measurement. Traffic data may be obtained from different sensors such as pneumatic sensors, loop detectors, or cameras. Among them, loop detectors are often used to count vehicles. Compared to loop detectors, vehicle monitoring systems that use cameras offer a number of advantages. For instance, a larger set of traffic parameters, such as lane changing, congestion, and accidents, can be obtained and measured based on the information content obtained from image sequences, and cameras are easier to install and are usually less costly.
[0003] Vehicle tracking and traffic monitoring systems based on video image processing have been an active research topic in computer vision and image processing. One system implements a feature-based method with occlusion reasoning for tracking vehicles in congested traffic scenes. This approach is computationally expensive. Another system employs an adaptive background subtraction method to track moving vehicles. Subtracting the background is a popular technique for moving object tracking. These systems differ in how they obtain the background and how they subtract the background from a captured frame. Some subtraction systems attempt to track lane changes, while another endeavors to detect and classify vehicles using instantaneous backgrounds. Some systems define the background as a slow, time-varying image sequence, while another updates the background by adding a weighted background obtained from the current frame to the previous background.
[0004] A key aspect of vision based monitoring systems is image segmentation.
The segmentation is typically assumed to extract the object of interest from the background image accurately and autonomously. Existing image segmentation algorithms assume that the region of the object of interest is uniform and homogeneous and that adjacent regions differ significantly. These assumptions are often wrong. Fig. 1 shows an example of a cargo truck carrying a number of cars in a trailer coupled to the truck. The truck should be counted as only one vehicle. The region of this object, however, is neither uniform nor homogeneous. Fig. 2 shows a group of similar-looking trucks. While two vehicles are in the group, they are difficult to distinguish because the characteristics of the two objects are very close to each other. Consequently, some image processing systems may connect these two vehicle images into a single region. That region would be detected as one vehicle if the first assumption is used.
[0005] Content-based image segmentation methods are implemented with a variety of different approaches. Some systems use a relationship tree matching approach to achieve hierarchical color region segmentation. This approach is intended to facilitate retrieval of content-based image information. Adaptive perceptual color-texture based methods are sometimes used for segmentation of natural scenes. Matching feature distributions based on color gradients for content-based image retrieval of textured objects may also be used to process natural scenes. Semiautomatic video object segmentation using v-snakes is effective in some applications, but the semantic objects require initialization with human assistance. In a wrapper-based approach for image segmentation and classification, the shape of the desired object is used for feature extraction and classification as an integrated part of image segmentation.

[0006] None of these methods, however, is able to effectively monitor and classify vehicles on a road segment. Vehicle classification is difficult because vehicles on a highway have very different shapes. This variety is exemplified by Fig. 3, which shows the 13 vehicle classes used by the Federal Highway Administration (FHWA) to identify vehicles. In actual video image processing even more variety is encountered, as individual vehicles in the same class can be very different.
Summary
[0007] A method of monitoring and classifying vehicle traffic overcomes the limitations of previously known image processing methods by implementing a dynamic content-based image segmentation method. The method comprises receiving a plurality of video frames of traffic data; subtracting a background image from each video frame of the plurality to form a segmented image having at least one object region; connecting adjacent object regions in each segmented image that have centers in a common lane region of the video frame and distances between the centers less than a threshold distance and that have a similar characteristic in proximate boundary portions into a combined region; separating object regions in each segmented image that are located in multiple lane regions of the segmented image into sub-regions corresponding to the lane regions; subtracting sub-regions from the segmented image that are smaller than a threshold size; and labeling the combined regions and the remaining sub-regions as single vehicle regions.
[0008] A vehicle monitoring and classifying system comprises a background subtractor for subtracting a background image from a plurality of video frames to form a plurality of segmented images; a region connector for connecting adjacent object regions in the plurality of segmented images that have centers in a common lane region of the segmented image and distances between the centers less than a threshold distance and that have a similar characteristic in proximate boundary portions into a combined region; a region separator for separating object regions in the plurality of segmented images that are located in multiple lane regions of the segmented image into sub-regions corresponding to the lane regions and for subtracting sub-regions from a respective segmented image that are smaller than a threshold size; and a labeler for labeling the combined regions and the remaining sub-regions as single vehicle regions.
Brief Description of the Drawings
[0009] FIG. 1 is a photo of an embodiment of a vehicle that presents problems for traditional image segmentation methods.
[0010] FIG. 2 is a photo of another embodiment of a vehicle that presents problems for traditional image segmentation methods.
[0011] FIG. 3 shows the Federal Highway Administration (FHWA) vehicle classification system.
[0012] FIG. 4 is an example of how a vehicle can appear in multiple lanes.
[0013] FIG. 5 is another example of how a vehicle can appear in multiple lanes.
[0014] FIG. 6 shows a video image before processing.
[0015] FIG. 7 shows the video image of FIG. 6 downsampled.
[0016] FIG. 8 shows the calculated background image of the video image of FIG. 7.
[0017] FIG. 9 shows the image after the background image has been subtracted.
[0018] FIG. 10 shows the image after being compared to a subtraction threshold value.
[0019] FIG. 11 shows the image after sub-regions have been subtracted.
[0020] FIG. 12 shows the image after adjacent cars have been separated.
[0021] FIG. 13 shows the final segmentation image.
[0022] FIG. 14 shows a flowchart of an embodiment of a method for monitoring and classifying vehicle traffic.
[0023] FIG. 15 shows a schematic diagram of a system for implementing the method of FIG. 14.
[0024] FIG. 16 shows a schematic diagram of an image segmenter of the system of FIG. 15.
[0025] FIG. 17 depicts a representation of a camera view adjustment to compensate for geometric distortions.
Detailed Description of Exemplary Embodiments and Processes
[0026] FIG. 15 is a diagram of a system for monitoring and classifying vehicle traffic. System 10, however, may be used to monitor and/or classify any type of object acquired in a series of video images, and to track the object through a succession of digitized images. The system 10 comprises a video image capturing device 14, which is coupled to an image processing system 18. An exemplary video image capturing device may include a camera with a video recording or video streaming function and a frame resolution greater than 200 x 240. The camera 14 is positioned to capture video images of vehicle traffic. To this end, the camera may be mounted on a support, such as, for example, an overpass or pylon, so the camera is above vehicles passing by the camera. The camera may be mounted at any suitable angle for capturing video images of traffic. For example, the camera may be mounted having a horizontal angle of approximately -60 to approximately 60 degrees. The vertical angle may be within a range of approximately 20 to approximately 90 degrees.
[0027] Video images of the flow of traffic are provided to the image processing system 18 as a plurality of video frames. The format of the video images may be any suitable format such as, for example, mpeg, mpeg2, avi, and the like. This system can be used in a real-time situation in which video streaming from the camera is processed, or an off-line situation in which recorded video frames are processed. Each frame may comprise an array of pixels. Each pixel has a light intensity value for a corresponding portion of the captured image. The pixels may have color values, although the dynamic content-based segmentation method discussed herein may also be practiced with pixels not having color values. Typically, the value of each pixel is stored as digital data on a tape, disk, or other memory device, such as the memory 20, for manipulation by the image processing system 18.
[0028] The image processing system 18 may include an image preprocessor 24, an image segmenter 28, a region tracker 30, and a region classifier 34. In one embodiment, the image preprocessor 24, image segmenter 28, region tracker 30 and region classifier 34 may be implemented as software programs in the image processing system 18. Thus, the image processing system 18 also preferably comprises at least one processor (not shown) and input/output (I/O) interface (not shown) for implementing the functions of the image preprocessor, image segmenter, region tracker, and region classifier.
[0029] The image preprocessor 24 is configured to filter noise from the video frame data that may occur during video frame collection. In addition, the preprocessor may be configured to scale down, or downsample, the video image to improve the speed of image processing. In this system, the requirement for image resolution is very low. In at least one embodiment, the system can work on video images in which the resolution of the frames is more than 20K pixels (around 100x200) and/or the frame rate is more than 15 frames/second. Higher resolutions typically require more computational time for image processing.
[0030] The image segmenter 28 is configured to segment the image into object regions (described more fully below). To this end, the image segmenter may include a background subtractor, a region connector, a region separator, and a region labeler, as shown in FIG. 16. The background subtractor subtracts a background image from the frame to reveal foreground, or object, regions. The region connector and the region separator then either combine or separate the object regions, which may then be tagged or labeled by the labeler as single vehicle regions representing single vehicles (described more fully below). The region tracker 30 is configured to track each object region, or vehicle region, from frame to frame to determine direction of travel, flow of traffic, lane changing habits, and the like. The region classifier 34 may then classify the vehicle based on characteristics of the object region, such as, for example, the relative size of the object region. The classification results may then be output in any suitable manner.
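The division of labor among these components can be summarized in a short sketch. This is an illustrative outline only; the class and method names below are assumptions, not identifiers from the patent.

```python
# Illustrative pipeline skeleton for the components of FIGS. 15 and 16.
# All names are hypothetical; the patent does not define a software API.

class ImageSegmenter:
    """Background subtraction -> region connection -> region separation -> labeling."""

    def __init__(self, subtractor, connector, separator, labeler):
        self.subtractor = subtractor
        self.connector = connector
        self.separator = separator
        self.labeler = labeler

    def segment(self, frame, background):
        regions = self.subtractor.subtract(frame, background)  # foreground object regions
        regions = self.connector.connect(regions)              # merge pieces of one vehicle
        regions = self.separator.separate(regions)             # split regions spanning lanes
        return self.labeler.label(regions)                     # single-vehicle regions


def process_frame(frame, preprocessor, segmenter, background, tracker, classifier):
    """One pass through the pipeline: preprocess, segment, track, then classify."""
    frame = preprocessor.clean_and_downsample(frame)
    vehicle_regions = segmenter.segment(frame, background)
    tracks = tracker.update(vehicle_regions)
    return [classifier.classify(track) for track in tracks]
```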
[0031] Referring to FIG. 14, a flowchart of a method for monitoring and classifying vehicle traffic is shown. The method comprises receiving a plurality of video frames of traffic data (block 100). FIG. 6 shows an example of a video frame. The plurality of video frames is provided to an image processing system, where the video frames may be filtered to remove noise from the video frame data that may occur during video frame collection. In addition, the video frames may be scaled down, or downsampled, to improve the speed of image processing. FIG. 7 shows the image of FIG. 6 after it has been downsampled. After the frames have been filtered and/or downsampled, the video frames may be partitioned into lane regions.

[0032] A background image is then calculated by averaging a selected number of video frames (block 104). As mentioned, the image segmenter may store a "background" image. The background image forms a default or base image to which all of the source images are compared. In its simplest form, the background image may be an image that is captured when it is known that no extraneous objects (e.g., vehicles) are within the field of view of the camera. More typically, however, the background image is formed by averaging together a number of video frames. Any suitable number of frames may be used to determine a background image; for example, 100 frames may be used. This allows the background image to be continuously updated, which allows environmental changes, such as subtle changes in lighting conditions, to be gradually incorporated into the background image. FIG. 8 shows the calculated background image of the video image of FIG. 7. Once the background image has been calculated, the background image may be subtracted from each video frame to form a preliminary segmented image having a plurality of object regions (block 108). FIG. 9 shows the image of FIG. 7 after the background image has been subtracted. The preliminary segmented image may be calculated by:
g_i(x,y) = 1, if s_i(x,y) ≥ T; g_i(x,y) = 0, otherwise.   (1)
Here, T is the threshold and s_i(x,y) is the subtraction result: s_i(x,y) = d_c(f_i(x,y), bk(x,y)).   (2)
d_c(f_i(x,y), bk(x,y)) is the color distance, i.e., the Euclidean distance in RGB; f_i(x,y) is the pixel (x,y) in the ith input image; and bk(x,y) is the pixel (x,y) in the background image.
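A minimal NumPy sketch of the background averaging (block 104) and the thresholded subtraction of Eqs. (1)-(2) is shown below. The function names and the choice of NumPy are assumptions for illustration; frames are assumed to be H x W x 3 RGB arrays.

```python
import numpy as np

def estimate_background(frames):
    """Background image as the per-pixel average of a set of video frames (block 104)."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0)

def preliminary_segmentation(frame, background, T):
    """Eqs. (1)-(2): Euclidean color distance in RGB, then comparison with threshold T."""
    diff = frame.astype(np.float64) - background       # f_i(x,y) - bk(x,y), per channel
    s = np.sqrt(np.sum(diff ** 2, axis=-1))            # s_i(x,y) = d_c(f_i, bk)
    return (s >= T).astype(np.uint8)                   # g_i(x,y): 1 marks an object pixel
```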
[0033] The color distance d_c(h_1(x,y), h_2(x,y)) between pixels h_1(x,y) and h_2(x,y) is calculated by:

d_c(h_1(x,y), h_2(x,y)) = sqrt( [h_1^r(x,y) - h_2^r(x,y)]^2 + [h_1^g(x,y) - h_2^g(x,y)]^2 + [h_1^b(x,y) - h_2^b(x,y)]^2 )

Here, h_1^r(x,y), h_1^g(x,y), and h_1^b(x,y) are the red, green, and blue dimensions of the pixel (x,y) of image h_1, respectively, and h_2^r(x,y), h_2^g(x,y), and h_2^b(x,y) are the red, green, and blue dimensions of the pixel (x,y) of image h_2, respectively. FIG. 10 shows the image after being compared to the subtraction threshold value T.

[0034] Once the background image has been subtracted, object regions of the segmented image that satisfy a region connection test may be connected (block 110). One aspect of this test is the elimination of small areas to reduce noise blocks. The purpose of region connection is to connect regions that possibly come from the same vehicle. The region of a vehicle in an image is often neither uniform nor homogeneous. The regions may be connected based on their locations and the similarities of their boundary characteristics. Regions are connected only if the following three rules are satisfied (a code sketch of the three rules follows the rule descriptions below): A) The centers of the regions are in the same lane. The center of a region is calculated by:
C_x^m = ( min{ x | g(x,y) ∈ R_m } + max{ x | g(x,y) ∈ R_m } ) / 2, and C_y^m = ( min{ y | g(x,y) ∈ R_m } + max{ y | g(x,y) ∈ R_m } ) / 2.   (3)
Here (C_x^m, C_y^m) is the coordinate of the center of the mth region. In this way, the holes in an imperfectly separated region are ignored. Lanes may be assumed to be straight lines. The linear model of each lane boundary, Eq. (4), is calculated from initial camera calibration parameters:

y = kx + b   (4)

(C_x^m, C_y^m) ∈ L_j, if j = arg min_j { D((C_x^m, C_y^m), LB_{j,1}) + D((C_x^m, C_y^m), LB_{j,2}) }.   (5)

Here D((x,y), LB_{j,k}) means the shortest distance from the point (x,y) to the jth lane L_j's kth boundary line LB_{j,k} (each lane has two boundary lines).
Rule A tests whether the center of the mth region (C_x^m, C_y^m) and the center of the nth region (C_x^n, C_y^n) belong to the same lane using Eq. (5).
B) The distance between the regions is smaller than the threshold T_d. The distance between the centers of two different regions is calculated using the Euclidean distance:

d_mn = sqrt( (C_x^m - C_x^n)^2 + (C_y^m - C_y^n)^2 ), if (C_x^m, C_y^m) ∈ L_i and (C_x^n, C_y^n) ∈ L_j; d_mn = ∞, otherwise.   (6)

Here L_i and L_j represent the areas of the ith and jth lanes.
C) The boundaries of the two regions have similar characteristics in the adjacent boundary areas. Typically, regions with similar characteristics over their entire areas are connected; that criterion is based on the assumption that the object has uniform, homogeneous characteristics. In dynamic content-based segmentation, the characteristics of the close sides of the region boundaries are used instead of the entire areas or entire boundaries. This relaxes the requirement that the object, or its boundaries, be uniform or homogeneous. If the average colors of the close sides satisfy d_c(B_i, B_j) < T_bc, the two regions satisfy Rule C. T_bc is the threshold, and B_i and B_j are the average colors of the close sides for regions i and j.

To connect two regions, the gaps between the adjacent boundaries are closed. The region labels are updated after connecting regions.
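The three connection rules can be expressed compactly as below. This is a hedged sketch: the helper names are hypothetical, lane boundaries are assumed to be given as line parameters (k, b) from Eq. (4), and the average boundary colors for Rule C are assumed to be computed by the caller.

```python
import numpy as np

def point_to_line_distance(point, k, b):
    """Shortest distance from (x, y) to the line y = k*x + b (the D(., LB) term in Eq. (5))."""
    x, y = point
    return abs(k * x - y + b) / np.hypot(k, 1.0)

def lane_of(center, lane_boundaries):
    """Rule A helper (Eq. (5)): the lane whose two boundary lines give the smallest summed distance."""
    costs = [sum(point_to_line_distance(center, k, b) for k, b in lines)
             for lines in lane_boundaries]            # lines = [(k1, b1), (k2, b2)] per lane
    return int(np.argmin(costs))

def should_connect(center_m, center_n, boundary_color_m, boundary_color_n,
                   lane_boundaries, T_d, T_bc):
    """Rules A-C: same lane, center distance below T_d, similar adjacent-boundary color."""
    rule_a = lane_of(center_m, lane_boundaries) == lane_of(center_n, lane_boundaries)
    rule_b = np.hypot(center_m[0] - center_n[0], center_m[1] - center_n[1]) < T_d
    rule_c = np.linalg.norm(np.asarray(boundary_color_m, float)
                            - np.asarray(boundary_color_n, float)) < T_bc
    return rule_a and rule_b and rule_c
```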
[0035] Once the regions indicating a single vehicle have been combined, object regions that are in more than one lane region may be separated into object sub-regions corresponding to the lane region in which each object sub-region is located (block 114). Object sub-regions from the segmented image that are smaller than a sub-region threshold value are subtracted from the segmented image (block 118). The object sub-regions that are larger than the sub-region threshold value are labeled as object regions. For example, some regions may cross more than one lane in a video frame; such a region may come from the same vehicle (Fig. 4) or from multiple vehicles (Fig. 5). Properly disconnecting the regions from multiple vehicles is necessary for an accurate traffic count. At the same time, the region from a single vehicle crossing multiple lanes should be kept in one piece.

[0036] The following processing is used to separate the object regions (a code sketch of these steps appears after the list below):

1) Separate the region into sub-regions based on the boundaries of the lanes. For each sub-region, if the area is small, the sub-region is eliminated.
2) Calculate the mass center (which is different from Eq. 3) of each sub-region by:

M_x^{j,l} = (1/|s_l^j|) Σ_{(x,y) ∈ s_l^j} x, and M_y^{j,l} = (1/|s_l^j|) Σ_{(x,y) ∈ s_l^j} y.

Here (M_x^{j,l}, M_y^{j,l}) is the mass center of the lth sub-region of Region j, s_l^j.
3) Calculate the shortest distances from the mass center of this sub-region to the lane boundaries, D((M_x^{j,l}, M_y^{j,l}), LB_{n,1}) and D((M_x^{j,l}, M_y^{j,l}), LB_{n,2}), respectively. Here the sub-region s_l^j is in the nth lane.
4) Decide whether this sub-region should stay or disappear by:

δ_l^j = 1, if min{ D((M_x^{j,l}, M_y^{j,l}), LB_{n,1}), D((M_x^{j,l}, M_y^{j,l}), LB_{n,2}) } < T_d; δ_l^j = 0, otherwise.   (7)

Here T_d is the threshold for the distance. If δ_l^j = 1, the sub-region s_l^j is eliminated. Otherwise, the sub-region s_l^j is separated as a new region.
FIGS. 11, 12, and 13 show the region separation results based on Eq. (7).
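The four separation steps might look like the following sketch. It assumes a region is given as an N x 2 array of (x, y) pixel coordinates together with a per-pixel lane assignment; these representations and the function name are illustrative assumptions.

```python
import numpy as np

def separate_by_lanes(pixels, pixel_lane, lane_boundaries, T_d, min_area):
    """Steps 1-4 above: split a region by lane, drop small slivers, and drop sub-regions
    whose mass center lies within T_d of a lane boundary (Eq. (7))."""
    kept = []
    for n in np.unique(pixel_lane):
        sub = pixels[pixel_lane == n]                      # sub-region s_l^j in lane n
        if len(sub) < min_area:
            continue                                       # step 1: eliminate small sub-regions
        mx, my = sub.mean(axis=0)                          # step 2: mass center (M_x, M_y)
        d = min(abs(k * mx - my + b) / np.hypot(k, 1.0)    # step 3: distance to the two
                for k, b in lane_boundaries[n])            #         boundary lines of lane n
        if d >= T_d:                                       # step 4: Eq. (7); delta = 0 keeps it
            kept.append(sub)                               # kept sub-region becomes a new region
    return kept
```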
[0037] After the region separation, the regions may be relabeled. Object regions that satisfy the three rules listed above for connecting regions, if any, may then be searched for and connected.
[0038] Once each object region has been determined to represent a single vehicle, the object regions may be tracked by identifying them in successive video frames (block 120). In vehicle tracking, the moving direction of a region may be found from its relative locations in two consecutive video frames. If two regions from two consecutive video frames overlap by more than a threshold percentage, the two regions are from the same vehicle.
[0039] If the entire area of a region is used to track the vehicle, the computational complexity increases. Moreover, lane changing or other abnormal traffic patterns can be misidentified. Therefore, in one embodiment, a center-tracking method is used. Suppose R_m^{i-1} and R_m^i are from the same vehicle, and (C_x^m, C_y^m)^{i-1} and (C_x^m, C_y^m)^i are the centers of R_m^{i-1} and R_m^i, respectively. The moving direction of the vehicle is decided by the locations of the centers. By tracking vehicle movement over the initial video frames, traffic patterns for each lane may be determined.

[0040] Once the regions have been identified and tracked, the object regions may be classified as a vehicle type based on the size of the object region (block 124). To classify vehicles accurately, multiple counts of the same vehicle and missed counts should be avoided. In real traffic situations, traffic patterns may be abnormal. For example, a vehicle may stop while driving because of mechanical problems.

[0041] In one embodiment, the camera is set up to monitor incoming or outgoing traffic, and an invisible counting line is initialized at a position horizontally close to the bottom of the video image. This invisible counting line can instead be initialized vertically if the vehicles are in a side view. A vehicle is counted only when its center passes the counting line in the same direction as the traffic pattern for that lane.
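A small sketch of the center-tracking and counting-line logic described above follows; the function names and the sign convention for the lane's traffic direction are assumptions.

```python
def moving_direction(prev_center, curr_center):
    """Direction of travel from the region centers in two consecutive frames."""
    return (curr_center[0] - prev_center[0], curr_center[1] - prev_center[1])

def counts_at_line(prev_center, curr_center, line_y, lane_direction):
    """Count a vehicle only when its center crosses the horizontal counting line at y = line_y
    while moving in the lane's traffic direction (+1: increasing y, -1: decreasing y).
    A vertical counting line for side views would swap the roles of x and y."""
    dy = curr_center[1] - prev_center[1]
    if dy == 0 or (dy > 0) != (lane_direction > 0):
        return False                                  # not moving with the lane's traffic pattern
    return (prev_center[1] - line_y) * (curr_center[1] - line_y) <= 0   # crossed the line
```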
[0042] After the vehicle has been detected and counted, it may be classified into four different types, defined as follows:
Type 1: Passenger vehicles and pickup trucks, including vehicle classes 1-3 in Fig. 3. This type of vehicle causes the least amount of damage to the road.

Type 2: Buses and trucks, including vehicle classes 4-7 in Fig. 3.

Type 3: Heavy trucks, including vehicle classes 8-10 in Fig. 3. This type of vehicle has special needs in road pavement.

Type 4: Extremely long trucks, including vehicle classes 11-13 in Fig. 3. Similar to Type 3, they have special needs in road pavement.
This 4-type classification scheme has been defined by the Indiana Department of Transportation for road pavement research, although any classification system may be used.

[0043] Classification may begin with a training period, which is a length of time measured in frames and which may be user specified. By default, the training period is 2000 frames (about one minute of video) in one embodiment. When a car is detected during the training period, the segmented vehicle is shown in color to the user for vehicle type input. A matrix is then updated: each row in the matrix corresponds to a lane number and each column corresponds to a class number. There are a total of four types, where the higher the type number, the larger the vehicle. The return value of the classification function is the size of the vehicle for a particular lane.
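The lane-by-type size matrix can be sketched as follows. The patent states that the classification function returns the vehicle size for a lane; the nearest-size lookup used here is an assumption, as are the array shapes and function names.

```python
import numpy as np

NUM_LANES, NUM_TYPES = 3, 4                       # illustrative values; the patent defines 4 types
type_size = np.zeros((NUM_LANES, NUM_TYPES))      # rows: lanes, columns: vehicle types

def record_training_sample(lane, vehicle_type, measured_size):
    """During training, store the user-labeled size for this lane and type."""
    type_size[lane, vehicle_type - 1] = measured_size

def classify_by_size(lane, measured_size):
    """After training, return the type whose stored size for this lane is closest (assumed rule)."""
    return int(np.argmin(np.abs(type_size[lane] - measured_size))) + 1
```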
[0044] When detecting vehicles, variations in the mounting of the camera may result in variations in the sizes of vehicle regions in an image. As a result, the detected length of a vehicle may differ depending on the angle and/or height at which the camera is mounted. To compensate for variations in the detected length of a vehicle that may result from variations in the mounting of the camera, projective geometry may be used. Projective geometry allows the projected vehicle size to be adjusted to ensure more accurate comparisons among vehicle classes.

[0045] With reference to FIG. 17, a lane area view 200 is depicted. The camera or other image capture device is located near the apex of the triangular field of view for a lane. Each lane has a corresponding lane view; however, the orientation of the image capture device with respect to each lane is different. The width of the field of view for a lane is measured in the X direction, and the length of an object in the field of view for a lane is measured in the Y direction. For each classification, a standard object used to detect that classification is defined as an object having a length L detected in an object space having a length L and a width W, as shown in FIG. 17. Width W is measured at the end of the length L furthest from the image capture device. This standard classification object space is selected to be one half of the field of view for the lane. Thus, the area of the two regions in the field of view 200 that are outside the standard object space defined by L and W is the same as the area in the standard object space. To compensate for geometric distortions arising from an object being detected by an image capture device that does not capture a field of view exactly as the one used for the standard classification object, the following projective geometry correction may be used. The length L' of the detected object is used to identify a width W' for an object space associated with the detected object. The L' and W' parameters for the detected object space may be used to calculate a correction coefficient, C. The correction coefficient may be described by C = W / W'. Solving for the coefficient that adjusts the length L' in the detected object space to the length L the object would have in the standard object space yields L = (W / W') * L' = C * L'. Thus, the ratio of the standard object space width to the detected object space width, times the detected object length, provides the detected object length in the standard object space. Comparing this length to the various classification lengths provides a better indicator of the classification to which the detected object belongs. More specifically, if the projected vehicle size meets any of a set of criteria that compare it with the stored class sizes (where the x's represent the class number), the user is presented with the current unmodified frame and asked to specify a corresponding class. If the user assigns a car size with a particular class number and that car size is less than the class size below it, the user is prompted to fix the contradiction.
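The projective correction itself reduces to a one-line computation; a hedged sketch is shown below, with an assumed helper for the class-ambiguity check (the exact numeric criteria are not recoverable from the published text, so a plus-or-minus one-third tolerance is used purely as an illustrative assumption).

```python
def corrected_length(detected_length, detected_width, standard_width):
    """L = C * L' with C = W / W': project the detected length into the standard object space."""
    return (standard_width / detected_width) * detected_length

def needs_user_confirmation(projected_size, class_sizes, tolerance=1.0 / 3.0):
    """Assumed check: ask the user when the projected size is not within the tolerance
    band of any stored class size (the patent's exact criteria differ in detail)."""
    return all(abs(projected_size - s) > tolerance * s for s in class_sizes if s > 0)
```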
[0046] Once the training period ends, any entries in the type size matrix that are zero are automatically computed based on neighboring values. If two entries in a column are defined, the third is determined to be their average. If there are fewer than two entries in a column, the values across the rows are evaluated: the second entry is approximately twice the first entry, the third entry is one and a half times the second entry, and the fourth entry is one and a quarter times the third entry.
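The fill-in of missing matrix entries described above can be sketched as follows, assuming three lanes and four types as in the surrounding text; the function name and NumPy representation are assumptions.

```python
import numpy as np

ROW_RATIOS = {1: 2.0, 2: 1.5, 3: 1.25}   # type 2 = 2 x type 1, type 3 = 1.5 x type 2, type 4 = 1.25 x type 3

def fill_missing_sizes(type_size):
    """Fill zero entries of the lane-by-type size matrix after training."""
    m = type_size.astype(float).copy()
    lanes, types = m.shape
    for t in range(types):                       # columns: one vehicle type across lanes
        col = m[:, t]
        if np.count_nonzero(col) == lanes - 1:   # two of three lane entries defined
            col[col == 0] = col[col > 0].mean()  # the missing one becomes their average
    for lane in range(lanes):                    # otherwise, propagate along the row
        for t in range(1, types):
            if m[lane, t] == 0 and m[lane, t - 1] > 0:
                m[lane, t] = ROW_RATIOS[t] * m[lane, t - 1]
    return m
```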
While the program is running, the user is allowed to change any of the classification sizes. During the training period, the class sizes are not counted because an accurate count cannot be taken while the user is specifying the class for a specific car size. The vehicles that appear during the training period are not counted. The training period may be as short as a couple of minutes. Compared to the longer time needed to install and test implementations of previously known vehicle monitoring and tracking processes, the dynamic content-based segmentation disclosed herein is very computationally efficient.
[0047] After the training period ends and counting commences, a log file is updated every time a car is detected. The file stores the time of the frame, the frame number in which a car is counted, and the vehicle's classification. The time stamp is also user specified and is set prior to program execution. A summary of the types and their respective car counts is displayed in a user-friendly interface.
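One way to write such a log record is sketched below; the CSV layout, field order, and function name are assumptions for illustration.

```python
import csv
import datetime

def log_detection(path, frame_number, vehicle_type, start_time, fps=30.0):
    """Append one record: frame time (derived from the user-supplied start time),
    frame number, and the vehicle's classification."""
    frame_time = start_time + datetime.timedelta(seconds=frame_number / fps)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([frame_time.isoformat(), frame_number, vehicle_type])
```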
[0048] This system can work robustly under most circumstances. It correctly counts and tracks vehicles even when multiple vehicles having similar characteristics occlude one another. The system requires only a minimal set of initial parameters to calibrate the data obtained from the camera. The resolution requirement for this system is relatively low. It can be used in real-time situations and offline applications. It is very flexible in camera selection and camera mounting. Compared to other approaches, such a system is very economical to build and is very effective and accurate. The system may be used in traffic monitoring, vehicle tracking, vehicle classification, road pavement research, and benchmarking of other vehicle classification/tracking methods. This system can also be adapted easily to monitor other moving objects, such as human beings, animals, airplanes, military vehicles (such as tanks), missiles, and any other moving targets.
[0049] Those skilled in the art will recognize that numerous modifications can be made to the specific implementations described above. While the embodiments above have been described with reference to specific applications, embodiments addressing other applications may be developed without departing from the principles of the invention described above. Therefore, the following claims are not to be limited to the specific embodiments illustrated and described above. The claims, as originally presented and as they may be amended, encompass variations, alternatives, modifications, improvements, equivalents, and substantial equivalents of the embodiments and teachings disclosed herein, including those that are presently unforeseen or unappreciated, and that, for example, may arise from applicants/patentees and others.

Claims

What is claimed is:
1. A method for monitoring and tracking vehicle traffic, the method comprising: receiving a plurality of video frames of traffic data; subtracting a background image from each video frame of the plurality to form a segmented image having at least one object region; connecting adjacent object regions in each segmented image into a single object region in response to the adjacent object regions having centers in a common lane region of the video frame, distances between the centers that are less than a threshold distance, and detection of a similar characteristic in proximate boundary portions; separating object regions in each segmented image that are located in multiple lane regions of the segmented image into multiple object regions corresponding to the lane regions; and subtracting object regions from the segmented image that are smaller than a threshold size.
2. The method of claim 1, the background image subtraction further comprising: subtracting the background image from each video frame of the plurality to form a preliminary segmented image; and comparing the preliminary segmented images to a threshold to detect object regions for each of the plurality of video frames.
3. The method of claim 1, further comprising: tracking a position of at least one single vehicle region in successive video frames.
4. The method of claim 3, the position tracking further comprising: tracking a position of a center of the at least one single vehicle region in successive video frames.
5. The method of claim 4, further comprising: counting single vehicle regions having a center that crosses a counting line.
6. The method of claim 1, further comprising: classifying at least one single vehicle region as a vehicle type.
7. The method of claim 6, the classification further comprising: classifying the at least one single vehicle region based on a geometric characteristic of the single vehicle region.
8. The method of claim 1, further comprising: downsampling the plurality of video frames.
9. The method of claim 1, further comprising: filtering noise from the plurality of video frames before segmenting.
10. A vehicle monitoring and classifying system comprising: a background subtractor configured to subtract a background image from a plurality of video frames to form a plurality of segmented images; a region connector configured to connect adjacent object regions in the plurality of segmented images into a single region, the region connector computing centers in a common lane region of the segmented image, measuring distances between the centers, comparing the measured distances to a threshold distance, and detecting a similar characteristic in proximate boundary portions; a region separator configured to separate object regions in the plurality of segmented images that are located in multiple lane regions of the segmented image into sub-regions corresponding to the lane regions and to subtract sub-regions from a respective segmented image that are smaller than a threshold size; and a labeler configured to label the combined regions and the remaining sub-regions as single vehicle regions.
11. The system of claim 10, the background subtractor being configured to subtract the background image from each video frame of the plurality to form a plurality of preliminary segmented images and to compare the plurality of preliminary segmented images to a threshold to form the plurality of segmented images.
12. The system of claim 10, further comprising: a region tracker configured to track a position of at least one single vehicle region in successive video frames.
13. The system of claim 12, the region tracker being configured to track a position of a center of the at least one single vehicle region in successive video frames.
14. The system of claim 13, further comprising: a region counter configured to count single vehicle regions that cross a counting line.
15. The system of claim 10, further comprising: a single vehicle region classifier configured to classify at least one single vehicle region as a vehicle type.
16. The system of claim 15, the classifier being configured to classify the at least one single vehicle region based on a geometric characteristic of the single vehicle region.
17. The system of claim 10, further comprising: an image preprocessor configured to downsample the plurality of video frames.
18. The system of claim 17, the image preprocessor being configured to filter noise from the plurality of video frames.
19. The system of claim 15, further comprising: a video data receiver for receiving real-time video traffic image data; the classifier being configured to classify single vehicle regions generated from the real-time video traffic image data.
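For reference, the sketch below walks through the segmentation steps recited in claim 1 on a single grayscale frame: background subtraction and thresholding, connecting object regions whose centers share a lane and lie within a threshold distance, separating regions that span multiple lanes, and discarding regions below a size threshold. The lane masks, threshold values, and function names are assumptions, and the similar-characteristic test on proximate boundary portions is omitted for brevity.

```python
import numpy as np
from scipy import ndimage


def segment_vehicles(frame, background, diff_threshold, min_size,
                     lane_masks, max_center_distance):
    """Return one boolean mask per candidate single vehicle region."""
    # 1. Subtract the background and threshold to obtain object regions.
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    segmented = diff > diff_threshold

    # 2. Label connected object regions and locate their centers.
    labels, n = ndimage.label(segmented)
    ids = list(range(1, n + 1))
    centers = dict(zip(ids, ndimage.center_of_mass(segmented, labels, ids)))

    def lane_of(center):
        row, col = int(round(center[0])), int(round(center[1]))
        for k, mask in enumerate(lane_masks):
            if mask[row, col]:
                return k
        return None

    # 3. Connect regions whose centers share a lane and are close together.
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            la, lb = lane_of(centers[a]), lane_of(centers[b])
            dist = np.hypot(centers[a][0] - centers[b][0],
                            centers[a][1] - centers[b][1])
            if la is not None and la == lb and dist < max_center_distance:
                labels[labels == b] = a

    # 4. Split regions lane by lane and 5. drop pieces below the size threshold.
    vehicle_masks = []
    for lab in np.unique(labels):
        if lab == 0:
            continue
        region = labels == lab
        for mask in lane_masks:
            sub = region & mask
            if sub.sum() >= min_size:
                vehicle_masks.append(sub)
    return vehicle_masks
```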
PCT/US2007/021316 2006-12-19 2007-10-04 Real-time dynamic content based vehicle tracking, traffic monitoring, and classification system WO2008088409A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US87577006P 2006-12-19 2006-12-19
US60/875,770 2006-12-19

Publications (2)

Publication Number Publication Date
WO2008088409A2 true WO2008088409A2 (en) 2008-07-24
WO2008088409A3 WO2008088409A3 (en) 2008-11-20

Family

ID=39636513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/021316 WO2008088409A2 (en) 2006-12-19 2007-10-04 Real-time dynamic content based vehicle tracking, traffic monitoring, and classification system

Country Status (1)

Country Link
WO (1) WO2008088409A2 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5809161A (en) * 1992-03-20 1998-09-15 Commonwealth Scientific And Industrial Research Organisation Vehicle monitoring system
US6121989A (en) * 1996-05-15 2000-09-19 Samsung Electronics Co., Ltd. Transparency having printing surface discriminating area method for discriminating printing surface of transparency in thermal printer and device appropriate therefor
US6160494A (en) * 1996-07-26 2000-12-12 Sodi; Paolo Machine and method for detecting traffic offenses with dynamic aiming systems

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011020997A1 (en) * 2009-08-17 2011-02-24 Pips Technology Limited A method and system for measuring the speed of a vehicle
US8964031B2 (en) 2009-08-17 2015-02-24 3M Innovative Properties Company Method and system for measuring the speed of a vehicle
RU2543947C2 (en) * 2009-08-17 2015-03-10 Зм Инновэйтив Пропертиз Компани Vehicle speed measurement method and system
WO2013053159A1 (en) * 2011-10-09 2013-04-18 青岛海信网络科技股份有限公司 Method and device for tracking vehicle
CN102568206A (en) * 2012-01-13 2012-07-11 大连民族学院 Video monitoring-based method for detecting cars parking against regulations
FR3008214A1 (en) * 2013-07-02 2015-01-09 Rizze DEVICE FOR A VIDEO SURVEILLANCE SYSTEM OF ROAD TRAFFIC TO DETECT THE PRESENCE OF SUSPECTED ELEMENTS ON THE ROAD
CN106652462A (en) * 2016-09-30 2017-05-10 广西大学 Illegal parking management system based on Internet
CN113269004A (en) * 2020-02-14 2021-08-17 富士通株式会社 Traffic counting device and method and electronic equipment
CN113269004B (en) * 2020-02-14 2024-03-05 富士通株式会社 Traffic counting device and method and electronic equipment

Also Published As

Publication number Publication date
WO2008088409A3 (en) 2008-11-20

Similar Documents

Publication Publication Date Title
Zhu et al. VISATRAM: A real-time vision system for automatic traffic monitoring
US10943131B2 (en) Image based lane marking classification
Atkočiūnas et al. Image processing in road traffic analysis
Sommer et al. A survey on moving object detection for wide area motion imagery
Tseng et al. Real-time video surveillance for traffic monitoring using virtual line analysis
KR100377067B1 (en) Method and apparatus for detecting object movement within an image sequence
Huang et al. A vision-based vehicle identification system
US8798314B2 (en) Detection of vehicles in images of a night time scene
CN101030256B (en) Method and apparatus for cutting vehicle image
JP2917661B2 (en) Traffic flow measurement processing method and device
WO2003001473A1 (en) Vision-based collision threat detection system_
CN108052904B (en) Method and device for acquiring lane line
JP2003067752A (en) Vehicle periphery monitoring device
WO2008088409A2 (en) Real-time dynamic content based vehicle tracking, traffic monitoring, and classification system
CN111081031B (en) Vehicle snapshot method and system
JP6678552B2 (en) Vehicle type identification device and vehicle type identification method
KR101134857B1 (en) Apparatus and method for detecting a navigation vehicle in day and night according to luminous state
KR101026778B1 (en) Vehicle image detection apparatus
KR101089029B1 (en) Crime Preventing Car Detection System using Optical Flow
Zhu et al. A real-time vision system for automatic traffic monitoring based on 2D spatio-temporal images
CN114049306A (en) Traffic anomaly detection system design based on image camera and high-performance display card
Sharma et al. Automatic vehicle detection using spatial time frame and object based classification
Munajat et al. Vehicle detection and tracking based on corner and lines adjacent detection features
Siyal et al. Image processing techniques for real-time qualitative road traffic data analysis
Kurniawan et al. Image processing technique for traffic density estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07852523

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07852523

Country of ref document: EP

Kind code of ref document: A2