WO1996005569A1 - Airborne video identification system and method - Google Patents


Info

Publication number
WO1996005569A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
database
image data
image
vessel
Prior art date
Application number
PCT/US1995/010027
Other languages
French (fr)
Inventor
Brent E. Anderson
Robert L. Layton
Original Assignee
Anderson Brent E
Layton Robert L
Priority date
Filing date
Publication date
Application filed by Anderson Brent E, Layton Robert L filed Critical Anderson Brent E
Priority to AU32780/95A priority Critical patent/AU3278095A/en
Publication of WO1996005569A1 publication Critical patent/WO1996005569A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the database is a subset of a master database that is centrally maintained and updated. Prior to a reconnaissance flight, the subset database is loaded into a removable disk pack for use in the airborne video identification system. During the flight the subset database may be updated with images and associated data pertaining to heretofore unidentified objects. The subset database may also be updated to reflect changes observed in previously identified objects. When the flight is completed, the removable disk pack including the updates and changes is removed from the aircraft and selectively reloaded onto the master database.
  • Fig. 1 is a simplified pictorial block diagram showing elements of an airborne video identification system of this invention.
  • Fig. 2 is a simplified pictorial block diagram showing a centralized master database development, maintenance and updating system of this invention.
  • Fig. 3 is a flow chart showing the steps employed by the system of Fig. 1 to uniquely identify a remote object.
  • Fig. 4 is a flow chart showing the steps employed by the system of Fig. 2 to generate, maintain, and update the master database.
  • Fig. 1 shows an airborne video identification system ("AVIDS") 10 of this invention that is useful for uniquely identifying seagoing vessels such as a trawler 12.
  • AVIDS 10 includes a video camera 14, a camera mount 16, a computer 18 for running application software that performs image enhancement, identification, and retrieval of images from a database that is stored in a removable media hard disk drive 20.
  • Video camera 14 is preferably a commercially available video camcorder such as a model L1 manufactured by Canon Corporation, Tokyo, Japan, and which has a
  • Video camera 14 preferably views trawler 12 through a 15:1 600-millimeter zoom lens 22, or an optional lens such as a 3,000-millimeter fixed telephoto lens.
  • Video data acquired by video camera 14 are stored in a standard analog format on a cassette tape 24.
  • the preferred airborne application may employ both high and low altitude reconnaissance, in which case AVIDS 10 may have two cameras with different lenses, or one camera with a turret of lenses.
  • Camera mount 16 is mounted to a platform 26, such as an aircraft bulkhead, truck bed, seagoing vessel, or a human to provide operational stability and a range of azimuth and elevation pointing angles for video camera 14.
  • Camera mount 16 is preferably a hand-operated mount with three axes of freedom.
  • Camera mount 16 allows video camera 14 to be pointed at least plus and minus 90 degrees in elevation and at least plus and minus 130 degrees in azimuth relative to a horizontal platform.
  • Camera mount 16 may optionally include encoders (not shown) for transmitting angular information through a mount interface 28 and a system bus 30 to computer 18.
  • Computer 18 is preferably a commercially available personal computer such as a 486 DX2-66 that is configured with a video input board 32, such as a "Video Blaster" from Creative Laboratories; video frame digitizing and storage software; a 16-Megabyte RAM memory 34; 1 Megabyte of cache memory (not shown); a 512-Megabyte hard disk drive such as a model CP30540 manufactured by Conner Peripherals; removable media hard disk drive 20, such as a Bernoulli 150-Megabyte removable cartridge drive manufactured by Iomega; an operating system (not shown) such as Microsoft® DOS 6.2 with Windows™; a relational database program (not shown) such as Microsoft® Access™; a software development system such as Microsoft® Visual
  • Computer 18 runs application software (not shown) that performs image preprocessing operations such as video frame grabbing, edge detection, noise suppression, and object range determining; graphic user interface operations for Windows™ environment control and image coordinate selection for calculating object dimensions; image matching operations that apply algorithms to the video image for correlation with and selection of candidate database images; and image database operations such as indexing, storing, and data retrieval.
  • image preprocessing entails using video input board 32 with which an operator scans through and "grabs" selected video frames originating from real-time video from camera 14 or prerecorded video from cassette tape 24.
  • Operator selected video detection thresholds reduce video background noise to enhance selected images, and edge detection algorithms prepare the data for subsequent processing.
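The noise-thresholding and edge-detection steps above can be sketched as follows. This is a minimal Python illustration, not the patent's actual software; the 3×3 Sobel operator stands in for whatever edge-detection algorithm the system used, and the toy frame values are hypothetical.

```python
import numpy as np

def preprocess(frame, threshold=40):
    """Suppress pixels below the operator-selected threshold, then
    apply a 3x3 Sobel edge detector to the grabbed video frame."""
    img = np.where(frame >= threshold, frame, 0).astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    edges = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            edges[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return edges

# Toy 5x5 frame: dim background on the left, bright object on the right.
frame = np.array([[10, 10, 200, 200, 200]] * 5, dtype=float)
e = preprocess(frame)   # strong response along the object's left edge
```

The thresholded-then-differentiated result is what later stages (outline tracing, correlation) would consume.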
  • image measurement entails using GUI 38 to drag and size a box 42 tangentially surrounding the image of trawler 12 displayed on display 36 and using a graphic cursor 44 to select key points, such as the prow, stern, and mast top(s).
  • Image coordinates selected by the box and graphic cursor operations provide computer 18 with data required for subsequent beam and length calculations.
  • the length of an observed object can be determined by knowing the focal length and aperture of lens 22 on camera 14, the relative length of trawler 12 as viewed on display 36, and the range to the object.
  • the range to the object can be determined by use of optional on-board radar or laser range finder systems.
  • the range may also be calculated from camera altitude and camera sighting angle data relative to the zenith, which data are either manually or automatically entered into computer 18.
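The dimension and range calculations described above reduce to similar-triangle optics and flat-earth trigonometry. The sketch below assumes a pinhole-camera model and a sighting angle measured from the vertical; the numeric values are illustrative only, not figures from the patent.

```python
from math import cos, radians

def slant_range(altitude_m, sight_angle_deg):
    """Flat-earth slant range from camera altitude and the sighting
    angle measured from the vertical (0 degrees = straight down)."""
    return altitude_m / cos(radians(sight_angle_deg))

def object_length(image_len_mm, focal_len_mm, range_m):
    """Pinhole-camera similar triangles:
    object length / range = image length on sensor / focal length."""
    return image_len_mm / focal_len_mm * range_m

r = slant_range(altitude_m=1000.0, sight_angle_deg=60.0)             # 2000 m
vessel_len = object_length(image_len_mm=7.5, focal_len_mm=600.0, range_m=r)
```

With the image coordinates picked via the box and cursor operations, the same proportion yields beam as well as length.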
  • Data selection entails use of GUI 38 to pick from among appropriate dialogue buttons displaying basic descriptive terms on display 36, such as new/old or wood/metal.
  • Data entry entails entering with keyboard 40 camera elevation and azimuth angles and aircraft orientation angles.
  • Automated entry of the foregoing angle data may optionally entail the above-described encoders and mount interface 28, and use of a navigation data interface 46 to receive aircraft heading, camera angle data, and altitude.
  • Image matching operations entail using data elements from the above-described data, such as beam, length, number of masts, material, and age, as criteria in a database search algorithm.
  • the search data are weighted so that absence of a data element does not preclude a successful match.
  • the search algorithm employs "fuzzy logic" techniques that allow for uncertainty and undependability in the data elements.
  • the fuzzy logic algorithm receives a set of image data values for an unidentified object, such as trawler 12, and compares them with a set of fuzzy membership functions corresponding to each image stored in the database.
  • the uncertainty of each data element is encoded as an area of the corresponding fuzzy membership function.
  • a logical AND operation is encoded as a weighted sum of value function returns at the input data value for all data elements.
  • the weighted sum value represents a degree of match between the unknown object image and the stored image.
  • Weight values are determined by conventional least mean squares adaptive techniques and are entered in the stored image database between reconnaissance flights.
  • the above-described identification method may be augmented by representing the data elements of an unknown object by measurement distributions surrounding estimated measurement values, the widths of which are determined by an estimated uncertainty of the associated measurement. AVIDS 10 may, thereby, consider visibility, distance to object, and other factors that affect the reliability of measured object characteristics.
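A minimal sketch of such a fuzzy matching filter follows. It uses Gaussian membership functions whose widths encode measurement uncertainty and a weighted sum as the "fuzzy AND", so an absent data element cannot veto a match. The field names, weights, and vessel figures are hypothetical stand-ins, not values from the patent.

```python
from math import exp

def membership(x, center, width):
    """Gaussian fuzzy membership; the width encodes the estimated
    uncertainty of the stored data element."""
    return exp(-((x - center) / width) ** 2)

def match_score(observed, record, weights):
    """Weighted-sum 'fuzzy AND' over data elements. An element absent
    from the observation is skipped, so it cannot preclude a match."""
    num = den = 0.0
    for key, (center, width) in record.items():
        if key not in observed:
            continue
        w = weights.get(key, 1.0)
        num += w * membership(observed[key], center, width)
        den += w
    return num / den if den else 0.0

# Hypothetical stored vessel record: (expected value, uncertainty width).
stored = {"length_m": (25.0, 3.0), "beam_m": (7.0, 1.0), "masts": (2.0, 0.5)}
obs = {"length_m": 24.0, "beam_m": 7.0}          # mast count not measured
score = match_score(obs, stored, weights={"length_m": 2.0, "beam_m": 1.0})
```

A score near 1.0 marks a candidate record worth presenting to the operator; widening a membership function is how a foggy, long-range measurement is discounted rather than discarded.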
  • direct adaptive image matching may be performed.
  • a single layer network, or "wavenet, " is created over a cascaded wavelet basis in which each image stored in the database has a corresponding output node.
  • the weights of the stored images are determined adaptively using a least mean squares or minimum identification error method.
  • This is a computationally intensive "learning phase" that is preferably accomplished at a centralized location between reconnaissance flights.
  • unclassified and/or unidentified object images are received by the trained network, and a potential match to each stored image is indicated by a level of output node response for each stored image.
  • the image match values may be added to the above-described data element-based matching method as an additional independent datum for each image.
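The wavenet idea can be illustrated roughly as below: a one-level Haar decomposition feeds a single-layer network with one output node per stored image, with weights adapted offline by a least-mean-squares rule. The 1-D "images", learning rate, and epoch count are illustrative assumptions, not the patent's cascaded-wavelet implementation.

```python
import numpy as np

def haar_coeffs(signal):
    """One level of Haar wavelet decomposition: pairwise averages
    followed by pairwise half-differences (details)."""
    s = np.asarray(signal, float).reshape(-1, 2)
    return np.concatenate([s.mean(axis=1), (s[:, 0] - s[:, 1]) / 2.0])

class Wavenet:
    """Single-layer network over a wavelet basis: one output node per
    stored image, weights adapted offline by least-mean-squares."""
    def __init__(self, n_inputs, n_images, lr=0.01):
        self.W = np.zeros((n_images, n_inputs))
        self.lr = lr

    def forward(self, coeffs):
        return self.W @ coeffs          # one match response per stored image

    def lms_step(self, coeffs, target):
        err = target - self.forward(coeffs)
        self.W += self.lr * np.outer(err, coeffs)   # LMS weight update

# Two hypothetical stored "images" (1-D stand-ins for compressed frames).
stored = [np.array([9.0, 9.0, 1.0, 1.0]), np.array([1.0, 1.0, 9.0, 9.0])]
net = Wavenet(n_inputs=4, n_images=2)
for _ in range(200):                     # offline "learning phase"
    for i, img in enumerate(stored):
        net.lms_step(haar_coeffs(img), np.eye(2)[i])

resp = net.forward(haar_coeffs(np.array([9.0, 8.0, 1.0, 2.0])))  # noisy view of image 0
```

Each output node's response is the "additional independent datum" that can be folded into the data-element match described above.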
  • Image database operations entail indexing, storing, and data retrieval on the database stored in hard disk drive 20.
  • Database operations also entail centrally generating, maintaining, and updating a master database in which each object, such as trawler 12, has a record that includes textual fields that store characteristic features (beam, length, masts, height, material, age), textual fields that store identification data (name, owner, year built), and a field pointing at a compressed graphical image of each object.
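A master-database record of the kind described might be modeled as follows; every field name and value here is a hypothetical placeholder chosen for illustration, not the patent's schema.

```python
from dataclasses import dataclass

@dataclass
class VesselRecord:
    """One master-database record: characteristic-feature text fields,
    identification text fields, and a pointer to a compressed image."""
    name: str            # identification data
    owner: str
    year_built: int
    length_m: float      # characteristic features
    beam_m: float
    masts: int
    material: str
    image_path: str      # field pointing at the compressed graphical image

trawler = VesselRecord(name="Sea Mist", owner="Acme Fisheries", year_built=1987,
                       length_m=25.0, beam_m=7.0, masts=2, material="steel",
                       image_path="images/sea_mist.jpg")
```

Keeping only a path to the compressed image in the record matches the scheme below, where candidate images are fetched and decompressed from disk on demand.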
  • a selected subset of the master database is extracted and stored in removable media hard disk drive 20 prior to each reconnaissance flight. During the flight the subset database may be updated with images and associated data pertaining to heretofore unidentified objects. The database may also be updated to reflect changes observed in previously identified objects.
  • the removable medium including the updated and changed subset database is removed from the aircraft for updating the master database.
  • the identification process entails the presentation of images from the subset database that most strongly correlate to video images that the operator has chosen for identification. Based on a visual inspection of the video images and associated data, the operator can override the system in identifying the images.
  • the only images stored in memory 34 are video images received by video camera 14 and directed to video input board 32.
  • computer 18 examines the field pointing at the compressed graphical image of each candidate object and retrieves and decompresses the candidate images from hard disk drive 20.
  • Conventional image compression and decompression techniques such as JPEG are employed to provide image data compression.
  • Cassette tape 24 stores image (and audio) data for an entire flight.
  • Fig. 2 shows a preferred central data processing system 50 suitable for generating, maintaining, and updating the above-described master database.
  • System 50 includes a computer 52, a system bus 53, a removable media hard disk drive 54, a memory 56, a display 58, a keyboard 60, and a GUI 62, all of which are substantially identical to those in AVIDS 10.
  • System 50 further includes a mass storage system 64 sized to store the master database and its associated images, and a document scanner 66 with an associated scanner controller 68.
  • System 50 functions to generate, maintain, and update the master database, perform database indexing, check data integrity, compress and decompress image data, and extract the subset database for use in AVIDS 10.
  • Records in the master database are generated by employing document scanner 66 to digitize a document 70, such as selected pages from JANE'S FIGHTING SHIPS, or by receiving digital data stored on a removable medium compatible with removable media in hard disk drive 54.
  • Associated textual data entry and image editing are performed by employing keyboard 60 and GUI 62.
  • Trawler 12 may be but one trawler in a fleet of similar trawlers built by a particular shipyard. Its uniqueness may be identified by virtue of a particular image criteria such as mast height and configuration, markings, unique deck equipment, port hole configuration, or by a particular textual data criteria such as expected operating location, sighting history, or identifiable name.
  • Fig. 3 shows the processes employed by AVIDS 10 to uniquely identify an unidentified seagoing vessel, for example trawler 12.
  • An object acquisition process block 100 entails an operator pointing video camera 14 toward trawler 12 such that video input board 32 and cassette recorder 24 acquire a video data stream.
  • the video data stream is stored in memory 34 as a sequence of video frames.
  • a selecting process block 102 entails operator viewing of the stored video frames and selection of a viable frame of video data showing trawler 12.
  • a displaying process block 104 entails presenting the selected frame of video data as a freeze frame on display 36.
  • a posting process block 106 associates the acquisition time and geographic coordinates with the selected freeze frame of trawler 12. Posting process block 106 is carried out automatically if AVIDS 10 has navigation interface 46, or manually by operator entry through keyboard 40.
  • An entering process block 108 entails operator entry of observable relevant object data such as number of masts, material (wood/steel) , age (old/new) , and name.
  • a graphic interface process block 110 entails operator selection with GUI 38 of displayed freeze frame image end points.
  • An entering process block 112 entails operator entry of platform 26 altitude, video camera 14 elevation angle, or slant range to trawler 12, or AVIDS 10 enters the required data automatically by means of navigation data interface 46, camera mount 16 encoders, and mount interface 28.
  • a calculating process block 114 entails computer 18 calculating estimated object dimensions, such as the length and beam of trawler 12, from the selected image coordinates and range data.
  • a video enhancement process block 116 entails operator selection of at least one of a video threshold level, an edge detection technique, and a wavelet compression algorithm to reduce freeze frame image clutter and background noise.
  • a tracing process block 118 entails operator use of GUI 38 to trace with line segments the freeze frame image outline of trawler 12.
  • the line segment outline image is stored for subsequent correlation with object images in the database stored in hard disk drive 20.
  • a querying process block 120 entails operator query of the database with the estimated object length calculated by calculating process block 114 and data entered during entering process block 108.
  • An optional neural net applying process block 121 employs an adaptive wavelet algorithm having wavelet decomposition coefficients that are adapted by a back propagation neural net. Resulting data are compared with a prestored database consisting of images of known objects that were compressed in the same manner.
  • a searching process block 122 employs computer 18 to search the database by using object characteristics and data filters from querying process block 120 to find and retrieve the most likely candidate object records.
  • a displaying process block 124 simultaneously displays the freeze frame image and retrieved candidate images on display 36.
  • a graphic editing process block 126 entails operator positioning and resizing (panning and zooming) of the retrieved candidate images for overlaying with the freeze frame image.
  • An identifying decision block 128 entails operator selection of the retrieved candidate image that most closely matches the freeze frame image by comparing the video images, outline images, and associated textual data characteristics. The operator may decide that the freeze frame image is of an unidentified object that requires addition to the subset database.
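Taken together, process blocks 100 through 128 amount to an operator-in-the-loop pipeline. The sketch below wires the stages together with trivial stand-in functions; all names and values are hypothetical.

```python
def identify_object(frames, pick_frame, enhance, measure, query_db, operator_pick):
    """Fig. 3 in miniature: the operator selects a frame (blocks 100-102),
    AVIDS enhances and measures it (blocks 110-116), the database is
    queried (blocks 120-122), and the operator decides (blocks 124-128)."""
    frame = pick_frame(frames)
    clean = enhance(frame)
    dims = measure(clean)
    candidates = query_db(dims)
    return operator_pick(candidates)

# Trivial stand-ins exercising the flow end to end.
result = identify_object(
    frames=[[1, 2], [3, 4]],
    pick_frame=lambda fs: fs[0],
    enhance=lambda f: f,
    measure=lambda f: {"length_m": sum(f)},
    query_db=lambda d: ["Sea Mist"] if d["length_m"] > 2 else [],
    operator_pick=lambda c: c[0] if c else None,
)
```

The structure makes explicit that the computer narrows candidates while the final unique identification remains an operator decision.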
  • Fig. 4 shows the processes employed by central data processing system 50 (Fig. 2) to generate, maintain, and update the master database in association with the subset database. Determination of adaptive filter coefficients is accomplished at central data processing system 50 and entered into the master database stored in mass storage system 64.
  • a digitizing process block 130 entails document scanner 66 scanning a document 70 to capture image and textual data associated with an object, for example trawler 12, such that a master database record of the object is generated for storage in mass storage system 64. Digitizing process block 130 is repeated a sufficient number of times to generate a comprehensive master database of objects. Digitizing process block 130 also entails record generation by transfer of digital data from a medium in hard disk drive 54 and operator manual data entry.
  • An extracting process block 132 utilizes querying and searching processes such as those of process blocks 120 and 122 to extract from the master database a subset database that primarily includes records of objects that are likely to be detected and acquired for identification during a particular reconnaissance flight.
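Extracting a flight-specific subset is essentially a filtered query over the master records. A toy sketch follows, with hypothetical field names and data:

```python
def extract_subset(master, region, vessel_types):
    """Extract records likely to be sighted on a given flight by
    filtering the master database on operating region and type."""
    return [rec for rec in master
            if rec["region"] == region and rec["type"] in vessel_types]

master = [
    {"name": "Sea Mist", "region": "N. Pacific", "type": "trawler"},
    {"name": "Gull",     "region": "Baltic",     "type": "ferry"},
    {"name": "Petrel",   "region": "N. Pacific", "type": "trawler"},
]
subset = extract_subset(master, "N. Pacific", {"trawler"})  # loaded onto the disk pack
```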
  • a loading process block 134 loads the subset database records extracted by process block 132 onto the medium in removable hard disk drive 54.
  • An installing process block 136 entails removing the medium containing the subset database from removable hard disk drive 54 and installing it in hard disk drive 20 of AVIDS 10 for use during a subsequent reconnaissance flight.
  • An identifying process block 138 identifies objects and updates and/or adds records to the subset database during the reconnaissance flight in accordance with the above-described process and decision blocks 100 to 128 shown in Fig. 3.
  • a removing process block 140 entails removing the medium from hard disk 20 in AVIDS 10 and loading the medium into hard disk drive 54 of central data processing system 50.
  • a retrieving process block 142 entails the operator selectively retrieving from the subset database images and associated data records revised and/or added during the reconnaissance flight.
  • the operator is accorded flexible record selection based on object location, time of acquisition, name, type, or associated data.
  • An updating process block 144 employs computer 52 to update master database records stored in mass storage system 64 with selected records and associated digital images retrieved from the subset database in hard disk drive 54.
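The selective reload of flight updates into the master database can be sketched as a keyed merge; again, the record contents are illustrative placeholders.

```python
def reload_subset(master, subset, selected_names):
    """Selectively reload flight updates: records the operator selects
    replace (or are added alongside) their master-database counterparts."""
    by_name = {rec["name"]: rec for rec in master}
    for rec in subset:
        if rec["name"] in selected_names:
            by_name[rec["name"]] = rec      # update existing or add new
    return list(by_name.values())

master = [{"name": "Sea Mist", "sightings": 3}]
subset = [{"name": "Sea Mist", "sightings": 4},   # changed during the flight
          {"name": "Petrel",   "sightings": 1}]   # newly identified object
master = reload_subset(master, subset, {"Sea Mist", "Petrel"})
```

Keying the merge on a unique identifier is what lets both "updated" and "newly identified" records flow back in a single pass.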
  • an enhanced capability video identification system may not require operator interaction during the image acquisition and identification process.
  • Use of spatially cascading adaptive wavelet compressions on which a single detection back propagation neural net operates may isolate an object from the background clutter and noise of a viable freeze frame image.
  • Edge detection techniques may be used to determine identifiable extrema.
  • analysis of frame sequence aspects may allow the enhanced capability system to select optimal video freeze frames.
  • AVIDS 10 and central data processing system 50 may have alternate embodiments.
  • hard disk drives 20 and 54 may be replaced by removable media memory devices employing magnetic tape, floppy disks, optical disks, or cartridge disk packs.
  • Displays 36 and 58 may be either color or monochrome CRT, LCD, plasma, or other display types of any usable resolution.
  • GUIs 38 and 62 may include a mouse, joystick, graphic tablet, or cursor keys on keyboards 40 or 60.
  • Computers 18 and 52 may include any computing platform capable of executing the applications programs and controlling the components of this invention and may run under control of any suitable operating system with or without a Windows-like environment.
  • Scanner 66 may be a video camera-based frame grabber system, an optical line scanner, or any other scanning means suitable for digitizing printed images at a resolution suitable for use in AVIDS 10. Scanner 66 may optionally have text scanning and recognizing capability useful for entry of relevant textual data in master database records. Finally, video camera 14 may be any suitable passive image scanning device having usable resolution and sensitivity to images in the visible or infrared frequency spectra.
  • AVIDS 10 could also be used on a surface vehicle, such as a ship or a truck for identifying objects such as aircraft, ships, surface vehicles, buildings, geographic features, and landmarks. It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments of this invention without departing from the underlying principles thereof. Accordingly, it will be appreciated that this invention is also applicable to object identification applications other than those found in airborne systems for identifying seagoing vessels. The scope of the present invention should, therefore, be determined only by the following claims.

Abstract

An airborne video identification system (10) acquires video images of objects such as seagoing vessels (12), digitizes (32, 100), stores (56, 100), and processes the images with a computer (18) under interactive operator control to correlate them with images and associated textual data that are stored in a database. In the case of a seagoing vessel, the computer performs length, beam, and mast-height calculations (114) for use as a correlation pre-filter. Fuzzy logic (120) is applied to the images and correlation searches (122) are made in the database. Candidate objects found in the database are displayed (124) so a final unique identification decision (128) may be made by an operator. The database is a subset of a master database that is centrally maintained and updated. The subset database, which is selectively reloaded (144) onto the master database, is loaded into a removable disk pack (20).

Description

AIRBORNE VIDEO IDENTIFICATION SYSTEM AND METHOD
Technical Field
This invention relates to remote image acquisition, processing, and recognition and more particularly to optically acquiring images, digitizing, storing, and interactively manipulating the images with a computer, and uniquely identifying them with the aid of a database.
Background of the Invention
There are previously known apparatus and methods for remote image acquisition, processing, and recognition. In particular, U.S. Pat. No. 4,992,797 issued February 12, 1991 to Gjessing et al. for a METHOD OF DETECTION AND IDENTIFICATION OF ONE OR MORE REMOTE OBJECTS describes an airborne radar system that employs a processor to compare radar echo waveform signature data with a stored library of waveform signature data relating to various possible objects. The processor compares the waveform signature data in real-time using a weighted algorithm that considers aspect angle, size, and shape of the object in determining a most probable object identification, such as a class of aircraft like a 747 or a 737. The system provides "some form of a display" to an operator. Unfortunately, radar-based identification systems are active, which renders them detectable and unnecessarily expensive. Moreover, inadequate object detail resolution prevents unique object identification; i.e., an object can be identified as an aircraft, even a 737, but not, for example, as a particular United Airlines 737.
Another remote object identifying system employing higher resolution data acquisition techniques is described in U.S. Pat. No. 5,074,673 issued December 24, 1991 to Sowell et al. for a LASER-BASED TARGET DISCRIMINATOR in which an airborne imaging system employs active laser scanning for real-time identification of surface objects. The laser scanning acquires differential range data that are processed into topological information relating to the scanned objects. The topological information is compared with stored database information to find specific aspects of the scanned object, such as length and width, that are associated with object classes such as tanks, trucks, and trains. The system does not provide for any operator viewing or interaction.
While the laser-based system has more resolution than the radar-based system, it still employs a relatively expensive active signal acquisition technique and lacks sufficient resolution to provide unique object identification.
Optical scanning has been used to overcome resolution problems that plague remote object identification systems. For example, U.S. Pat. No.
4,774,572 issued September 27, 1988 to Kellar et al. for VIDEO SCAN CONVERSION describes an airborne infra-red line scanner that employs a videotape recorder to acquire video data of scanned objects and their associated features. The recorded video data are digitized to produce high-resolution image data that are stored in a bulk storage device. A system operator may directly view the recorded video data on a video monitor or may view the stored image data, reconstructed by a processor, on the video monitor. The processor also allows the operator to zoom into and roam across specific features revealed by the image data. Further processing of the image data allows the operator to compare recorded video data with stored image data such as a map library, and to display transformed images such as terrain sections. However, the system does not have an image classification or unique identification capability.
Visual spectrum optical scanning systems provide some additional benefits. For example, U.S. Pat. No. 4,442,453 issued April 10, 1984 to Verdier for a PHOTO-RECONNAISSANCE SYSTEM describes the use of photo-optical detectors (CCDs) to scan high-resolution photographic film in the visual spectrum, such that an operator may zoom into and roam across individual film frames to identify specific features. Unique object identification by a trained operator is possible, albeit not in real-time.
In another optical scanning example, U.S. Pat. No. 4,703,347 issued October 27, 1987 to Yasuda et al. for an INDIVIDUALITY DISCRIMINATING SYSTEM describes using a television camera, a digitizer, and a microprocessor to acquire a real-time digitized image of an individual in which the processor extracts for display selected facial features of the image for comparison by an operator with a previously digitized and compressed image of the person's corresponding facial features that are stored on an identification card. The system displays facial features associated with only a single individual per identification card, provides no verification or identification capabilities, and cannot uniquely identify remote objects. What is needed, therefore, is a cost-effective, image-based remote object identification system that can uniquely identify objects in real-time.

Summary of the Invention

An object of this invention is, therefore, to provide an apparatus and a method for acquiring images of remote objects, digitizing, storing, and interactively manipulating the images with a computer to identify an object classification and a unique object identity within the classification with the aid of a multimedia database. Another object of this invention is to provide an apparatus and a method for uniquely identifying remote objects in real-time or near real-time by interactive cooperation between an operator and a computer to manipulate images of the objects and counterparts thereof stored in a database.
A further object of this invention is to provide an apparatus and a method for acquiring and identifying remote objects in real-time or near real-time with the aid of a centrally maintained database that can be updated when previously unidentified objects are acquired and identified by the operator.
Accordingly, an airborne video identification system acquires video images of objects such as seagoing vessels, digitizes, stores, and processes the images in a computer under interactive operator control to correlate them with images and associated textual data that are stored in a database. In a seagoing vessel example, the computer employs a fuzzy logic correlation matching filter algorithm to correlate textual data describing the vessel with vessel data prestored in a multimedia database. The fuzzy logic algorithm employs vessel characteristics, such as length, beam, mast height, color, hull shape, material, rigging, speed, and antenna types. An additional method applied to captured images employs an adaptive wavelet algorithm having wavelet decomposition coefficients that are adapted by a back propagation neural net. Resulting data are compared with a prestored database consisting of images of known objects that were compressed in the same manner. Candidate objects found in the database are displayed so a final unique identification decision may be made by an operator. The operator may also query the database to make an independent manual correlation and identification.
The database is a subset of a master database that is centrally maintained and updated. Prior to a reconnaissance flight, the subset database is loaded into a removable disk pack for use in the airborne video identification system. During the flight the subset database may be updated with images and associated data pertaining to heretofore unidentified objects. The subset database may also be updated to reflect changes observed in previously identified objects. When the flight is completed, the removable disk pack including the updates and changes is removed from the aircraft and selectively reloaded onto the master database.
Additional objects and advantages of this invention will be apparent from the following detailed description of preferred embodiments thereof that proceeds with reference to the accompanying drawings.

Brief Description of the Drawings
Fig. 1 is a simplified pictorial block diagram showing elements of an airborne video identification system of this invention.
Fig. 2 is a simplified pictorial block diagram showing a centralized master database development, maintenance, and updating system of this invention.

Fig. 3 is a flow chart showing the steps employed by the system of Fig. 1 to uniquely identify a remote object.

Fig. 4 is a flow chart showing the steps employed by the system of Fig. 2 to generate, maintain, and update the master database.
Detailed Description of Preferred Embodiments

Fig. 1 shows an airborne video identification system ("AVIDS") 10 of this invention that is useful for uniquely identifying seagoing vessels such as a trawler 12.
AVIDS 10 includes a video camera 14, a camera mount 16, and a computer 18 for running application software that performs image enhancement, identification, and retrieval of images from a database that is stored in a removable media hard disk drive 20.
Video camera 14 is preferably a commercially available video camcorder, such as a model L1 manufactured by Canon Corporation, Tokyo, Japan, which has a
410,000 pixel CCD array with 0.5 Lux light sensitivity. Video camera 14 preferably views trawler 12 through a 15:1 600-millimeter zoom lens 22, or an optional lens such as a 3,000-millimeter fixed telephoto lens. Video data acquired by video camera 14 are stored in a standard analog format on a cassette tape 24. The preferred airborne application may employ both high and low altitude reconnaissance, in which case AVIDS 10 may have two cameras with different lenses, or one camera with a turret of lenses.
Camera mount 16 is mounted to a platform 26, such as an aircraft bulkhead, truck bed, seagoing vessel, or a human, to provide operational stability and a range of azimuth and elevation pointing angles for video camera 14. Camera mount 16 is preferably a hand-operated mount having three axes of freedom. Camera mount 16 allows video camera 14 to be pointed at least plus and minus 90 degrees in elevation and at least plus and minus 130 degrees in azimuth relative to a horizontal platform. Camera mount 16 may optionally include encoders (not shown) for transmitting angular information through a mount interface 28 and a system bus 30 to computer 18.
Computer 18 is preferably a commercially available personal computer such as a 486 DX2-66 that is configured with a video input board 32 such as a "Video Blaster" from Creative Laboratories; video frame digitizing and storage software; a 16-Megabyte RAM memory 34; 1 Megabyte of cache memory (not shown); a 512-Megabyte hard disk drive such as a model CP30540 manufactured by Conner Peripherals; removable media hard disk drive 20 such as a Bernoulli 150-Megabyte removable cartridge drive manufactured by Iomega; an operating system (not shown) such as Microsoft® DOS 6.2 with Windows™; a relational database program (not shown) such as Microsoft® Access™; a software development system such as Microsoft® Visual Basic™ 3.0; a neural net shell such as the Visual Basic Ward system; a display 36 such as a Super VGA color monitor having 768 x 1024 or 1280 x 1024 pixel addressability; a graphic user interface ("GUI") 38 such as a track ball; and a standard 101-key keyboard 40.
Computer 18 runs application software (not shown) that performs image preprocessing operations such as video frame grabbing, edge detection, noise suppression, and object range determining; graphic user interface operations for Windows™ environment control and image coordinate selection for calculating object dimensions; image matching operations that apply algorithms to the video image for correlation with and selection of candidate database images; and image database operations such as indexing, storing, and data retrieval.
In particular, image preprocessing entails using video input board 32 with which an operator scans through and "grabs" selected video frames originating from real-time video from camera 14 or prerecorded video from cassette tape 24. Operator-selected video detection thresholds reduce video background noise to enhance selected images, and edge detection algorithms prepare the data for subsequent processing.
After a video image is selected and preprocessed, the operator performs graphic user interface operations, such as image measurement and associated data selection and entry. Assuming that the selected image is of trawler 12, image measurement entails using GUI 38 to drag and size a box 42 tangentially surrounding the image of trawler 12 displayed on display 36 and using a graphic cursor 44 to select key points, such as the prow, stern, and mast top(s). Image coordinates selected by the box and graphic cursor operations provide computer 18 with data required for subsequent beam and length calculations. The length of an observed object can be determined by knowing the focal length and aperture of lens 22 on camera 14, the relative length of trawler 12 as viewed on display 36, and the range to the object. The range to the object can be determined by use of optional on-board radar or laser range finder systems. The range may also be calculated from camera altitude and camera sighting angle data relative to the zenith, which data are either manually or automatically entered into computer 18. Data selection entails use of GUI 38 to pick from among appropriate dialogue buttons displaying basic descriptive terms on display 36, such as new/old or wood/metal. Data entry entails entering with keyboard 40 camera elevation and azimuth angles and aircraft orientation angles. Automated entry of the foregoing angle data may optionally entail the above-described encoders and mount interface 28, and use of a navigation data interface 46 to receive aircraft heading, camera angle data, and altitude.
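The foregoing length and range geometry can be sketched in a few lines; the pinhole-camera relation and flat-surface range formula below are a minimal illustration, and all function and parameter names are hypothetical rather than part of the disclosure.

```python
import math

def slant_range(altitude_m, nadir_angle_deg):
    # Range to a surface object computed from camera altitude and the
    # sighting angle measured from straight down (0 degrees = nadir).
    return altitude_m / math.cos(math.radians(nadir_angle_deg))

def object_length(range_m, pixel_extent, pixel_pitch_m, focal_length_m):
    # Pinhole-camera relation: true length equals range multiplied by
    # the image-plane extent (pixels times pixel pitch) over focal length.
    return range_m * (pixel_extent * pixel_pitch_m) / focal_length_m

# A camera at 1,000 meters altitude sighting 60 degrees off nadir sees
# the object at a slant range of 2,000 meters.
r = slant_range(1000.0, 60.0)
```

With a 600-millimeter lens and an object spanning 300 pixels of assumed 10-micrometer pitch at that range, the estimated length works out to 10 meters.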
Image matching operations entail using the above-described data elements, such as beam, length, number of masts, material, and age, as criteria in a database search algorithm. The search data are weighted so that absence of a data element does not preclude a successful match. The search algorithm employs "fuzzy logic" techniques that allow for uncertainty and undependability in the data elements. The fuzzy logic algorithm receives a set of image data values for an unidentified object, such as trawler 12, and compares them with a set of fuzzy membership functions corresponding to each image stored in the database. The uncertainty of each data element is encoded as an area of the corresponding fuzzy membership function. A logical AND operation is encoded as a weighted sum of value function returns at the input data value for all data elements. The weighted sum value represents a degree of match between the unknown object image and the stored image. Weight values are determined by conventional least-mean-squares adaptive techniques and are entered in the stored image database between reconnaissance flights. The above-described identification method may be augmented by representing the data elements of an unknown object by measurement distributions surrounding estimated measurement values, the widths of which are determined by an estimated uncertainty of the associated measurement. AVIDS 10 may, thereby, consider visibility, distance to object, and other factors that affect the reliability of measured object characteristics.
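The weighted-sum fuzzy matching just described might be sketched as follows; the triangular membership shape, the normalization, and the names are illustrative assumptions, not the exact form of the disclosed algorithm.

```python
def triangular(x, center, half_width):
    # Triangular fuzzy membership: 1 at the stored center value,
    # falling to 0 beyond plus or minus half_width.
    if half_width <= 0:
        return 1.0 if x == center else 0.0
    return max(0.0, 1.0 - abs(x - center) / half_width)

def match_score(observation, record, weights):
    # Weighted sum of membership returns over all data elements.
    # Missing observations contribute nothing, so absence of a data
    # element does not preclude a successful match.
    score = total = 0.0
    for key, weight in weights.items():
        if key not in observation:
            continue
        center, half_width = record[key]
        score += weight * triangular(observation[key], center, half_width)
        total += weight
    return score / total if total else 0.0
```

For example, a record whose stored length is 24 meters with a 3-meter membership half-width scores about 0.67 against an observed length of 25 meters, even when no beam measurement is available.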
In addition to the above-described data element-based matching algorithm, direct adaptive image matching may be performed. A single layer network, or "wavenet," is created over a cascaded wavelet basis in which each image stored in the database has a corresponding output node. The weights of the stored images are determined adaptively using a least-mean-squares or minimum identification error method. This is a computationally intensive "learning phase" that is preferably accomplished at a centralized location between reconnaissance flights. During a reconnaissance flight, unclassified and/or unidentified object images are received by the trained network, and a potential match to each stored image is indicated by a level of output node response for each stored image. The image match values may be added to the above-described data element-based matching method as an additional independent datum for each image.
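A toy version of such a wavenet can illustrate the idea; here a single-level Haar decomposition stands in for the cascaded wavelet basis, and plain LMS training adapts one output node per stored image. All names and the basis choice are hypothetical simplifications.

```python
def haar_coefficients(signal):
    # One level of Haar decomposition: pairwise averages and differences.
    avgs = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avgs + diffs

class Wavenet:
    # Single-layer network over wavelet features; one output node per
    # stored image, trained by least-mean-squares (LMS) adaptation.
    def __init__(self, n_features, n_images, lr=0.1):
        self.weights = [[0.0] * n_features for _ in range(n_images)]
        self.lr = lr

    def respond(self, features):
        # Output node responses; the strongest node marks the best match.
        return [sum(w * f for w, f in zip(row, features)) for row in self.weights]

    def train(self, features, target_index, epochs=50):
        # LMS: push the target node's response toward 1, others toward 0.
        for _ in range(epochs):
            out = self.respond(features)
            for i, row in enumerate(self.weights):
                err = (1.0 if i == target_index else 0.0) - out[i]
                for j, f in enumerate(features):
                    row[j] += self.lr * err * f
```

After the learning phase, presenting a stored object's wavelet signature drives that object's own output node highest, which is the match indication described above.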
Image database operations entail indexing, storing, and data retrieval on the database stored in hard disk drive 20. Database operations also entail centrally generating, maintaining, and updating a master database in which each object, such as trawler 12, has a record that includes textual fields that store characteristic features (beam, length, masts, height, material, age), textual fields that store identification data (name, owner, year built), and a field pointing at a compressed graphical image of each object. A selected subset of the master database is extracted and stored in removable media hard disk drive 20 prior to each reconnaissance flight. During the flight the subset database may be updated with images and associated data pertaining to heretofore unidentified objects. The database may also be updated to reflect changes observed in previously identified objects. When the flight is completed, the removable medium including the updated and changed subset database is removed from the aircraft for updating the master database. During the flight, the identification process entails the presentation of images from the subset database that most strongly correlate to video images that the operator has chosen for identification. Based on a visual inspection of the video images and associated data, the operator can override the system in identifying the images. At this point in the identification process, the only images stored in memory 34 are video images received by video camera 14 and directed to video input board 32. To display images of objects stored in the subset database that meet the above-described matching criteria, computer 18 examines the field pointing at the compressed graphical image of each candidate object and retrieves and decompresses the candidate images from hard disk drive 20. Conventional image compression and decompression techniques such as JPEG are employed to provide image data compression.
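A record of the kind described, with textual characteristic and identification fields plus a pointer to the compressed image, together with the subset extraction step, might be sketched as follows; the field names and predicate are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class VesselRecord:
    # Textual characteristic fields, identification fields, and a
    # pointer (file path) to the compressed graphical image.
    name: str
    length_m: float
    beam_m: float
    masts: int
    material: str
    year_built: int
    image_path: str  # e.g. a JPEG-compressed image on the disk pack

def extract_subset(master, predicate):
    # Extract the subset database of records for objects likely to be
    # encountered during a particular reconnaissance flight.
    return [record for record in master if predicate(record)]
```

A flight over a coastal fishing ground might, for instance, extract only vessels under 30 meters before loading the disk pack.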
Memory-size limitations allow storage of only a few minutes of real-time camcorder video frames, after which time all but selected frames are deleted. Cassette tape 24 stores image (and audio) data for an entire flight.
Fig. 2 shows a preferred central data processing system 50 suitable for generating, maintaining, and updating the above-described master database. System 50 includes a computer 52, a system bus 53, a removable media hard disk drive 54, a memory 56, a display 58, a keyboard 60, and a GUI 62, all of which are substantially identical to those in AVIDS 10. System 50 further includes a mass storage system 64 sized to store the master database and its associated images, and a document scanner 66 with an associated scanner controller 68. System 50 functions to generate, maintain, and update the master database, perform database indexing, check data integrity, compress and decompress image data, and extract the subset database for use in AVIDS 10.
Records in the master database are generated by employing document scanner 66 to digitize a document 70, such as selected pages from JANE'S FIGHTING SHIPS, or by receiving digital data stored on a removable medium compatible with removable media in hard disk drive 54. Associated textual data entry and image editing are performed by employing keyboard 60 and GUI 62.
Trawler 12 may be but one trawler in a fleet of similar trawlers built by a particular shipyard. Its uniqueness may be identified by virtue of particular image criteria, such as mast height and configuration, markings, unique deck equipment, or port hole configuration, or by particular textual data criteria, such as expected operating location, sighting history, or identifiable name. Fig. 3 shows the processes employed by AVIDS 10
(Fig. 1) to uniquely identify an object such as trawler 12.
During a reconnaissance flight, an unidentified seagoing vessel, for example trawler 12, is detected. An object acquisition process block 100 entails an operator pointing video camera 14 toward trawler 12 such that video input board 32 and cassette recorder 24 acquire a video data stream. The video data stream is stored in memory 34 as a sequence of video frames. A selecting process block 102 entails operator viewing of the stored video frames and selection of a viable frame of video data showing trawler 12.
A displaying process block 104 entails
"freezing" the viable frame of video data for display on display 36. This process is commonly referred to as displaying a "freeze frame" or "frame grabbing."
A posting process block 106 associates the acquisition time and geographic coordinates with the selected freeze frame of trawler 12. Posting process block 106 is carried out automatically if AVIDS 10 has navigation interface 46, or manually by operator entry through keyboard 40.
An entering process block 108 entails operator entry of observable relevant object data such as number of masts, material (wood/steel), age (old/new), and name.
A graphic interface process block 110 entails operator selection with GUI 38 of displayed freeze frame image end points.
An entering process block 112 entails operator entry of platform 26 altitude, video camera 14 elevation angle, or slant range to trawler 12, or AVIDS 10 enters the required data automatically by means of navigation data interface 46, camera mount 16 encoders, and mount interface 28. A calculating process block 114 entails computer
18 calculating an estimated length for trawler 12 from data provided by entering process block 112 and from the freeze frame image length (number of pixels) .
A video enhancement process block 116 entails operator selection of at least one of a video threshold level, an edge detection technique, and a wavelet compression algorithm to reduce freeze frame image clutter and background noise.
A tracing process block 118 entails operator use of GUI 38 to trace with line segments the freeze frame image outline of trawler 12. The line segment outline image is stored for subsequent correlation with object images in the database stored in hard disk drive 20.
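Storing the traced outline as line segments might be sketched as follows; representing the polyline as point pairs and summarizing it by perimeter are illustrative choices, not the disclosed correlation method.

```python
import math

def outline_segments(points):
    # Close the operator-traced outline into a list of line segments,
    # each a pair of (x, y) points; this compact polyline serves as
    # the stored outline image for subsequent correlation.
    n = len(points)
    return [(points[i], points[(i + 1) % n]) for i in range(n)]

def perimeter(segments):
    # Total outline length, one simple scalar usable in later matching.
    return sum(math.dist(a, b) for a, b in segments)
```

A traced 3-4-5 right triangle, for example, yields three segments with a perimeter of 12.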
A querying process block 120 entails operator query of the database with the estimated object length calculated by calculating process block 114 and data entered during entering process block 108.
An optional neural net applying process block 121 employs an adaptive wavelet algorithm having wavelet decomposition coefficients that are adapted by a back propagation neural net. Resulting data are compared with a prestored database consisting of images of known objects that were compressed in the same manner. A searching process block 122 employs computer 18 to search the database by using object characteristics and data filters from querying process block 120 to find and retrieve the most likely candidate object records.
A displaying process block 124 simultaneously displays the freeze frame image and retrieved candidate images on display 36.
A graphic editing process block 126 entails operator positioning and resizing (panning and zooming) of the retrieved candidate images for overlaying with the freeze frame image.
An identifying decision block 128 entails operator selection of the retrieved candidate image that most closely matches the freeze frame image by comparing the video images, outline images, and associated textual data characteristics. The operator may decide that the freeze frame image is of an unidentified object that requires addition to the subset database.
Fig. 4 shows the processes employed by central data processing system 50 (Fig. 2) to generate, maintain, and update the master database in association with the subset database. Determination of adaptive filter coefficients is accomplished at central data processing system 50 and entered into the master database stored in mass storage system 64. A digitizing process block 130 entails document scanner 66 scanning a document 70 to capture image and textual data associated with an object, for example trawler 12, such that a master database record of the object is generated for storage in mass storage system 64. Digitizing process block 130 is repeated a sufficient number of times to generate a comprehensive master database of objects. Digitizing process block 130 also entails record generation by transfer of digital data from a medium in hard disk drive 54 and operator manual data entry.
An extracting process block 132 utilizes querying and searching processes such as those of process blocks 120 and 122 to extract from the master database a subset database that primarily includes records of objects that are likely to be detected and acquired for identification during a particular reconnaissance flight.
A loading process block 134 loads the subset database records extracted by process block 132 onto the medium in removable hard disk drive 54. An installing process block 136 entails removing the medium containing the subset database from removable hard disk drive 54 and installing it in hard disk drive 20 of AVIDS 10 for use during a subsequent reconnaissance flight. An identifying process block 138 identifies objects and updates and/or adds records to the subset database during the reconnaissance flight in accordance with the above-described process and decision blocks 100 to 128 shown in Fig. 3. A removing process block 140 entails removing the medium from hard disk drive 20 in AVIDS 10 and loading the medium into hard disk drive 54 of central data processing system 50.
A retrieving process block 142 employs the operator to selectively retrieve from the subset database images and associated data records revised and/or added during the reconnaissance flight. The operator is accorded flexible record selection based on object location, time of acquisition, name, type, or associated data.
An updating process block 144 employs computer 52 to update master database records stored in mass storage system 64 with selected records and associated digital images retrieved from the subset database in hard disk drive 54.
Skilled workers will recognize that portions of this invention may have alternative embodiments. For example, an enhanced capability video identification system may not require operator interaction during the image acquisition and identification process. Use of spatially cascading adaptive wavelet compressions on which a single detection back propagation neural net operates may isolate an object from the background clutter and noise of a viable freeze frame image. Edge detection techniques may be used to determine identifiable extrema. Moreover, analysis of frame sequence aspects may allow the enhanced capability system to select optimal video freeze frames.
Various components of AVIDS 10 and central data processing system 50 may have alternate embodiments. For example, hard disk drives 20 and 54 may be replaced by removable media memory devices employing magnetic tape, floppy disks, optical disks, or cartridge disk packs. Displays 36 and 58 may be either color or monochrome CRT, LCD, plasma, or other display types of any usable resolution. GUIs 38 and 62 may include a mouse, joystick, graphic tablet, or cursor keys on keyboards 40 or 60. Computers 18 and 52 may include any computing platform capable of executing the applications programs and controlling the components of this invention and may run under control of any suitable operating system with or without a Windows-like environment. Scanner 66 may be a video camera-based frame grabber system, an optical line scanner, or any other scanning means suitable for digitizing printed images at a resolution suitable for use in AVIDS 10. Scanner 66 may optionally have text scanning and recognizing capability useful for entry of relevant textual data in master database records. Finally, video camera 14 may be any suitable passive image scanning device having usable resolution and sensitivity to images in the visible or infrared frequency spectra.
Of course, AVIDS 10 could also be used on a surface vehicle, such as a ship or a truck for identifying objects such as aircraft, ships, surface vehicles, buildings, geographic features, and landmarks. It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments of this invention without departing from the underlying principles thereof. Accordingly, it will be appreciated that this invention is also applicable to object identification applications other than those found in airborne systems for identifying seagoing vessels. The scope of the present invention should, therefore, be determined only by the following claims.

Claims

1. A substantially real time method of classifying and identifying a remotely sensed object, comprising: sensing the remote object electro-optically; generating image data and textual data associated with the remotely sensed object; displaying on a display an enhanced image of the object; storing compressed image data derived from the enhanced image of the object; retrieving from a computer database records related to the textual data; and identifying the object uniquely by comparing the retrieved records with the compressed image data.
2. The method of claim 1 in which the sensing step further includes sensing the remote object with at least one of an infrared sensor and a radar system and combining electro-optical data with data generated by at least one of the infrared sensor and the radar system.
3. The method of claim 1 in which the sensing step comprises: receiving multiple frames of object image data; selecting a viable one of the frames for object identification; storing the selected viable frame of the object image data; and displaying the viable frame of object image data.
4. The method of claim 3 in which the textual data include an acquisition time and a geographic position that are associated with the sensed object at a time when the selected viable frame was acquired.
5. The method of claim 4 in which the geographic position data are generated automatically by a navigation system interfaced with the computer.
6. The method of claim 1 in which the sensed object is a seagoing vessel and the generated textual data include any of a vessel length, a vessel beam, a vessel position, a vessel course, a vessel speed, a rigging type, a mast height, a number of masts, an antenna type, a hull material, a hull shape, a hull color, a vessel age, and a vessel name.
7. The method of claim 1 in which the sensed object is one of a seagoing vessel, a mobile land vehicle, a fixed man-made object, and a naturally occurring terrain feature, and the method is carried out onboard one of an aircraft, a seagoing vessel, a mobile land vehicle, and a human mounted platform.
8. The method of claim 1 in which the generated textual data include an object length that is derived from a range to the object and a width of the object image data.
9. The method of claim 8 in which the width of the object image data is determined by the steps of: selecting with a graphic user interface a pair of extrema associated with the displayed object; entering the range to the object; and calculating the width by employing the range and a number of pixel positions between the selected extrema.
10. The method of claim 8 in which the range is determined by the steps of: entering from the navigation system an altitude datum into the computer; entering from a sensor mount encoder an elevation angle datum into the computer; and computing the range to the object.
11. The method of claim 8 in which the range is determined from a radar system or a laser-based range finder.
12. The method of claim 1 in which the displaying an enhanced image step further comprises manipulating the image data to reduce an image clutter and an image background noise.
13. The method of claim 12 in which the manipulating step includes any of data panning, data zooming, data thresholding, and data edge detection.
14. The method of claim 1 in which the storing compressed image data step comprises: providing a graphic user interface for selectively generating line segments on the display; tracing an outline of the enhanced image of the object with line segments; and storing the line segments as the compressed image data representing the displayed image.
15. The method of claim 1 in which the storing compressed image data step comprises: processing the image data with an adaptive wavelet compression algorithm to generate compressed image data; and storing the compressed image data representing the displayed image.
16. The method of claim 1 in which the retrieving step further comprises searching the database with the textual data entered during the generating step to retrieve and display candidate object record data.
17. The method of claim 1 in which the retrieving step further comprises applying a fuzzy logic matching filter to the textual data.
18. The method of claim 17 in which the identifying step further comprises: comparing the retrieved candidate object record data with the compressed image data; and identifying the retrieved candidate object record that most closely matches the compressed image data generated from the sensed remote object.
19. A method of generating and updating a master object database, comprising: generating at a central data processing system a set of master object database records that are associated with a corresponding set of physical objects; extracting from the master object database a subset database including records related to a particular set of the physical objects that are expected to be acquired during a reconnaissance activity; loading the subset database into a portable storage medium; installing the portable storage medium in an object identification system for use during the reconnaissance activity; acquiring and identifying objects during the reconnaissance activity by comparing the acquired objects with the records stored in the subset database; updating the subset database with data associated with the acquired objects; removing the portable storage medium from the object identification system after the reconnaissance activity; loading the removable storage medium into the central data processing system; retrieving from the updated subset database the records updated during the reconnaissance activity; and updating the master object database with the records retrieved from the updated subset database.
20. The method of claim 19 in which the generating step includes employing an adaptive wavelet algorithm to generate adaptive wavelet compression coefficients for a set of known objects.
21. The method of claim 19 in which the retrieving step further includes: processing the updated subset database with a fuzzy logic matching filter to update a corresponding set of fuzzy logic parameters; and updating the master object database with the updated fuzzy logic parameters.
22. The method of claim 19 in which the generating step comprises document scanning.
23. The method of claim 19 in which the reconnaissance activity is carried out onboard one of an aircraft, a seagoing vessel, a mobile land vehicle, and a human mounted platform.
24. The method of claim 19 in which the physical objects are one of a seagoing vessel, a mobile land vehicle, a fixed man-made object, and a naturally occurring terrain feature.
25. The method of claim 19 in which each of the master and subset database records include a textual data field related to a particular physical object and a pointer to stored image data representing the physical object.
26. The method of claim 19 in which the portable storage medium is a removable medium disk drive cartridge.
27. A system for uniquely identifying a remotely sensed object, comprising: an electro-optical sensor acquiring an image of the remote object; an object identification system generating image data and textual data associated with the remotely sensed object; a display displaying an enhanced image of the object; a memory storing compressed image data derived from the enhanced image of the object; a computer retrieving database records related to the textual data; and an operator uniquely identifying the object by comparing the retrieved database records with the compressed image data.
28. The system of claim 27 further including at least one of an infrared sensor and a radar system and in which the object identification system combines the image data with data generated by at least one of the infrared sensor and the radar system.
29. The system of claim 27 in which the electro-optical sensor includes a CCD array-based camcorder that supplies multiple video data frames to a video input board that enables the object identification system to select a viable one of the video data frames for further processing.
30. The system of claim 27 in which the textual data include an acquisition time and a geographic position that are associated with the sensed object at a time when the selected viable video data frame was acquired.
31. The system of claim 30 in which the geographic position data are generated automatically by a navigation system interfaced with the computer.
32. The system of claim 27 in which the sensed object is a seagoing vessel and the generated textual data include any of a vessel length, a vessel beam, a vessel position, a vessel course, a vessel speed, a rigging type, a mast height, a number of masts, an antenna type, a hull material, a hull shape, a hull color, a vessel age, and a vessel name.
33. The system of claim 27 in which the object identification system is mounted onboard one of an aircraft, a seagoing vessel, a mobile land vehicle, and a human-mounted platform.
34. The system of claim 27 in which the generated textual data include an object length that is derived from a range to the object and a width of the object image data.
35. The system of claim 34 in which the width of the object image data is determined by selecting with a graphic user interface a pair of extrema associated with the displayed enhanced image of the object and entering the range to the object into the computer, which computes the width by employing the range and a number of pixel positions between the selected extrema.
36. The system of claim 34 in which the range is computed by entering into the computer a sensor altitude from a navigation system and a sensed object elevation angle from a sensor mount encoder.
37. The system of claim 34 in which the range is determined from a radar system or a laser-based range finder.
38. The system of claim 27 in which the enhanced image of the object is generated by processing the image data to reduce image clutter and image background noise.
39. The system of claim 38 in which the processing includes any of data panning, data zooming, data thresholding, and data edge detection.
40. The system of claim 27 in which the stored compressed image data are generated by using a graphic user interface to trace with line segments an outline of the enhanced image of the object.
41. The system of claim 27 in which the computer searches the database records in response to the textual data and retrieves candidate object image records for presentation on the display.
42. The system of claim 41 further including a fuzzy logic matching filter.
43. The system of claim 41 in which the operator compares the retrieved candidate image records with the compressed image data of the acquired object to uniquely identify a particular one of the retrieved candidate object image records that most closely matches the compressed image data.
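The subset extraction and merge-back workflow recited in claim 19 can be sketched in miniature. This is an illustrative Python model only; the record fields (`name`, `length_m`), the object identifiers, and the function names are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the claim-19 workflow: extract a subset database,
# update it in the field, then merge the changes back into the master.

def extract_subset(master, expected_ids):
    """Copy only the records expected to be needed during the activity."""
    return {oid: dict(rec) for oid, rec in master.items() if oid in expected_ids}

def update_record(subset, oid, **observations):
    """Record field observations against an acquired object."""
    subset[oid].update(observations)
    subset[oid]["updated"] = True

def merge_back(master, subset):
    """Fold only the updated records back into the master database."""
    for oid, rec in subset.items():
        if rec.pop("updated", False):
            master[oid].update(rec)

master = {
    "V100": {"name": "Orion", "length_m": 42.0},
    "V200": {"name": "Pelican", "length_m": 18.5},
}
subset = extract_subset(master, {"V100"})      # load onto portable medium
update_record(subset, "V100", length_m=43.1)   # field observation
merge_back(master, subset)                     # back at the central system
print(master["V100"]["length_m"])              # -> 43.1
```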
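Claim 20 recites an adaptive wavelet algorithm for generating compression coefficients. The claims do not disclose that algorithm, so the sketch below substitutes a single-level Haar transform with simple detail thresholding to show the general idea of wavelet compression; the sample values and the threshold are invented.

```python
# Stand-in for the claimed "adaptive wavelet algorithm": one Haar level,
# small detail coefficients zeroed out for compression.

def haar_step(signal):
    """One level of the Haar transform: pairwise averages plus details."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

def compress(signal, threshold=0.5):
    """Keep the averages; suppress detail coefficients below the threshold."""
    avgs, dets = haar_step(signal)
    return avgs, [d if abs(d) > threshold else 0.0 for d in dets]

outline = [4.0, 4.0, 8.0, 6.0, 5.0, 5.0, 3.0, 7.0]  # e.g. hull profile samples
avgs, dets = compress(outline)
print(avgs)  # [4.0, 7.0, 5.0, 5.0]
print(dets)  # [0.0, 1.0, 0.0, -2.0]
```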
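Claim 21's fuzzy logic matching filter can be illustrated, again only as an assumption about how such a filter might score candidates: triangular membership functions over a few observed vessel attributes, averaged into one match score. The attribute names and tolerances are hypothetical.

```python
# Hedged illustration of a fuzzy matching filter over vessel attributes.

def membership(observed, expected, tolerance):
    """1.0 at an exact match, falling linearly to 0.0 at the tolerance edge."""
    return max(0.0, 1.0 - abs(observed - expected) / tolerance)

def match_score(observation, candidate, tolerances):
    """Average membership across all compared attributes."""
    scores = [membership(observation[k], candidate[k], tolerances[k])
              for k in tolerances]
    return sum(scores) / len(scores)

obs = {"length_m": 43.0, "mast_height_m": 12.0}
cand = {"length_m": 42.0, "mast_height_m": 12.5}
tol = {"length_m": 5.0, "mast_height_m": 2.0}
print(match_score(obs, cand, tol))  # averages 0.8 and 0.75
```

Candidate records can then be ranked by this score before being presented to the operator.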
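The width computation of claim 35 combines the pixel span between the operator-selected extrema with the range to the object. A minimal small-angle sketch, assuming a hypothetical field of view and sensor resolution (the claims specify neither):

```python
import math

def object_width(range_m, pixel_left, pixel_right,
                 fov_deg=2.0, pixels_across=640):
    """Width subtended by the pixel span at the given range (small-angle).

    fov_deg and pixels_across are assumed sensor constants, not values
    from the patent.
    """
    pixel_span = abs(pixel_right - pixel_left)
    angle_rad = math.radians(fov_deg) * pixel_span / pixels_across
    return range_m * angle_rad

# Operator marks extrema at pixels 100 and 420 with the vessel at 1500 m.
print(round(object_width(1500.0, 100, 420), 1))  # -> 26.2
```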
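Claim 36's range computation from sensor altitude and mount elevation angle reduces, on a flat-earth assumption that ignores refraction and earth curvature, to simple trigonometry:

```python
import math

def slant_range(altitude_m, depression_deg):
    """Slant range to a surface object from altitude and look-down angle."""
    return altitude_m / math.sin(math.radians(depression_deg))

# 3000 m altitude, sensor mount reading 30 degrees below horizontal.
print(round(slant_range(3000.0, 30.0), 1))  # -> 6000.0
```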
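Claims 38 and 39 generate the enhanced image by clutter and noise reduction, including data thresholding and data edge detection. A toy pure-Python sketch on a tiny grayscale grid; the threshold level and the neighbor-difference edge rule are illustrative, and a real system would use optimized image-processing routines:

```python
def threshold(img, level):
    """Binarize: suppress background pixels below the level."""
    return [[1 if px >= level else 0 for px in row] for row in img]

def edges(binary):
    """Mark pixels whose right or lower neighbor differs (crude edge map)."""
    h, w = len(binary), len(binary[0])
    return [[1 if (x + 1 < w and binary[y][x] != binary[y][x + 1]) or
                  (y + 1 < h and binary[y][x] != binary[y + 1][x]) else 0
             for x in range(w)] for y in range(h)]

img = [
    [10, 12, 11, 13],
    [11, 200, 210, 12],
    [10, 205, 198, 11],
    [12, 13, 12, 10],
]
b = threshold(img, 100)  # bright 2x2 object survives, background suppressed
print(b)
print(edges(b))          # outline of the thresholded object
```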
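Claim 40 compresses the enhanced image by tracing its outline with line segments. One way to see why that compresses well: collinear trace points can be dropped, keeping only the vertices where the traced direction changes. The sketch below is an assumption about the encoding, not the patent's actual scheme.

```python
def compress_outline(points):
    """Keep only the vertices where the traced direction changes."""
    def direction(a, b):
        return (b[0] - a[0], b[1] - a[1])
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        if direction(prev, cur) != direction(cur, nxt):
            kept.append(cur)
    kept.append(points[-1])
    return kept

# Seven traced points collapse to the four segment endpoints.
trace = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (2, 2)]
print(compress_outline(trace))  # -> [(0, 0), (3, 0), (3, 2), (2, 2)]
```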
PCT/US1995/010027 1994-08-10 1995-08-09 Airborne video identification system and method WO1996005569A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU32780/95A AU3278095A (en) 1994-08-10 1995-08-09 Airborne video identification system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US28894394A 1994-08-10 1994-08-10
US08/288,943 1994-08-10

Publications (1)

Publication Number Publication Date
WO1996005569A1 true WO1996005569A1 (en) 1996-02-22

Family

ID=23109334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1995/010027 WO1996005569A1 (en) 1994-08-10 1995-08-09 Airborne video identification system and method

Country Status (2)

Country Link
AU (1) AU3278095A (en)
WO (1) WO1996005569A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107229910A (en) * 2017-05-18 2017-10-03 北京环境特性研究所 Remote sensing image icing-lake detection method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4396944A (en) * 1981-09-15 1983-08-02 Phillips Petroleum Company Video image size scaling
US4887304A (en) * 1987-09-30 1989-12-12 Raytheon Company Library image optimization
US5150431A (en) * 1990-01-26 1992-09-22 Brother Kogyo Kabushiki Kaisha Device for converting normal outline data into compressed outline data, or vice versa
US5164829A (en) * 1990-06-05 1992-11-17 Matsushita Electric Industrial Co., Ltd. Scanning velocity modulation type enhancement responsive to both contrast and sharpness controls
US5168529A (en) * 1988-08-29 1992-12-01 Raytheon Company Confirmed boundary pattern matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GEO INFO SYSTEMS, Jan. 1994, BOSSLER et al., "Digital Mapping on the Ground and From the Air", pages 44-48. *
KINGSTON TECHNOLOGY CORPORATION, PRODUCT CATALOG, issued 1994, "Fixed/Removable Mass Storage for Any Platform". *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2344205A (en) * 1998-11-26 2000-05-31 Roke Manor Research Vehicle identification
GB2344205B (en) * 1998-11-26 2003-04-30 Roke Manor Research Method of and apparatus for vehicle identification
WO2003092838A2 (en) * 2002-04-29 2003-11-13 Prozone Holdings Limited System and method for processing sport events data
WO2003092838A3 (en) * 2002-04-29 2004-04-01 Prozone Holdings Ltd System and method for processing sport events data

Also Published As

Publication number Publication date
AU3278095A (en) 1996-03-07

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LT LU LV MD MG MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TT UA UG US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE MW SD SZ UG AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA