WO2004027348A2 - A method of using a self-locking travel pattern to achieve calibration of remote sensors using conventionally collected data - Google Patents

A method of using a self-locking travel pattern to achieve calibration of remote sensors using conventionally collected data

Info

Publication number
WO2004027348A2
WO2004027348A2 (PCT/US2003/028727)
Authority
WO
WIPO (PCT)
Prior art keywords
flight
lines
sensor device
substantially parallel
remote sensing
Prior art date
Application number
PCT/US2003/028727
Other languages
French (fr)
Other versions
WO2004027348A3 (en)
Inventor
Tuy Vu Mai
Original Assignee
M7 Visual Intelligence, Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=31992013&patent=WO2004027348(A2) ("Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License)
Application filed by M7 Visual Intelligence, Lp
Priority to AU2003278803A (AU2003278803A1)
Priority to CA002534966A (CA2534966A1)
Publication of WO2004027348A2
Publication of WO2004027348A3

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C25/00: Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to group G01S17/00
    • G01S7/497: Means for monitoring or calibrating

Definitions

  • This invention relates generally to the field of imaging using remote sensors. More specifically, this invention relates to a method of calibrating a vehicle-mounted remote sensor device using remote sensing data collected during conventional operation of the vehicle.
  • Remote sensing involves the acquisition of information or data about a distant object or system without being in physical contact with it.
  • Most remote sensing instruments are designed to analyze the characteristics of the electromagnetic spectra reflected by objects (their spectral signatures) to allow one to determine some of the objects' properties.
  • Human vision uses the same principle when using color (the sensation produced when light of different wavelengths falls on the human eye) to identify objects.
  • The sensors used in remote sensing make it possible to broaden the field of analysis to include parts of the electromagnetic spectrum that are well beyond visible light, such as the ultraviolet (<0.3 μm), visible (0.4-0.7 μm), near-infrared (0.7-1.5 μm) and thermal infrared (up to 1000 μm, or 1 mm) ranges.
  • remote sensing technology is used in a variety of applications in fields such as hydrology, geology, environment, transportation, ecology, and earthquake engineering.
  • One particular application involves airborne imaging where remote sensors are placed on-board aircraft to make observations and images of the Earth.
  • These airborne remote sensor systems generally use either a mechanical scanning technique or a linear array, along with aircraft motion, to acquire terrestrial imagery.
  • each image scene that is collected from a target area consists of a two-dimensional grid of discrete cells, each of which is referred to as a pixel.
  • For scanning sensors, adjacent pixels are acquired at different times, while for linear array sensors, adjacent rows of pixels are acquired at different times. Attitude data, meanwhile, are sampled once per scan revolution. Consequently, any change in the direction of the aircraft's velocity or attitude results in geometric distortions for different regions within the two-dimensional image.
  • Sufficient information is not available to obtain accurate records of the sensor's location or its attitude parameters at the appropriate instant. Therefore, the collected data requires sophisticated and expensive post-mission processing to improve upon the geometric fidelity and to achieve a positioning accuracy that meets the user's requirements.
  • One method of calibrating a remote sensor is to place calibration targets on the target area that is to be sensed.
  • Panels made of cloth have been used as calibration targets but are expensive, difficult to handle, require intensive effort to lay out in a field, are easily damaged, and usually must be gathered up after the calibration exposure is completed.
  • deploying calibration targets incurs significant labor costs when sites are remote or when images must be acquired frequently.
  • Another calibration target is described in U.S. Patent No. 6,191,851 (Kirkham et al.). Kirkham et al. discloses a calibration target that can be left in place adjacent to or in the field of interest to permit automatic calibration of the remote sensing system.
  • the calibration target must still be deployed in or near the area to be imaged to provide the imagery characteristics of the target area in order to calibrate the data received by the remote sensor.
  • the aspects for cost reduction include equipment and material costs, the mission execution and verification process, reduction of ground support tasks, and the efficiency and accessibility of deriving accurate position information from remotely sensed images.
  • the present invention provides a method of calibrating a remote sensing system employed in a vehicle, such as an aircraft, using remote sensing data collected during conventional vehicle operation.
  • the method includes mounting at least one remote sensor on a vehicle and moving the vehicle in a self-locking pattern over a target area, the movement comprising any pattern that produces at least three substantially parallel travel lines out of a group of three or more lines, at least one of which travel lines is in an opposing direction to the other substantially parallel travel lines. Swath widths are generated for each substantially parallel travel line with the remote sensor device.
  • Remote sensing data is collected from the target area during vehicle movement and input into a computer to calculate calibration data.
  • the calibration data is applied to the remote sensing data to remove bias in image output.
  • the present method further includes mounting at least one remote sensor device on an aircraft to generate remote sensing data of a target area below.
  • the method uses a self-locking flight pattern having a number of parallel flight lines arranged so that an individual flight line has one adjacent flight line oriented in a matching direction and the other adjacent flight line oriented in an opposite or crossing direction. Additional flight lines outside the target area of interest can be added to the left and right boundary of the target area. These extra outer-boundary lines are not themselves required, but are present to ensure each flight line in the target area of interest has two adjacent lines.
  • a computer is used in post-processing to determine the boresight angles and, if needed, the range offset of the remote sensor device using overlapped areas of adjacent parallel flight lines.
  • the present invention further includes a method to generate an estimated horizontal displacement error and vertical displacement error using parallax values found in the overlapped areas of adjacent flight lines.
  • the estimated horizontal displacement error is the standard deviation of the horizontal displacement errors of a sample of objects having images separated by a certain distance in the overlapped areas.
  • the vertical displacement error is the standard deviation of a sampling of vertical displacement errors; the sampling taken so that each flight line contributes the same number of objects spread evenly along the flight line.
  • the present invention also provides a remote sensing system utilizing the above calibration method.
  • the remote sensing system includes a remote sensor device, an aircraft flying in a self-locking flight pattern and a computer adapted to generate calibration data.
  • FIG. 1 depicts a block diagram of the on-board remote sensing system of a preferred embodiment of the present invention
  • FIG. 2 depicts a remote sensor device of the system of the present invention
  • FIG. 3 depicts a simplified block diagram of a LIDAR remote sensor device in the preferred embodiment of the invention
  • FIG. 4 depicts a self-locking flight pattern of a preferred embodiment of the present invention
  • FIG. 5 depicts a three-axis coordinate system of the present invention
  • FIG. 6 depicts an image plane of the target area of the present invention
  • FIG. 7 depicts a remote sensor device attached to a moving aircraft
  • FIG. 8 depicts a self-locking flight pattern used to calculate the yaw angle
  • FIG. 9 depicts a self-locking flight pattern used to calculate the roll angle
  • FIG. 10 depicts a self-locking flight pattern used to calculate the pitch angle
  • FIG. 11 depicts a self-locking flight pattern used to calculate the range offset.
  • FIG. 1 depicts an aircraft on-board remote sensing system that can utilize a preferred embodiment of the method of the present invention.
  • the on-board remote sensing system has at least one remote sensor device 10 that is designed to obtain data of a site flown over by an aircraft.
  • the remote sensor device 10 is associated with a computer 12 suited to form, select and correct images of the site flown over.
  • the computer 12 is connected to a positioning device 14 to allow continuous association of geographic data with the images acquired.
  • the computer 12 can also be connected to an attitude- sensing device 16 whose indications allow readjustment of the images acquired according to the trajectory of the aircraft.
  • the on-board system can further comprise a memory unit 18 and navigation guidance system 20 to provide immediate feedback to the pilot as to the position of the aircraft relative to the planned flight lines.
  • This system receives position data from real-time positioning device 22 that can include a differential GPS unit.
  • the computer 12 can also be coupled to a communications network 21 to permit direct transmission of data to locations remote from computer 12.
  • the remote sensor device 10 is mounted to the aircraft and generally includes an optical system 30 and a detector 32 as shown in FIG. 2.
  • the remote sensor device can be mounted on a cargo door, in a hole in the floor of the aircraft, under the nose or wing of the plane, or in a pod that is attached beneath the aircraft.
  • the optical system 30 can include a lens 34, an aperture 36 and filter 38 to redirect or focus the energy onto the detector 32.
  • the detector 32 senses energy and generates an emission of electrons that are collected and counted as a signal.
  • the signal is carried to computer 12 that outputs a signal that is used in making images or is analyzed by a computer program. The magnitude of the output signal is proportional to the intensity of the sensed energy.
  • the remote sensor device 10 can either be a passive or active sensor device. In a passive sensor device, energy comes from an external source. In contrast, an active sensor device generates energy within the sensor system, beams the energy outward, and the fraction of energy returned is measured.
  • the remote sensor device can be either an imaging or non-imaging device. Imaging devices use the measured energy related to a specific point in the target area to excite a substance, like silver in film, or to drive an image-producing device like a monitor, to produce an image or a display. Non-imaging devices measure the energy from all points in the target area to generate an electrical strength signal.
  • the remote sensor device 10 includes a charge-coupled device or CCD.
  • A CCD is an extremely small silicon chip that is light-sensitive. When energy strikes a CCD, electronic charges develop whose magnitudes are proportional to the intensity of the impinging energy during a short time interval (exposure time). The number of detector elements per unit length, along with the optical system, determines the spatial resolution. Using integrated circuits, each linear array is sampled very rapidly in sequence to produce an electrical signal that varies with the radiation striking the array. This changing signal recording goes through a signal processor, then to a recorder, and finally is used to drive an electro-optical device to make a black and white image.
  • the remote sensor device includes a 3-dimensional sensor device such as LIDAR.
  • LIDAR is similar to the more familiar radar and can be thought of as laser radar. In radar, radio waves are transmitted into the atmosphere, which scatters some of the energy back to the radar's receiver. LIDAR also transmits and receives electromagnetic radiation, but at a higher frequency, since it operates in the ultraviolet, visible and infrared regions of the electromagnetic spectrum. In operation, LIDAR transmits light out to a target area.
  • The transmitted light interacts with and is changed by the target area. Some of this light is reflected/scattered back to the LIDAR instrument, where it can be analyzed. The change in the properties of the light enables some property of the target area to be determined. The time for the light to travel out to the target area and back to the LIDAR device is used to determine the range to the target.
  • There are presently three basic types of LIDAR: range finders, Differential Absorption LIDAR (DIAL) and Doppler LIDAR.
  • Range finder LIDAR is the simplest LIDAR and is used to measure the distance from the LIDAR device to a solid or hard target.
  • DIAL LIDAR is used to measure chemical concentrations (such as ozone, water vapor, pollutants) in the atmosphere.
  • A DIAL LIDAR uses two different laser wavelengths that are selected so that one of the wavelengths is absorbed by the molecule of interest while the other wavelength is not. The difference in intensity of the two return signals can be used to deduce the concentration of the molecule being investigated.
  • Doppler LIDAR is used to measure the velocity of a target.
  • When the light transmitted from the LIDAR hits a target moving towards or away from the LIDAR, the wavelength of the light reflected/scattered off the target is changed slightly. This is known as a Doppler shift, hence the name Doppler LIDAR. If the target is moving away from the LIDAR, the return light will have a longer wavelength (sometimes referred to as a red shift); if it is moving towards the LIDAR, the return light will be at a shorter wavelength (blue shifted).
  • The target can be either a hard target or an atmospheric target (e.g. microscopic dust and aerosol particles that are carried by the wind).
  • A simplified block diagram of a LIDAR is shown in FIG. 3 and includes a transmitter 40, a receiver 42 and a detector 44.
  • the transmitter 40 is a laser, while its receiver 42 is an optical telescope. Different kinds of lasers are used depending on the power and wavelength required.
  • The laser may be either continuous-wave or pulsed.
  • Gain media for the lasers include gases (e.g. helium-neon or xenon fluoride), solid-state diodes, dyes and crystals (e.g. neodymium:yttrium aluminum garnet, Nd:YAG).
  • the receiver 42 records the scattered light received by the receiver at fixed time intervals.
  • Detector 44 is usually an extremely sensitive detector, such as photo-multiplier tubes, that can detect backscattered light.
  • Photo-multiplier tubes first convert the individual quanta of light/photons into electric currents that are subsequently turned into digital photocounts that can be stored and processed on a computer.
  • the photocounts received are recorded for fixed time intervals during the return pulse.
  • the times are then converted to heights called range bins since the speed of light is well known.
  • the range-gated photocounts can then be stored and analyzed by a computer.
  • Computer 12 can comprise an industry-standard PCI single-board computer having a processor and board slots to handle the system's I/O functions.
  • The IP boards can provide analog-to-digital, digital-to-analog and discrete digital I/O functions.
  • the IP boards are adapted to receive and store data from remote sensor device 10, attitude sensing device 16 and positioning device 14.
  • computer 12 is adapted to perform stereo imaging techniques on collected data from the target area in order to calibrate remote sensor device 10.
  • Positioning device 14 can include a kinematic, post-processed GPS unit, the unit comprising a GPS system antenna connected to a GPS receiver that is part of computer 12. The GPS receiver periodically generates a set of geophysical coordinate and velocity data representative of the position of remote sensor device 10. The set of geophysical coordinate data and velocity data can be directed to computer 12 for processing and storing.
  • Attitude sensing device 16 can include an inertial measurement unit (IMU) that provides attitude data to computer 12 representative of a set of measured angles.
  • the IMU generally senses change in velocity and rotation rate of the aircraft or remote sensor device, depending on where it is attached, in three coordinate axes.
  • the IMU data obtained is used to determine the roll angle, the pitch angle and yaw angle.
  • a memory unit 18 can also be connected to computer 12 to store remote sensing data and geographic data. Memory unit 18 contains sufficient storage volume to store and transfer remote sensing data and geographic data for the system.
  • the navigation guidance system 20 can include a display console that presents to the pilot the current aircraft position relative to the planned flight lines in the target area of interest. A cross-hair can also be displayed to show whether the aircraft is staying on line at the planned altitude.
  • the method of the present invention provides a method of calibrating a remote sensing system employed in an aircraft or other vehicle using remote sensing data collected during conventional operation. The method includes mounting at least one remote sensor 10 on a vehicle and moving the vehicle in a self-locking pattern 46 over a target area 58.
  • the movement may comprise any pattern that produces at least three substantially parallel travel lines out of a group of three or more lines. Further, at least one of the travel lines should be in an opposing direction to the other substantially parallel travel lines. In other words, out of any group of travel lines, some of which may not be parallel, at least three of the travel lines are parallel. Further, in the most preferred embodiment of the invention, all the travel lines are parallel. In one preferred embodiment of the invention, the travel pattern comprises at least one pair of parallel travel lines in a matching direction and at least one pair of travel lines in an opposing direction. Swath widths 59, as described below, are generated for each substantially parallel travel line with the remote sensor device.
  • Remote sensing data is collected from the target area during vehicle movement and input into a computer 12 to calculate calibration data.
  • the calibration data is applied to the remote sensing data to remove bias in image output.
  • the vehicle used in the present invention may be an airplane, helicopter, satellite, truck or other transport device.
  • the preferred method of the present invention utilizes an aircraft 61 and a self-locking flight pattern 46 that includes a number of flight lines used to obtain the images from the target area, as depicted in FIG. 4.
  • the number of flight lines required to cover a target area can vary depending on the area of interest. It is not always possible or required to have an even number of flight lines.
  • the pattern over the target area includes pairs of adjacent flight lines oriented so that one flight line is up and the other flight line is down or both flight lines are oriented in the same direction.
  • any two adjacent flight lines can form a pair of flight lines in either an opposing or matching direction.
  • the self-locking flight pattern 46 as depicted in FIG. 4 can further include right and left outermost flight lines 47 with a number of inner parallel flight lines 48-51.
  • the flight lines 47-51 can be divided into pairs of adjacent flight lines in a way so that both flight lines of each pair are in the same direction, to form a double-up, double-down pattern.
  • pair 54, including flight lines 49 and 50, is in the opposite direction to its neighboring pairs of flight lines: flight lines 47 and 48, and flight lines 47 and 51.
  • the self-locking flight pattern 46 allows each flight line in the pattern to have one adjacent flight line oriented in the same or matching direction and the other adjacent flight line to be in an opposite crossing direction over the target area.
  • the right and left outermost flight lines 47 are not part of the target area of interest but provide uniformity for the inner flight lines and therefore have only one inner adjacent flight line.
  • As the remote sensor device 10 moves along the self-locking flight pattern 46, it gathers data. In doing so, it generates swath widths 59, where the remote sensor device 10 scans a path covering an area to the sides of a flight line. Because the flight lines are parallel to one another, these swath widths 59 overlap. These overlapping swath width areas can be used to calibrate remote sensor device 10 through the along-track and cross-track parallax of images in adjacent flight lines, using stereo imaging techniques, as will be described below.
  • the swath widths 59 are determined by the remote sensor device's field of view and can be varied as desired to obtain the optimum width for use in this method.
  • the remote sensor device 10 can be mounted onto aircraft 61 such that a portion of target area 58 is imaged onto a two-dimensional array 60 whose linear axis defines an image plane 62.
  • An image coordinate system 64 of image plane 62 consists of a forward axis 66 or "x"-axis, a "y"-axis 68 and a "z"-axis 70 having an origin located at the center of array 60.
  • the x-axis 66 is the axis parallel to a linear axis of array 60 and is in the same general direction as the forward flight motion.
  • the y-axis 68 lies on image plane 62 and is perpendicular to x-axis 66 while the z-axis 70 is perpendicular to image plane 62.
  • the set of three world axes include a vertical axis 80, a forward flight axis 82 and a cross-track axis 84.
  • the vertical axis 80 is defined by gravity
  • the forward flight axis 82 is the vector projection of an instantaneous velocity of aircraft 61 in the x-y plane of the image coordinate system 64
  • the cross-track axis 84 is defined by a cross-section between the y-z plane of the image coordinate system 64 and a horizontal plane perpendicular to the vertical axis 80.
  • the three attitude parameters are a roll angle 87 (omega), a pitch angle 88 (phi), and a yaw angle 89 (kappa).
  • the pitch angle 88 is the angle between the x-axis 66 of the image plane 62 and a horizontal axis perpendicular to the vertical axis 80 and lies in the x-z plane of the image coordinate system 64.
  • the roll angle 87 is the angle between the y-axis 68 of the image plane 62 and the cross-track axis 84; while the yaw angle 89 is the angle between the x-axis 66 of the image plane 62 and the forward flight axis 82.
  • However, this plane does not coincide with image plane 62, whose attitude parameters are required to process sensor data.
  • the boresight angles are the angles such that a series of rotations based on such angles will make the image plane coincide with the IMU reference plane.
  • the order of rotations is roll, pitch and yaw. Once the roll, pitch and yaw angles are determined, the attitude of the image plane is readily available by combining the attitude of the IMU reference plane with these angles.
  • the method of the present invention uses a 3-dimensional remote sensor device.
  • Although a 3-dimensional remote sensor device is used in this embodiment, a 2-dimensional device can also be used in the calibration method of the present invention.
  • the data are processed using an initial set of assumed roll, pitch and yaw angles that can be obtained from prior calibrations or simply set to zeroes if no calibration data are available.
  • objects in images will be shifted from their true geographical locations.
  • a new set of roll, pitch and yaw angles are derived.
  • the process is then iterated until the values converge. Usually, only two or three iterations are required.
  • the yaw and pitch angles are determined from along-track parallax ("x" parallax) of objects in the overlapping swath width areas of adjacent flight lines.
  • the yaw angle is determined using pairs of adjacent flight lines oriented in the same direction or matching pairs.
  • the pitch angle in contrast is determined using pairs of adjacent flight lines going in opposite directions or crossing pairs.
  • the roll angle and the range offset in comparison are determined using cross-track parallax ("y" parallax) of objects in the overlapping swath width areas of adjacent flight lines.
  • the roll angle is determined using crossing pairs of adjacent flight lines, whereas the range offset is determined by matching pairs of adjacent flight lines.
  • For the yaw bias, which is a rotation about the z-axis, objects are rotated about the center of the image plane. In FIG. 8, assuming there is a counter-clockwise bias, images rotate clockwise.
  • object 90 in FIG. 8 in the overlapping swath width area is shifted forward to position 91 during flight line 97.
  • object 90 in the overlapping swath width area is shifted backward to position 92 during flight line 98 since object 90 is to the left of flight line 98.
  • If d is the along-track parallax ("x" parallax) of a point, with a positive value meaning a forward displacement for objects to the right of a flight line and a backward displacement for objects to the left, and a positive yaw angle is one where the image plane has to be rotated counter-clockwise to coincide with the IMU reference plane, then for objects located in the overlapping swath width areas and for small yaw angles (which is almost always the case):

    d = 2 * AO * sin(yaw angle)

    where A is the midpoint of the line segment connecting 91 and 92, and O is the nadir point. (A numeric sketch of these computations appears after this list.)
  • the overlapping swath width area for each matching pair of flight lines of the flight pattern is compared in determining the yaw angle.
  • the yaw angle can be computed for each object in the overlapping swath width area of the matching pair of flight lines then averaged to yield the yaw angle for the matching pair of flight lines.
  • the yaw angles of all matching pairs of flight lines are then averaged to yield the final yaw angle for the flight pattern.
  • the matching of objects in the overlapping swath width areas can be performed either manually or automatically using pattern matching software.
  • a pitch angle is a rotation about the y-axis.
  • a positive pitch angle is defined to be one where the forward edge of the image plane is tilted upward.
  • a pitch angle is computed in the present invention using crossing pairs of adjacent flight lines.
  • a pitch angle creates x parallax.
  • a positive angle shifts object images backward in the final output image.
  • object 90 in the overlapping swath width area during flight line 98 is shifted backward to 91, while during flight line 99 it is shifted backward to 92. Since the flight lines are in an opposite crossing direction, the shifts in position of objects in the overlapping swath width area will also be in opposite directions, creating the x parallax. If h is the altitude above ground of the center of the image plane, and d is the x parallax (i.e. the separation between 91 and 92), the pitch angle can be computed from these d and h values.
  • the pitch angle is computed for each object in the overlapping swath width area of the crossing pair of flight lines then averaged to yield the pitch angle for the crossing pair of flight lines.
  • the pitch angles of all crossing pairs of flight lines are then averaged to yield the final pitch angle for the flight pattern.
  • The yaw angle causes approximately the same along-track shift in the same direction in both flight lines of a crossing pair. Therefore, the yaw angle does not affect the determination of the pitch angle.
  • the roll angle is computed using crossing pairs of flight lines. The roll angle is a rotation about the x-axis, the axis that is in the direction of flight.
  • a positive roll angle is one where the image plane is tilted downward to the right, causing objects in the final output image to be shifted to the right.
  • object 90 during flight line 98 will be shifted to the right to position 91 while during flight line 99, object 90 will be shifted to position 92.
  • Since flight lines 98 and 99 are in opposite directions, the shifts for each flight line will also be in opposite directions, creating a separation between 91 and 92 in the cross-track direction, or y parallax. If d is the separation between 91 and 92 (or the y parallax), the following sign convention is used:
  • (i) d is positive (+) if 91 is farther from flight line 98 than 92 is from flight line 99 (in other words, the points 91 and 92 cross over each other); (ii) d is negative (-) if 91 is closer to flight line 98 than 92 is from flight line 99.
  • the roll angle can then be computed by determining the angle that would minimize the residual y parallax, where dr is equal to the average value of d.
  • the overlapping swath width area for each crossing pair of flight lines of the flight pattern is compared in determining the roll angle.
  • the roll angle is computed for each object in the overlapping swath width area of the crossing pair of flight lines then averaged to yield the roll angle for the crossing pair of flight lines.
  • the roll angles of all crossing pairs of flight lines are then averaged to yield the final roll angle for the flight pattern.
  • the range offset can then next be computed.
  • The range offset, like the roll angle, can also be determined by cross-track parallax of objects in the overlapping swath width areas. However, the range offset is determined using matching pairs of flight lines. Because the roll angle causes the same parallax shift for both flight lines of a matching pair, it does not affect the computation of the range offset.
  • The range offset is a characteristic of an active sensor, such as LIDAR. It causes objects to appear below ground truth, creating positive y parallax.
  • the range offset 103, as depicted in FIG. 11, can be approximated by:

    range offset = (d / 2) / tan(c)

    where c is the average incident angle, c = (c1 + c2) / 2.
  • The method above further includes determining an estimated horizontal displacement error and an estimated vertical displacement error of the remote sensing system.
  • a first object having ground height may not be at the same position in the overlapping swath width area due to a horizontal displacement error (Eh).
  • The Eh values of the remaining objects in this overlapping swath width area, as well as in other overlapping swath width areas, are determined in a similar fashion, and an estimated horizontal displacement error is computed by taking the standard deviation of the Eh values.
  • the estimated vertical displacement error is based on the y parallax of objects in an overlapping swath width area of matching pairs of flight lines.
  • A range offset error causes the data to be below ground truth and the images to be moved away from their respective swath centerlines. Therefore, an overlapping swath width area can be used to determine a vertical displacement error for a first object having a discrepancy between the ground truth and its data values using the stereo imaging technique.
  • the estimated vertical error for the remote sensing system is then determined by computing the standard deviation of a sampling of vertical displacement errors for the same number of objects in each flight line such that the objects are spread evenly along each flight line.
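
The computations described above reduce to a few trigonometric relations. The following minimal Python sketch illustrates them. The function names and interfaces are illustrative assumptions; the pitch and roll relations use a flat-terrain, small-angle model (parallax d = 2 * h * tan(angle)) that is consistent with, but not stated verbatim in, the text; the yaw relation d = 2 * AO * sin(yaw), the range-offset formula and the standard-deviation error estimates come directly from the passages above.

    import math
    import statistics

    def yaw_from_parallax(d, ao):
        # Yaw for one object in the overlap of a *matching* pair of lines,
        # from d = 2 * AO * sin(yaw); A is the midpoint of the two image
        # positions, O the nadir point, ao the distance between A and O.
        return math.asin(d / (2.0 * ao))

    def pitch_from_parallax(d, h):
        # Pitch for one object in the overlap of a *crossing* pair; assumes
        # each line shifts the image by h * tan(pitch), so d = 2 * h * tan(pitch).
        return math.atan(d / (2.0 * h))

    def roll_from_parallax(dr, h):
        # Roll from the average cross-track ("y") parallax dr of a crossing
        # pair, under the same assumed small-angle, flat-terrain model.
        return math.atan(dr / (2.0 * h))

    def range_offset(d, c1, c2):
        # Range offset from the y parallax d of a matching pair; c1 and c2
        # are the incident angles of the two views: offset = (d/2) / tan(c).
        c = 0.5 * (c1 + c2)
        return (d / 2.0) / math.tan(c)

    def final_angle(per_pair_estimates):
        # Average per-object estimates within each pair, then across pairs.
        return statistics.mean(statistics.mean(pair) for pair in per_pair_estimates)

    def estimated_error(displacement_errors):
        # Estimated displacement error: standard deviation of sampled errors.
        return statistics.stdev(displacement_errors)

As the text describes, each angle is computed per object, averaged within its pair of flight lines, then averaged across all pairs, and the whole procedure is iterated (typically two or three times) until the values converge.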

Abstract

The present invention provides a method to calibrate an on-board remote sensing system using a self-locking travel pattern and target remote sensing data. The self-locking travel pattern includes a number of parallel travel lines having overlapping swath widths between adjacent travel lines. The overlapping swath widths are used to determine the boresight angles and range offset of the remote sensor device. In addition, the method can be used to generate estimated horizontal and vertical displacement errors. These estimated errors can be used as correction factors for the range offset and boresight angles.

Description

A METHOD OF USING A SELF-LOCKING TRAVEL PATTERN TO ACHIEVE CALIBRATION OF REMOTE SENSORS USING CONVENTIONALLY
COLLECTED DATA
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to the following United States
Patent Application, Serial No. 10/244,980, filed September 17, 2002.
FIELD OF THE INVENTION [0002] This invention relates generally to the field of imaging using remote sensors. More specifically, this invention relates to a method of calibrating a vehicle- mounted remote sensor device using remote sensing data collected during conventional operation of the vehicle.
BACKGROUND OF THE INVENTION [0003] Remote sensing involves the acquisition of information or data about a distant object or system without being in physical contact with it. Most remote sensing instruments are designed to analyze the characteristics of the electromagnetic spectra reflected by objects (their spectral signatures) to allow one to determine some of the objects' properties. Human vision uses the same principle when using color (the sensation produced when light of different wavelengths falls on the human eye) to identify objects. The sensors used in remote sensing, however, make it possible to broaden the field of analysis to include parts of the electromagnetic spectrum that are well beyond visible light, such as the ultraviolet (<0.3 μm), visible (0.4-0.7 μm), near-infrared (0.7-1.5 μm) and thermal infrared (up to 1000 μm, or 1 mm) ranges. [0004] Today, remote sensing technology is used in a variety of applications in fields such as hydrology, geology, environment, transportation, ecology, and earthquake engineering. One particular application involves airborne imaging, where remote sensors are placed on-board aircraft to make observations and images of the Earth. These airborne remote sensor systems generally use either a mechanical scanning technique or a linear array, along with aircraft motion, to acquire terrestrial imagery.
[0005] One drawback to using current airborne imaging techniques is the inferior geometric fidelity in image quality, since the two-dimensional spatial images captured by the remote sensors are not acquired at the same instant. During airborne imaging, each image scene that is collected from a target area consists of a two-dimensional grid of discrete cells, each of which is referred to as a pixel. For scanning sensors, adjacent pixels are acquired at different times, while for linear array sensors, adjacent rows of pixels are acquired at different times. Attitude data, meanwhile, are sampled once per scan revolution. Consequently, any change in the direction of the aircraft's velocity or attitude results in geometric distortions for different regions within the two-dimensional image. Also, sufficient information is not available to obtain accurate records of the sensor's location or its attitude parameters at the appropriate instant. Therefore, the collected data requires sophisticated and expensive post-mission processing to improve upon the geometric fidelity and to achieve a positioning accuracy that meets the user's requirements.
[0006] Another drawback to current airborne imaging is that the remote sensors mounted to the aircraft have to be calibrated in order to accurately obtain the absolute geophysical coordinates of the remote sensing data. During normal operation, the remote sensing data acquired during the flight must be transferred from the original mission medium to a working medium. The remote sensing data is then processed in a centrally located data processing center before it is distributed to end users. To obtain the desired level of accuracy on the absolute geophysical coordinates, each user has to perform additional image processing. This includes sophisticated and extensive ground processing and, in many cases, collecting supporting data on ground control points before the absolute geophysical coordinates of any feature in the terrestrial imagery can be obtained. No absolute geophysical coordinate information of terrestrial features in an image scene, accurate enough for medium- and large-scale mapping applications, is available on the original mission medium.
[0007] One method of calibrating a remote sensor is to place calibration targets on the target area that is to be sensed. Panels made of cloth have been used as calibration targets but are expensive, difficult to handle, require intensive effort to lay out in a field, are easily damaged, and usually must be gathered up after the calibration exposure is completed. In addition, deploying calibration targets incurs significant labor costs when sites are remote or when images must be acquired frequently. Another calibration target is described in U.S. Patent No. 6,191,851 (Kirkham et al.). Kirkham et al. discloses a calibration target that can be left in place adjacent to or in the field of interest to permit automatic calibration of the remote sensing system. However, the calibration target must still be deployed in or near the area to be imaged to provide the imagery characteristics of the target area in order to calibrate the data received by the remote sensor. [0008] Accordingly, there is a need in the art of remote sensor technology for an inexpensive calibration method that can provide optical and thermal imagery characteristics without having to perform multiple calibration flights or trips during airborne or vehicular imaging applications. The aspects for cost reduction include equipment and material costs, the mission execution and verification process, reduction of ground support tasks, and the efficiency and accessibility of deriving accurate position information from remotely sensed images.
SUMMARY OF THE INVENTION [0009] The present invention provides a method of calibrating a remote sensing system employed in a vehicle, such as an aircraft, using remote sensing data collected during conventional vehicle operation. The method includes mounting at least one remote sensor on a vehicle and moving the vehicle in a self-locking pattern over a target area, the movement comprising any pattern that produces at least three substantially parallel travel lines out of a group of three or more lines, at least one of which travel lines is in an opposing direction to the other substantially parallel travel lines. Swath widths are generated for each substantially parallel travel line with the remote sensor device. Remote sensing data is collected of the target area during vehicle movement, which is inputted into a computer to calculate calibration data. The calibration data is applied to the remote sensing data to remove bias in image output.
[0010] The present method further includes mounting at least one remote sensor device on an aircraft to generate remote sensing data of a target area below. The method uses a self-locking flight pattern having a number of parallel flight lines arranged so that an individual flight line has one adjacent flight line oriented in a matching direction and the other adjacent flight line oriented in an opposite or crossing direction. Additional flight lines outside the target area of interest can be added to the left and right boundary of the target area. These extra outer-boundary lines are not themselves required, but are present to ensure each flight line in the target area of interest has two adjacent lines. A computer is used in post-processing to determine the boresight angles and, if needed, the range offset of the remote sensor device using overlapped areas of adjacent parallel flight lines. The computed boresight angles and range offset can be applied to remove bias in the final image output. [0011] In addition, the present invention further includes a method to generate an estimated horizontal displacement error and vertical displacement error using parallax values found in the overlapped areas of adjacent flight lines. The estimated horizontal displacement error is the standard deviation of the horizontal displacement errors of a sample of objects having images separated by a certain distance in the overlapped areas. The vertical displacement error is the standard deviation of a sampling of vertical displacement errors, the sampling taken so that each flight line contributes the same number of objects spread evenly along the flight line. [0012] The present invention also provides a remote sensing system utilizing the above calibration method. The remote sensing system includes a remote sensor device, an aircraft flying in a self-locking flight pattern and a computer adapted to generate calibration data. BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts a block diagram of the on-board remote sensing system of a preferred embodiment of the present invention;
FIG. 2 depicts a remote sensor device of the system of the present invention; FIG. 3 depicts a simplified block diagram of a LIDAR remote sensor device in the preferred embodiment of the invention;
FIG. 4 depicts a self-locking flight pattern of a preferred embodiment of the present invention;
FIG. 5 depicts a three-axis coordinate system of the present invention; FIG. 6 depicts an image plane of the target area of the present invention;
FIG. 7 depicts a remote sensor device attached to a moving aircraft;
FIG. 8 depicts a self-locking flight pattern used to calculate the yaw angle;
FIG. 9 depicts a self-locking flight pattern used to calculate the roll angle;
FIG. 10 depicts a self-locking flight pattern used to calculate the pitch angle; and
FIG. 11 depicts a self-locking flight pattern used to calculate the range offset.
DETAILED DESCRIPTION
[0013] The present invention provides a method of calibrating remote sensors using remote sensing data collected during conventional operation of a vehicle. FIG. 1 depicts an aircraft on-board remote sensing system that can utilize a preferred embodiment of the method of the present invention. The on-board remote sensing system has at least one remote sensor device 10 that is designed to obtain data of a site flown over by an aircraft. The remote sensor device 10 is associated with a computer 12 suited to form, select and correct images of the site flown over. The computer 12 is connected to a positioning device 14 to allow continuous association of geographic data with the images acquired. The computer 12 can also be connected to an attitude- sensing device 16 whose indications allow readjustment of the images acquired according to the trajectory of the aircraft. The on-board system can further comprise a memory unit 18 and navigation guidance system 20 to provide immediate feedback to the pilot as to the position of the aircraft relative to the planned flight lines. This system receives position data from real-time positioning device 22 that can include a differential GPS unit. In addition, the computer 12 can also be coupled to a communications network 21 to permit direct transmission of data to locations remote from computer 12.
[0014] The remote sensor device 10 is mounted to the aircraft and generally includes an optical system 30 and a detector 32 as shown in FIG. 2. The remote sensor device can be mounted on a cargo door, in a hole in the floor of the aircraft, under the nose or wing of the plane, or in a pod that is attached beneath the aircraft. The optical system 30 can include a lens 34, an aperture 36 and filter 38 to redirect or focus the energy onto the detector 32. The detector 32 senses energy and generates an emission of electrons that are collected and counted as a signal. The signal is carried to computer 12 that outputs a signal that is used in making images or is analyzed by a computer program. The magnitude of the output signal is proportional to the intensity of the sensed energy. Therefore, changes in the output signal can be used to measure changes in sensed energy during a given time interval. [0015] The remote sensor device 10 can be either a passive or active sensor device. In a passive sensor device, energy comes from an external source. In contrast, an active sensor device generates energy within the sensor system, beams the energy outward, and the fraction of energy returned is measured. In addition, the remote sensor device can be either an imaging or non-imaging device. Imaging devices use the measured energy related to a specific point in the target area to excite a substance, like silver in film, or to drive an image-producing device like a monitor, to produce an image or a display. Non-imaging devices measure the energy from all points in the target area to generate an electrical strength signal. [0016] In one preferred embodiment, the remote sensor device 10 includes a charge-coupled device or CCD. A CCD is an extremely small silicon chip that is light-sensitive. When energy strikes a CCD, electronic charges develop whose magnitudes are proportional to the intensity of the impinging energy during a short time interval (exposure time). The number of detector elements per unit length, along with the optical system, determines the spatial resolution. Using integrated circuits, each linear array is sampled very rapidly in sequence to produce an electrical signal that varies with the radiation striking the array. This changing signal recording goes through a signal processor, then to a recorder, and finally is used to drive an electro-optical device to make a black and white image. After the instrument samples the data, the array discharges electronically fast enough to allow the next incoming radiation to be detected independently. Filters can be selected for wavelength intervals, each associated with a CCD array, in order to obtain multi-band sensing if desired. [0017] In another embodiment, the remote sensor device includes a 3-dimensional sensor device such as LIDAR. LIDAR is similar to the more familiar radar and can be thought of as laser radar. In radar, radio waves are transmitted into the atmosphere, which scatters some of the energy back to the radar's receiver. LIDAR also transmits and receives electromagnetic radiation, but at a higher frequency, since it operates in the ultraviolet, visible and infrared regions of the electromagnetic spectrum. In operation, LIDAR transmits light out to a target area. The transmitted light interacts with and is changed by the target area. Some of this light is reflected/scattered back to the LIDAR instrument, where it can be analyzed. The change in the properties of the light enables some property of the target area to be determined.
The time for the light to travel out to the target area and back to the LIDAR device is used to determine the range to the target.
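As a minimal illustration of this two-way time-of-flight relation, consider the following Python sketch (the function name and example values are illustrative, not taken from the patent):

    SPEED_OF_LIGHT = 299_792_458.0  # m/s

    def range_from_travel_time(round_trip_s):
        # The light covers the out-and-back path, so halve the product.
        return SPEED_OF_LIGHT * round_trip_s / 2.0

    print(range_from_travel_time(2.0e-6))  # a 2 microsecond echo: ~299.8 m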
[0018] There are presently three basic types of LIDAR: range finders, Differential Absorption LIDAR (DIAL) and Doppler LIDAR. Range finder LIDAR is the simplest LIDAR and is used to measure the distance from the LIDAR device to a solid or hard target. DIAL LIDAR is used to measure chemical concentrations (such as ozone, water vapor, pollutants) in the atmosphere. A DIAL LIDAR uses two different laser wavelengths that are selected so that one of the wavelengths is absorbed by the molecule of interest while the other wavelength is not. The difference in intensity of the two return signals can be used to deduce the concentration of the molecule being investigated. Doppler LIDAR is used to measure the velocity of a target. When the light transmitted from the LIDAR hits a target moving towards or away from the LIDAR, the wavelength of the light reflected/scattered off the target is changed slightly. This is known as a Doppler shift, hence the name Doppler LIDAR. If the target is moving away from the LIDAR, the return light will have a longer wavelength (sometimes referred to as a red shift); if it is moving towards the LIDAR, the return light will be at a shorter wavelength (blue shifted). The target can be either a hard target or an atmospheric target (e.g. microscopic dust and aerosol particles that are carried by the wind).
[0019] A simplified block diagram of a LIDAR is shown in FIG. 3 and includes a transmitter 40, a receiver 42 and a detector 44. The transmitter 40 is a laser, while its receiver 42 is an optical telescope. Different kinds of lasers are used depending on the power and wavelength required. The laser may be either continuous-wave or pulsed. Gain media for the lasers include gases (e.g. helium-neon or xenon fluoride), solid-state diodes, dyes and crystals (e.g. neodymium:yttrium aluminum garnet, Nd:YAG). The receiver 42 records the scattered light received by the receiver at fixed time intervals. Detector 44 is usually an extremely sensitive detector, such as photo-multiplier tubes, that can detect backscattered light. Photo-multiplier tubes first convert the individual quanta of light/photons into electric currents that are subsequently turned into digital photocounts that can be stored and processed on a computer. The photocounts received are recorded for fixed time intervals during the return pulse. The times are then converted to heights, called range bins, since the speed of light is well known. The range-gated photocounts can then be stored and analyzed by a computer.
[0020] Computer 12 can comprise an industry-standard PCI single-board computer having a processor and board slots to handle the system's I/O functions. The IP boards can provide analog-to-digital, digital-to-analog and discrete digital I/O functions. The IP boards are adapted to receive and store data from remote sensor device 10, attitude sensing device 16 and positioning device 14. In addition, computer 12 is adapted to perform stereo imaging techniques on collected data from the target area in order to calibrate remote sensor device 10. [0021] Positioning device 14 can include a kinematic, post-processed GPS unit, the unit comprising a GPS system antenna connected to a GPS receiver that is part of computer 12. The GPS receiver periodically generates a set of geophysical coordinate and velocity data representative of the position of remote sensor device 10. The set of geophysical coordinate data and velocity data can be directed to computer 12 for processing and storing.
[0022] Attitude sensing device 16 can include an inertial measurement unit
(IMU) to provide attitude data to computer 12 that is representative of a set of measured angles. The IMU generally senses change in velocity and rotation rate of the aircraft or remote sensor device, depending on where it is attached, in three coordinate axes. The IMU data obtained is used to determine the roll angle, the pitch angle and yaw angle.
[0023] A memory unit 18 can also be connected to computer 12 to store remote sensing data and geographic data. Memory unit 18 contains sufficient storage volume to store and transfer remote sensing data and geographic data for the system. [0024] The navigation guidance system 20 can include a display console that presents to the pilot the current aircraft position relative to the planned flight lines in the target area of interest. A cross-hair can also be displayed to show whether the aircraft is staying on line at the planned altitude. [0025] The method of the present invention provides a method of calibrating a remote sensing system employed in an aircraft or other vehicle using remote sensing data collected during conventional operation. The method includes mounting at least one remote sensor 10 on a vehicle and moving the vehicle in a self-locking pattern 46 over a target area 58. The movement may comprise any pattern that produces at least three substantially parallel travel lines out of a group of three or more lines. Further, at least one of the travel lines should be in an opposing direction to the other substantially parallel travel lines. In other words, out of any group of travel lines, some of which may not be parallel, at least three of the travel lines are parallel. Further, in the most preferred embodiment of the invention, all the travel lines are parallel. In one preferred embodiment of the invention, the travel pattern comprises at least one pair of parallel travel lines in a matching direction and at least one pair of travel lines in an opposing direction. [0026] Swath widths 59, as described below, are generated for each substantially parallel travel line with the remote sensor device. Remote sensing data is collected from the target area during vehicle movement and input into a computer 12 to calculate calibration data. The calibration data is applied to the remote sensing data to remove bias in image output. The vehicle used in the present invention may be an airplane, helicopter, satellite, truck or other transport device. [0027] The preferred method of the present invention utilizes an aircraft 61 and a self-locking flight pattern 46 that includes a number of flight lines used to obtain the images from the target area, as depicted in FIG. 4. The number of flight lines required to cover a target area can vary depending on the area of interest. It is not always possible or required to have an even number of flight lines. The pattern over the target area includes pairs of adjacent flight lines oriented so that one flight line is up and the other flight line is down, or both flight lines are oriented in the same direction. Thus, any two adjacent flight lines can form a pair of flight lines in either an opposing or matching direction.
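The "double-up, double-down" ordering of paragraph [0027] can be sketched in a few lines of Python. The function and its check are one illustrative way to realize the pattern, not the patent's specification:

    def self_locking_directions(n_lines):
        # +1 = "up", -1 = "down"; adjacent same-direction pairs:
        # up, up, down, down, up, up, ...
        return [1 if (i // 2) % 2 == 0 else -1 for i in range(n_lines)]

    dirs = self_locking_directions(6)   # [1, 1, -1, -1, 1, 1]
    for i in range(1, len(dirs) - 1):
        # Every inner line has exactly one matching and one crossing neighbor.
        assert (dirs[i] == dirs[i - 1]) != (dirs[i] == dirs[i + 1])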
[0028] The self-locking flight pattern 46 as depicted in FIG. 4 can further include right and left outermost flight lines 47 with a number of inner parallel flight lines 48-51. The flight lines 47-51 can be divided into pairs of adjacent flight lines in a way so that both flight lines of each pair are in the same direction, to form a double-up, double-down pattern. For example, pair 54, including flight lines 49 and 50, is in the opposite direction to its neighboring pairs of flight lines: flight lines 47 and 48, and flight lines 47 and 51. The self-locking flight pattern 46 allows each flight line in the pattern to have one adjacent flight line oriented in the same or matching direction and the other adjacent flight line to be in an opposite crossing direction over the target area. However, the right and left outermost flight lines 47 are not part of the target area of interest but provide uniformity for the inner flight lines and therefore have only one inner adjacent flight line.
[0029] As the remote sensor device 10 moves along the self-locking flight pattern 46, it gathers data. In doing so, it generates swath widths 59, where the remote sensor device 10 scans a path covering an area to the sides of a flight line. Because the flight lines are parallel to one another, these swath widths 59 overlap. These overlapping swath width areas can be used to calibrate remote sensor device 10 by measuring the along-track and cross-track parallax of images in adjacent flight lines with stereo imaging techniques, as will be described below. The swath widths 59 are determined by the remote sensor device's field of view and can be varied as desired to obtain the optimum width for use in this method.
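As a rough illustration of the swath geometry, the following sketch assumes flat terrain and a simple symmetric field of view; the function names and the example numbers are hypothetical.

```python
import math

# A minimal sketch of swath width and overlap geometry under a flat-terrain
# assumption; fov_deg, altitude and line spacing are hypothetical inputs.

def swath_width(altitude_m, fov_deg):
    """Ground width scanned to both sides of a flight line."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def overlap_width(altitude_m, fov_deg, line_spacing_m):
    """Width of the overlapping swath area between adjacent parallel lines."""
    return max(0.0, swath_width(altitude_m, fov_deg) - line_spacing_m)

# e.g., 1000 m altitude, 30-degree field of view, lines spaced 400 m apart
print(overlap_width(1000.0, 30.0, 400.0))  # about 135.9 m of overlap
```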
[0030] As depicted in FIGS. 5, 6 and 7, the remote sensor device 10 can be mounted onto aircraft 61 such that a portion of target area 58 is imaged onto a two-dimensional array 60 whose linear axis defines an image plane 62. An image coordinate system 64 of image plane 62 consists of a forward axis 66 or "x"-axis, a "y"-axis 68 and a "z"-axis 70, having an origin located at the center of array 60. The x-axis 66 is the axis parallel to a linear axis of array 60 and is in the same general direction as the forward flight motion. The y-axis 68 lies on image plane 62 and is perpendicular to x-axis 66, while the z-axis 70 is perpendicular to image plane 62.

[0031] The set of three world axes includes a vertical axis 80, a forward flight axis 82 and a cross-track axis 84. The vertical axis 80 is defined by gravity; the forward flight axis 82 is the vector projection of the instantaneous velocity of aircraft 61 onto the x-y plane of the image coordinate system 64; and the cross-track axis 84 is defined by the intersection of the y-z plane of the image coordinate system 64 with a horizontal plane perpendicular to the vertical axis 80. The three attitude parameters are a roll angle 87 (omega), a pitch angle 88 (phi), and a yaw angle 89 (kappa). The pitch angle 88 is the angle between the x-axis 66 of the image plane 62 and a horizontal axis perpendicular to the vertical axis 80, and lies in the x-z plane of the image coordinate system 64. The roll angle 87 is the angle between the y-axis 68 of the image plane 62 and the cross-track axis 84, while the yaw angle 89 is the angle between the x-axis 66 of the image plane 62 and the forward flight axis 82.

[0032] For an active sensor such as LIDAR, light pulses are emitted and their reflected signals captured. The position of the reflecting object is determined by the angles of the incoming light signals and the travel time (i.e., the time from when a pulse is emitted until its echo is received). However, this time can be biased by propagation delay internal to the LIDAR device. If this delay were not considered, the range (i.e., the distance from the LIDAR device to the reflecting object) would be over-estimated. The range offset, computed by multiplying the propagation delay by the speed of light, must be calibrated to remove this bias.

[0033] Additionally, during operation the IMU unit constantly records the attitude of its own reference plane. However, this plane does not coincide with image plane 62, whose attitude parameters are required to process sensor data. The boresight angles are the angles such that a series of rotations through those angles will make the image plane coincide with the IMU reference plane. By convention, the order of rotations is roll, pitch and yaw. Once the roll, pitch and yaw angles are determined, the attitude of the image plane is readily available by combining the attitude of the IMU reference plane with these angles.
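The boresight correction in paragraph [0033] can be sketched as a composition of rotation matrices. The matrix conventions below (right-handed axes, rotations applied in roll, pitch, yaw order) are one common choice and are an assumption, not a detail taken from the patent.

```python
import numpy as np

# A sketch of combining the IMU reference-plane attitude with boresight
# angles by composing rotations in roll, pitch, yaw order. All names are
# hypothetical; the axis and sign conventions are assumptions.

def rotation(roll, pitch, yaw):
    """Rotation matrix for successive roll (x), pitch (y), yaw (z) rotations."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx  # applied in roll, pitch, yaw order

def image_plane_attitude(R_imu, boresight_angles):
    """Attitude of the image plane: IMU attitude combined with the
    boresight rotation, per the convention stated above."""
    return R_imu @ rotation(*boresight_angles)
```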
[0034] In one embodiment, the method of the present invention uses a three-dimensional remote sensor device. Although a three-dimensional remote sensor device is used in this embodiment, a two-dimensional device can also be used in the calibration method of the present invention.
[0035] First, the data are processed using an initial set of assumed roll, pitch and yaw angles, which can be obtained from prior calibrations or simply set to zero if no calibration data are available. As a result of processing with bias, objects in the images will be shifted from their true geographic locations. Using the algorithms described below, a new set of roll, pitch and yaw angles is derived. The process is then iterated until the values converge; usually, only two or three iterations are required.

[0036] The yaw and pitch angles are determined from the along-track parallax ("x" parallax) of objects in the overlapping swath width areas of adjacent flight lines. The yaw angle is determined using pairs of adjacent flight lines oriented in the same direction, or matching pairs. The pitch angle, in contrast, is determined using pairs of adjacent flight lines going in opposite directions, or crossing pairs.

[0037] The roll angle and the range offset, in comparison, are determined using the cross-track parallax ("y" parallax) of objects in the overlapping swath width areas of adjacent flight lines. The roll angle is determined using crossing pairs of adjacent flight lines, whereas the range offset is determined using matching pairs of adjacent flight lines.

[0038] Because of the yaw bias, which is a rotation about the z-axis, objects are rotated about the center of the image plane. In FIG. 8, assuming there is a counterclockwise bias, images rotate clockwise. For example, if the flight direction is up, object 90 in FIG. 8 in the overlapping swath width area is shifted forward to position 91 during flight line 97. In comparison, object 90 in the overlapping swath width area is shifted backward to position 92 during flight line 98, since object 90 is to the left of flight line 98. Let d be the along-track parallax ("x" parallax) of a point, taken as positive for a forward displacement of objects to the right of a flight line and a backward displacement of objects to the left, and let a positive yaw angle be one where the image plane must be rotated counterclockwise to coincide with the IMU reference plane. Then, for objects located in the overlapping swath width areas and for small yaw angles (which is almost always the case), the following formula holds:

d = 2 * AO * sin(yaw angle)

where:
A = the midpoint of the line segment connecting 91 and 92
O = the nadir point
[0039] Conversely, if d and AO can be measured, then the yaw angle can be determined by:

yaw angle = arcsin[(d/2)/(AO)]

[0040] The overlapping swath width area for each matching pair of flight lines of the flight pattern is compared in determining the yaw angle. The yaw angle can be computed for each object in the overlapping swath width area of the matching pair of flight lines and then averaged to yield the yaw angle for the matching pair of flight lines. The yaw angles of all matching pairs of flight lines are then averaged to yield the final yaw angle for the flight pattern. The matching of objects in the overlapping swath width areas can be performed either manually or automatically using pattern matching software.
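A minimal sketch of this yaw estimate, assuming matched objects are already available as (d, AO) measurements for each matching pair; the names and input format are hypothetical.

```python
import math

# Sketch of the yaw estimate from paragraphs [0039]-[0040]: for each matched
# object in the overlap of a matching pair, yaw = arcsin((d/2)/AO); the
# per-object values are averaged per pair, then across all matching pairs.

def yaw_for_pair(measurements):
    """measurements: list of (d, AO) tuples, where d is the signed x parallax
    and AO is the distance from nadir point O to midpoint A."""
    angles = [math.asin((d / 2.0) / ao) for d, ao in measurements]
    return sum(angles) / len(angles)

def yaw_for_pattern(matching_pairs):
    """Average the per-pair yaw angles over all matching pairs."""
    per_pair = [yaw_for_pair(m) for m in matching_pairs]
    return sum(per_pair) / len(per_pair)
```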
[0041] Next, the pitch angle is determined. A pitch angle is a rotation about the y-axis. A positive pitch angle is defined to be one where the forward edge of the image plane is tilted upward. A pitch angle is computed in the present invention using crossing pairs of adjacent flight lines.
[0042] A pitch angle creates x parallax. A positive pitch angle shifts object images backward in the final output image. Consider the pair of crossing flight lines 98 and 99 in FIG. 9. Assuming there is a positive pitch, object 90 in the overlapping swath width area is shifted backward to 91 during flight line 98, while during flight line 99 it is shifted backward to 92. Since the flight lines are in an opposite, crossing direction, the shifts in position of objects in the overlapping swath width area will also be in opposite directions, creating the x parallax. If h is the altitude above ground of the center of the image plane, and d is the x parallax (i.e., the length of the line segment connecting 91 and 92), the pitch angle can be determined by:

pitch angle = arctan[(d/2)/h]

where d is positive if the vector from 91 to 92 points in the same direction as flight line 98.

[0043] Flight GPS data and general elevation data for the area of interest (such as United States Geological Survey data) can be used in determining h. The general elevation data for the target area of interest does not have to be exact for the algorithm of the present invention to function properly, and thus can be estimated.

[0044] The overlapping swath width area for each crossing pair of flight lines of the flight pattern is compared in determining the pitch angle. The pitch angle is computed for each object in the overlapping swath width area of the crossing pair of flight lines and then averaged to yield the pitch angle for the crossing pair of flight lines. The pitch angles of all crossing pairs of flight lines are then averaged to yield the final pitch angle for the flight pattern.

[0045] Note that the yaw angle causes approximately the same along-track shift in the same direction in both flight lines of a crossing pair. Therefore, the yaw angle does not affect the determination of the pitch angle.

[0046] Next, the roll angle is computed using crossing pairs of flight lines. The roll angle is a rotation about the x-axis, the axis that is in the direction of flight. A positive roll angle is one where the image plane is tilted downward to the right, causing objects in the final output image to be shifted to the right. Considering crossing flight lines 98 and 99 in FIG. 10 having an object 90 in the overlapping swath width area, and assuming that there is a positive roll, object 90 will be shifted to the right to position 91 during flight line 98, while during flight line 99 object 90 will be shifted to position 92. Since flight lines 98 and 99 are in opposite directions, the shifts for each flight line will also be in opposite directions, creating a separation between 91 and 92 in the cross-track direction, or y parallax. If d is the separation between 91 and 92 (i.e., the y parallax), the following sign convention is used:
(i) d is positive (+) if 91 is farther from flight line 98 than 92 is from flight line 99 (in other words, points 91 and 92 cross over each other);

(ii) d is negative (−) if 91 is closer to flight line 98 than 92 is to flight line 99.

The roll angle can then be computed by determining the angle that minimizes the expression:

∑(d − dr)²

where:
dr = the displacement caused by the roll angle

Using least-squares error theory, dr is equal to the average value of d:

dr = dave = (1/n) ∑ di, summed over i = 1 to n

where:
n = the number of matching objects in the overlapped area

If h is again the altitude above ground of the center of the image plane, a roll angle 87 that would effect the cross-track adjustment dave can be approximated by (note that each flight line contributes half of the adjustment):

tan(roll angle) = tan(a − b) = (tan(a) − tan(b))/(1 + tan(a)*tan(b))

where:
tan(a) = 101/h
tan(b) = 102/h

with 101 and 102 being the cross-track distances depicted in FIG. 10.

[0047] The overlapping swath width area for each crossing pair of flight lines of the flight pattern is compared in determining the roll angle. The roll angle is computed for each object in the overlapping swath width area of the crossing pair of flight lines and then averaged to yield the roll angle for the crossing pair of flight lines. The roll angles of all crossing pairs of flight lines are then averaged to yield the final roll angle for the flight pattern.
[0048] The range offset can be computed next. The range offset, like the roll angle, can be determined from the cross-track parallax of objects in the overlapping swath width areas. However, the range offset is determined using matching pairs of flight lines. Because the roll angle effects the same parallax shift for both flight lines of a matching pair, it does not affect the computation of the range offset.

[0049] The range offset is a characteristic of an active sensor, such as LIDAR. It causes objects to appear below ground truth, causing positive y parallax. The range offset 103, as depicted in FIG. 11, can be approximated by:
Range offset = (d/2)/tan(c)

where:
c is the average incident angle, (c1 + c2)/2

[0050] Once the yaw, pitch and roll angles and the range offset are determined, they can be applied to remove the bias in the final image output.

[0051] In another embodiment, the method above further includes determining an estimated horizontal displacement error and an estimated vertical displacement error of the remote sensing system. When two adjacent parallel flight lines are overlaid on one another, a first object at ground height may not be at the same position in the overlapping swath width area due to a horizontal displacement error (Eh). By measuring the parallax of the first object in the overlapping swath width area, Eh for the first object can be determined by:

Eh of first object = (measured distance)/2
The Eh values of the remaining objects in this overlapping swath width area, as well as in the other overlapping swath width areas, are determined in a similar fashion, and an estimated horizontal displacement error is computed by taking the standard deviation of the Eh values.

[0052] The estimated vertical displacement error is based on the y parallax of objects in an overlapping swath width area of matching pairs of flight lines. A range offset error causes the data to fall below ground truth and the images to be moved away from their respective swath centerlines. Therefore, an overlapping swath width area can be used to determine a vertical displacement error for a first object having a discrepancy between the ground truth and its data values using the stereo imaging technique. The estimated vertical error for the remote sensing system is then determined by computing the standard deviation of a sampling of vertical displacement errors for the same number of objects in each flight line, such that the objects are spread evenly along each flight line.
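Minimal sketches of the range-offset formula and the two error estimates, assuming per-object measurements have already been extracted from the overlap areas; all names and input formats are hypothetical.

```python
import math
import statistics

# Sketches of the range-offset estimate from paragraph [0049] and the
# displacement-error estimates from paragraphs [0051]-[0052].

def range_offset(y_parallax, c1, c2):
    """Range offset = (d/2)/tan(c), with c the average incident angle."""
    c = (c1 + c2) / 2.0
    return (y_parallax / 2.0) / math.tan(c)

def estimated_horizontal_error(parallax_distances):
    """Eh per object = measured parallax distance / 2; the system estimate
    is the standard deviation of the per-object values."""
    eh = [d / 2.0 for d in parallax_distances]
    return statistics.stdev(eh)

def estimated_vertical_error(vertical_displacements):
    """Standard deviation of a sampling of per-object vertical errors,
    with objects spread evenly along each flight line."""
    return statistics.stdev(vertical_displacements)
```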
[0053] Although various embodiments of the present invention have been described in detail above, it should be appreciated that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the invention.

Claims

WHAT IS CLAIMED IS:
1. A method of calibrating a remote sensor system comprising the steps of:
(a) mounting at least one remote sensor on a vehicle;
(b) moving the vehicle in a self-locking pattern over a target area, the movement comprising any pattern that produces at least three substantially parallel travel lines out of a group of three or more lines, at least one of which travel lines is in an opposing direction to the other substantially parallel travel lines;
(c) generating swath widths for each substantially parallel travel line with the remote sensor device;
(d) collecting remote sensing data of the target area during vehicle movement;
(e) inputting the remote sensing data into a computer to calculate calibration data; and
(f) applying the calibration data to the remote sensing data to remove bias in image output.
2. The method of claim 1, wherein the vehicle is an aircraft.
3. The method of claim 1, wherein the vehicle is a satellite.
4. The method of claim 2, wherein the travel pattern comprises at least one pair of parallel flight lines in a matching direction and at least one pair of parallel flight lines in an opposing direction.
5. The method of claim 2 wherein the remote sensor device is LIDAR.
6. The method of claim 1 wherein the yaw angle is calculated using the overlapping swath width areas of pairs of travel lines having matching direction.
7. The method of claim 1 wherein the pitch angle is calculated using the overlapping swath width areas of pairs of travel lines having opposing direction.
8. The method of claim 1 wherein the range offset is calculated using the overlapping swath width areas of pairs of travel lines having matching direction.
9. The method of claim 1 wherein the roll angle is calculated using the overlapping swath width areas of pairs of flight lines having crossing direction.
10. The method of claim 1 wherein the remote sensor device is a CCD.
11. A method of calibrating a remote sensor system for use in airborne imaging comprising the steps of:
(a) mounting at least one remote sensor device on an aircraft;
(b) flying the aircraft in a self-locking flying pattern over a target area, the self-locking flying pattern comprising any pattern that produces at least three substantially parallel flight lines out of a group of three or more lines, at least one of which flight lines is in an opposing direction to the other substantially parallel flight lines;
(c) generating swath widths between the adjacent substantially parallel flight lines with the remote sensor device such that the adjacent substantially parallel flight lines produce at least one overlapping swath width area;
(d) collecting remote sensing data of the target area in-flight;
(e) inputting the remote sensing data into a computer to calculate a yaw angle, a pitch angle, a range offset, and a roll angle; and
(f) applying the yaw angle, the pitch angle, the range offset, and the roll angle to remove bias in an image output.
12. The method of claim 11 wherein the flight lines are parallel.
13. The method of claim 11 wherein the self-locking flying pattern includes at least one pair of flight lines in a matching direction and at least one pair of flight lines in an opposing direction.
14. The method of claim 11 wherein the remote sensor device is LIDAR.
15. The method of claim 11 wherein the yaw angle is calculated using the overlapping swath width areas of pairs of flight lines having matching direction.
16. The method of claim 11 wherein the pitch angle is calculated using the overlapping swath width areas of pairs of flight lines having crossing direction.
17. The method of claim 11 wherein the range offset is calculated using the overlapping swath width areas of pairs of flight lines having matching direction.
18. The method of claim 11 wherein the roll angle is calculated using the overlapping swath width areas of pairs of flight lines having crossing direction.
19. A method of calibrating a remote sensor system for use in airborne imaging comprising the steps of: (a) mounting at least one remote sensor device on an aircraft; (b) flying the aircraft in a self-locking flying pattern over a target area, the self-locking flying pattern comprising adjacent substantially parallel flight lines having a right outermost flight line, a left outermost flight line and at least one inner flight line, the adjacent substantially parallel flight lines arranged so that the self-locking flying pattern has at least one pair of adjacent substantially parallel flight lines in a matching direction and at least one pair of adjacent substantially parallel flight lines in an opposing direction; (c) generating swath widths between the adjacent substantially parallel flight lines with the remote sensor device such that adjacent flight lines produce overlapping swath width areas; (d) collecting remote sensing data of the target area in-flight; (e) inputting the data images into a computer to calculate a yaw angle, a pitch angle, and a roll angle; and (f) applying the yaw angle, the pitch angle, and the roll angle to remove bias in an image output.
20. The method of claim 19 wherein the flight lines are parallel.
21. The method of claim 19 wherein the yaw angle is calculated using the overlapping swath width areas of pairs of adjacent substantially parallel flight lines having matching direction.
22. The method of claim 19 wherein the pitch angle is calculated using the overlapping swath width areas of pairs of adjacent substantially parallel flight lines having crossing direction.
23. The method of claim 19 wherein the roll angle is calculated using the overlapping swath width areas of pairs of adjacent substantially parallel flight lines having crossing direction.
24. A method of determining error in a remote sensing system for airborne imaging comprising the steps of: (a) mounting at least one remote sensor device on an aircraft; (b) flying the aircraft in a self-locking flying pattern over a target area, the self-locking flying pattern comprising adjacent substantially parallel flight lines arranged so that the self-locking flying pattern includes at least one pair of flight lines in a matching direction and at least one pair of flight lines in an opposing direction; (c) generating overlapping swath width areas between the adjacent substantially parallel flight lines with the remote sensor device; (d) collecting remote sensing data of the target area in-flight; (e) inputting the remote sensing data into a computer to generate an estimated horizontal displacement error and an estimated vertical displacement error using the swath widths; and (f) applying the horizontal displacement error and vertical displacement error to the remote sensing data to reduce the error in the remote sensing system.
25. An in-flight calibrated remote sensing system for use in airborne imaging comprising: (a) at least one remote sensor device mounted to an aircraft; (b) a self-locking flight pattern; and (c) a computer having means to compute a yaw angle, a pitch angle, a range offset, and a roll angle using remote sensing data collected by the remote sensor device.
26. The in-flight calibrated remote sensing system of claim 25 wherein the remote sensor device is a three dimensional remote sensor device.
27. The in-flight calibrated remote sensing system of claim 26 wherein the three dimensional remote sensor device is LIDAR.
28. The in-flight calibrated remote sensing system of claim 25 further comprising a positioning device, an attitude sensing device, a memory unit and a navigation guidance system.
29. The in-flight calibrated remote sensing system of claim 25 wherein the computer further comprises means to generate an estimated horizontal displacement error and an estimated vertical displacement error.
30. A self-locking flying pattern comprising substantially parallel flight lines arranged to form a double-up double-down pattern.
PCT/US2003/028727 2002-09-17 2003-09-12 A method of using a self-locking travel pattern to achieve calilbration of remote sensors using conventionally collected data WO2004027348A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2003278803A AU2003278803A1 (en) 2002-09-17 2003-09-12 A method of using a self-locking travel pattern to achieve calilbration of remote sensors using conventionally collected data
CA002534966A CA2534966A1 (en) 2002-09-17 2003-09-12 A method of using a self-locking travel pattern to achieve calilbration of remote sensors using conventionally collected data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/244,980 2002-09-17
US10/244,980 US7212938B2 (en) 2002-09-17 2002-09-17 Method of using a self-locking travel pattern to achieve calibration of remote sensors using conventionally collected data

Publications (2)

Publication Number Publication Date
WO2004027348A2 true WO2004027348A2 (en) 2004-04-01
WO2004027348A3 WO2004027348A3 (en) 2004-06-24

Family

ID=31992013

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/028727 WO2004027348A2 (en) 2002-09-17 2003-09-12 A method of using a self-locking travel pattern to achieve calilbration of remote sensors using conventionally collected data

Country Status (4)

Country Link
US (1) US7212938B2 (en)
AU (1) AU2003278803A1 (en)
CA (1) CA2534966A1 (en)
WO (1) WO2004027348A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3193136A1 (en) 2016-01-13 2017-07-19 Vito NV Method and system for geometric referencing of multi-spectral data
CN108572361A (en) * 2018-04-03 2018-09-25 深圳飞马机器人科技有限公司 Airborne laser radar system equipment integrates angle of setting calibration method and device
RU2695596C1 (en) * 2018-12-29 2019-07-24 федеральное государственное автономное образовательное учреждение высшего образования "Санкт-Петербургский политехнический университет Петра Великого" (ФГАОУ ВО "СПбПУ") Ice field photogrammetry method in ice basin

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8483960B2 (en) 2002-09-20 2013-07-09 Visual Intelligence, LP Self-calibrated, remote imaging and data processing system
USRE49105E1 (en) * 2002-09-20 2022-06-14 Vi Technologies, Llc Self-calibrated, remote imaging and data processing system
WO2006090368A1 (en) * 2005-02-22 2006-08-31 Israel Aerospace Industries Ltd. A calibration method and system for position measurements
US7860628B2 (en) 2005-06-09 2010-12-28 Trimble Navigation Limited System for guiding a farm implement between swaths
US7844378B2 (en) 2006-10-05 2010-11-30 Trimble Navigation Limited Farm apparatus having implement sidehill drift compensation
US8497905B2 (en) 2008-04-11 2013-07-30 nearmap australia pty ltd. Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features
US8675068B2 (en) * 2008-04-11 2014-03-18 Nearmap Australia Pty Ltd Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features
US8120644B2 (en) * 2009-02-17 2012-02-21 Autoliv Asp, Inc. Method and system for the dynamic calibration of stereovision cameras
CN101604017B (en) * 2009-07-16 2012-05-23 北京航空航天大学 Method for realizing remote sensing image simulation under given external orientation element
CN101839713B (en) * 2010-04-20 2011-08-17 武汉大学 Satellite image system error correction method based on bias matrix with time factor
US8552905B2 (en) 2011-02-25 2013-10-08 Raytheon Company Automated layout of beams
JP2014511155A (en) * 2011-03-31 2014-05-12 ビジュアル インテリジェンス,エルピー Self-calibrating remote imaging and data processing system
US9068886B2 (en) * 2011-07-29 2015-06-30 Raytheon Company Method and system for vicarious spatial characterization of a remote image sensor
FR2985307B1 (en) * 2012-01-03 2015-04-03 Centre Nat Etd Spatiales METHOD OF CALIBRATING THE BANDS OF ALIGNMENT OF AN EARTH OBSERVATION SYSTEM UTILIZING SYMMETRICAL SIGHTS
CN102607592B (en) * 2012-02-24 2014-11-26 北京大学 Remote sensing calibration comprehensive method and calibration equipment vehicle
US9823116B2 (en) * 2012-08-23 2017-11-21 Raytheon Company Geometric calibration of a remote sensor
CN103018736B (en) * 2012-12-03 2014-11-26 北京航空航天大学 Satellite-borne remote sensor radiation calibration method based on atmospheric parameter remote sensing retrieval
CN103530381B (en) * 2013-10-17 2016-08-17 宁波工程学院 A kind of parallel optimization method towards remote sensing image neighborhood processing
US9317041B2 (en) * 2014-01-21 2016-04-19 Sikorsky Aircraft Corporation Rotor moment feedback for stability augmentation
CN104236613A (en) * 2014-07-14 2014-12-24 北京理工大学 Portable monitoring, diagnosing and on-site verifying system for sensing devices on highway network
ES2927014T3 (en) * 2017-03-31 2022-11-02 A 3 by Airbus LLC Systems and methods for the calibration of sensors in vehicles

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4398195A (en) * 1979-07-02 1983-08-09 Del Norte Technology, Inc. Method of and apparatus for guiding agricultural aircraft
US4313678A (en) * 1979-09-24 1982-02-02 The United States Of America As Represented By The Secretary Of The Interior Automated satellite mapping system (MAPSAT)
US6204799B1 (en) * 1980-05-27 2001-03-20 William J. Caputi, Jr. Three dimensional bistatic imaging radar processing for independent transmitter and receiver flightpaths
US4583703A (en) * 1982-03-17 1986-04-22 The United States Of America As Represented By The Secretary Of The Army One fin orientation and stabilization device
US4712010A (en) * 1986-01-30 1987-12-08 Hughes Aircraft Company Radiator scanning with image enhancement and noise reduction
US4965572A (en) * 1988-06-10 1990-10-23 Turbulence Prediction Systems Method for producing a warning of the existence of low-level wind shear and aircraftborne system for performing same
US5013917A (en) * 1988-07-07 1991-05-07 Kaman Aerospace Corporation Imaging lidar system using non-visible light
US5166789A (en) * 1989-08-25 1992-11-24 Space Island Products & Services, Inc. Geographical surveying using cameras in combination with flight computers to obtain images with overlaid geographical coordinates
IL91659A (en) * 1989-09-15 1995-05-26 Israel Min Of Energy & Inf Geophysical survey system
US4964721A (en) * 1989-10-12 1990-10-23 Kaman Aerospace Corporation Imaging lidar system
US5231401A (en) * 1990-08-10 1993-07-27 Kaman Aerospace Corporation Imaging lidar system
US5276321A (en) * 1991-04-15 1994-01-04 Geophysical & Environmental Research Corp. Airborne multiband imaging spectrometer
US5257085A (en) * 1991-04-24 1993-10-26 Kaman Aerospace Corporation Spectrally dispersive imaging lidar system
US5198657A (en) * 1992-02-05 1993-03-30 General Atomics Integrated imaging and ranging lidar receiver
US5448936A (en) * 1994-08-23 1995-09-12 Hughes Aircraft Company Destruction of underwater objects
US5639964A (en) * 1994-10-24 1997-06-17 Djorup; Robert S. Thermal anemometer airstream turbulent energy detector
US5596494A (en) 1994-11-14 1997-01-21 Kuo; Shihjong Method and apparatus for acquiring digital maps
US5894323A (en) 1996-03-22 1999-04-13 Tasc, Inc, Airborne imaging system using global positioning system (GPS) and inertial measurement unit (IMU) data
US6252544B1 (en) * 1998-01-27 2001-06-26 Steven M. Hoffberg Mobile communication device
US6087984A (en) * 1998-05-04 2000-07-11 Trimble Navigation Limited GPS guidance system for use with circular cultivated agricultural fields
US6282301B1 (en) * 1999-04-08 2001-08-28 The United States Of America As Represented By The Secretary Of The Army Ares method of sub-pixel target detection
AU2001271238A1 (en) * 2000-03-16 2001-09-24 The Johns-Hopkins University Light detection and ranging (lidar) mapping system
US6553311B2 (en) * 2000-12-08 2003-04-22 Trimble Navigation Limited Navigational off- line and off-heading indication system and method
US6542831B1 (en) * 2001-04-18 2003-04-01 Desert Research Institute Vehicle particulate sensor system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5308022A (en) * 1982-04-30 1994-05-03 Cubic Corporation Method of generating a dynamic display of an aircraft from the viewpoint of a pseudo chase aircraft
US5371358A (en) * 1991-04-15 1994-12-06 Geophysical & Environmental Research Corp. Method and apparatus for radiometric calibration of airborne multiband imaging spectrometer

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3193136A1 (en) 2016-01-13 2017-07-19 Vito NV Method and system for geometric referencing of multi-spectral data
WO2017121876A1 (en) 2016-01-13 2017-07-20 Vito Nv Method and system for geometric referencing of multi-spectral data
US10565789B2 (en) 2016-01-13 2020-02-18 Vito Nv Method and system for geometric referencing of multi-spectral data
CN108572361A (en) * 2018-04-03 2018-09-25 深圳飞马机器人科技有限公司 Airborne laser radar system equipment integrates angle of setting calibration method and device
RU2695596C1 (en) * 2018-12-29 2019-07-24 федеральное государственное автономное образовательное учреждение высшего образования "Санкт-Петербургский политехнический университет Петра Великого" (ФГАОУ ВО "СПбПУ") Ice field photogrammetry method in ice basin

Also Published As

Publication number Publication date
CA2534966A1 (en) 2004-04-01
US20040054488A1 (en) 2004-03-18
WO2004027348A3 (en) 2004-06-24
AU2003278803A8 (en) 2004-04-08
AU2003278803A1 (en) 2004-04-08
US7212938B2 (en) 2007-05-01

Similar Documents

Publication Publication Date Title
US7212938B2 (en) Method of using a self-locking travel pattern to achieve calibration of remote sensors using conventionally collected data
US5445453A (en) Method for airborne surveying which includes the determination of the apparent thermal inertia of the material being surveyed
US8483960B2 (en) Self-calibrated, remote imaging and data processing system
US7127348B2 (en) Vehicle based data collection and processing system
CN110108984B (en) Spatial relationship synchronization method for multiple sensors of power line patrol laser radar system
US20090122295A1 (en) Increasing measurement rate in time of flight measurement apparatuses
US11619712B2 (en) Hybrid LiDAR-imaging device for aerial surveying
Miller et al. 3-D site mapping with the CMU autonomous helicopter
CA2796162A1 (en) Self-calibrated, remote imaging and data processing system
JP2590689B2 (en) Interferometric synthetic aperture radar system and terrain change observation method
US4482252A (en) Calibration method and apparatus for optical scanners
Szabó et al. Zooming on aerial survey
Hill et al. Ground-to-air flow visualization using Solar Calcium-K line Background-Oriented Schlieren
Elbahnasawy et al. Multi-sensor integration onboard a UAV-based mobile mapping system for agricultural management
Redman et al. Streak tube imaging lidar (STIL) for 3-D imaging of terrestrial targets
Vaidyanathan et al. Jigsaw phase III: a miniaturized airborne 3-D imaging laser radar with photon-counting sensitivity for foliage penetration
USRE49105E1 (en) Self-calibrated, remote imaging and data processing system
Pierrottet et al. Characterization of 3-D imaging lidar for hazard avoidance and autonomous landing on the Moon
Kohoutek et al. Processing of UAV based range imaging data to generate detailed elevation models of complex natural structures
Weber et al. Polarization upgrade of specMACS: calibration and characterization of the 2D RGB polarization resolving cameras
WO2021101612A2 (en) Passive wide-area three-dimensional imaging
Mandlburger et al. Evaluation of Consumer-Grade and Survey-Grade UAV-LIDAR
Kohoutek et al. Geo-referenced mapping using an airborne 3D time-of-flight camera
Cramer et al. Data capture
Amzajerdian et al. Performance of Flash Lidar with real-time image enhancement algorithm for Landing Hazard Avoidance

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
ENP Entry into the national phase

Ref document number: 2534966

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)