WO1997037336A1 - An aircraft detection system - Google Patents

An aircraft detection system

Info

Publication number
WO1997037336A1
WO1997037336A1 (application PCT/AU1997/000198, AU9700198W; also published as WO9737336A1)
Authority
WO
WIPO (PCT)
Prior art keywords
image
aircraft
object detection
image acquisition
camera
Prior art date
Application number
PCT/AU1997/000198
Other languages
French (fr)
Inventor
Glen William Auty
Michael John Best
Timothy John Davis
Ashley John Dreier
Ian Barry Macintyre
Original Assignee
Commonwealth Scientific And Industrial Research Organisation
Priority date
Filing date
Publication date
Application filed by Commonwealth Scientific And Industrial Research Organisation filed Critical Commonwealth Scientific And Industrial Research Organisation
Priority to NZ332051A priority Critical patent/NZ332051A/en
Priority to AU21438/97A priority patent/AU720315B2/en
Priority to EP97913985A priority patent/EP0890161A4/en
Publication of WO1997037336A1 publication Critical patent/WO1997037336A1/en


Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0017Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
    • G08G5/0026Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located on the ground
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64FGROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
    • B64F1/00Ground or aircraft-carrier-deck installations
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64FGROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
    • B64F1/00Ground or aircraft-carrier-deck installations
    • B64F1/002Taxiing aids
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0073Surveillance aids
    • G08G5/0082Surveillance aids for monitoring traffic from a ground station
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Definitions

  • the present invention relates to an object detection system and, in particular to an aircraft detection system.
  • the International Civil Aviation Organization (ICAO) has established regulations which require all civil aircraft to have registration markings beneath the port wing to identify an aircraft.
  • the markings denote the nationality of an aircraft and its registration code granted by the ICAO.
  • airline operators do not follow the regulations and the markings appear on an aircraft's fuselage.
  • Owners of aircraft are charged for airport use, but a satisfactory system has not been developed to automatically detect aircraft and then, if necessary, administer a charge to the owner.
  • Microwave signals for detecting an aircraft can interfere with microwave frequencies used for airport communications and, similarly, radar signals can interfere with those used for aircraft guidance systems.
  • a system which can be used to detect an aircraft using unobtrusive passive technology is desired.
  • an object detection system including :
  • passive sensing means for receiving electromagnetic radiation from a moving object and generating intensity signals representative of the received radiation, and processing means for subtracting said intensity signals to obtain a differential signature representative of the position of said moving object.
  • the present invention also provides an image acquisition system including : at least one camera for acquiring an image of at least part of a moving object, in response to a trigger signal, and analysis means for processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process.
  • the present invention also provides an object detection method including: passively sensing electromagnetic radiation received from a moving object; generating intensity signals representative of the received radiation; and subtracting said intensity signals to obtain a differential signature representative of the position of said moving object.
  • the present invention also provides an image acquisition method including: acquiring an image of at least part of a moving object, in response to a trigger signal, using at least one camera, and
  • processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process.
  • Figure 1 is a block diagram of a preferred embodiment of an aircraft detection system
  • Figure 2 is a schematic diagram of a preferred embodiment of the aircraft detection system
  • Figure 3 is a block diagram of a connection arrangement for components of the aircraft detection system
    Figure 4 is a more detailed block diagram of a proximity detector and a tracking system for the aircraft detection system
  • Figure 5 is a coordinate system used for the proximity detector
  • Figures 6(a) and 6(b) are underneath views of discs of sensors of the tracking system
  • Figure 7 is a schematic diagram of an image obtained by the tracking system
  • Figures 8 and 9 are images obtained from a first embodiment of the tracking system
  • Figure 10 is a graph of a pixel row sum profile for an image obtained by the tracking system
  • Figure 11 is a graph of a difference profile obtained by subtracting successive row sum profiles
  • Figure 12 is a diagram of a coordinate system for images obtained by the tracking system
  • Figure 13 is a diagram of a coordinate system for the aircraft used for geometric correction of the images obtained by the tracking system
  • Figure 14 is a diagram of a coordinate system used for predicting a time to generate an acquisition signal
  • Figure 15 is a graph of aircraft position in images obtained by the tracking system over successive frames
  • Figure 16 is a graph of predicted trigger frame number over successive image frames obtained by the tracking system
  • Figure 17 is a schematic diagram of a pyroelectric sensor used in a second embodiment of the tracking system.
    Figure 18 shows graphs of differential signatures obtained using the second embodiment of the tracking system.
  • Figures 19 and 20 are images obtained of an aircraft by high resolution cameras of an acquisition system of the aircraft detection system
  • Figure 21 is a schematic diagram of an optical sensor system used for exposure control of the acquisition cameras
  • Figure 22 is a flow diagram of a preferred character location process executed on image data obtained by the high resolution cameras
  • Figure 23 is a diagram of images produced during the character location process.
  • Figure 24 is a flow diagram of a character recognition process executed on a binary image of the characters extracted from an image obtained by one of the high resolution cameras.
  • An aircraft detection system 2, as shown in Figure 1, includes a proximity detector 4, a tracking sensor system 6, an image processing system 8, an image acquisition system 10 and an analysis system 12.
  • a control system 14 can be included to control the image acquisition system 10 on the basis of signals provided by the image processing system 8, and also control an illumination unit 16.
  • the proximity detector 4 and the tracking sensor system 6 include sensors 3 which may be placed on or near an aircraft runway 5 to detect the presence of an aircraft 28 using visual or thermal imaging or aural sensing techniques. Also located on or near the runway 5 is at least one high resolution camera 7 of the image acquisition system 10.
  • the sensors 3 and the acquisition camera 7 are connected by data and power lines 9 to an instrument rack 11 , as shown in Figure 2, which may be located adjacent or near the runway 5.
  • the instrument rack 11 may alternatively be powered by its own independent supply which may be charged by solar power.
  • the instrument rack 11 includes control circuitry and image processing circuitry which is able to control activation of the sensors 3 and the camera 7 and perform image processing, as required.
  • the instrument rack 11 , the data and power lines 9, the sensors 3 and the acquisition camera 7 can be considered to form a runway module which may be located at the end of each runway of an airport.
  • a runway module can be connected back to a central control system 13 using an optical fibre or other data link 15. Images provided by the sensors 3 may be processed and passed to the central system 13 for further processing, and the central system 13 would control triggering of the acquisition cameras 7. Alternatively, image processing for determining triggering of the acquisition camera 7 may be performed by each instrument rack 11.
  • the central control system 13 includes the analysis system 12. One method of configuring connection of the instrument racks 11 to the central control system 13 is illustrated in Figure 3.
  • the optical fibre link 15 may include dedicated optical fibres 17 for transmitting video signals to the central control system 13 and other optical fibres 19 dedicated to transmitting data to and receiving data from the central control system 13 using the Ethernet protocol or direct serial data communication .
  • a number of different alternatives can be used for connecting the runway modules to the central control system 13.
  • the runway modules and the control system 13 may be connected as a Wide Area Network (WAN) using Asynchronous Transfer Mode (ATM) or Synchronous Digital Hierarchy (SDH) links.
  • the runway modules and the central control system 13 may also be connected as a Local Area Network (LAN) using a LAN protocol, such as Ethernet. Physical connections may be made between the runway modules and the central control system 13, or alternatively wireless transmission techniques may be used, such as using infrared or microwave signals for communication.
  • the proximity detector 4 determines when an aircraft is within a predetermined region, and then on detecting the presence of an aircraft activates the tracking sensor system 6.
  • the proximity detector 4, as shown in Figure 4, may include one or more pyroelectric devices 21, judiciously located at an airport, and a signal processing unit 23 and trigger unit 25 connected thereto in order to generate an activation signal to the tracking sensor system 6 when the thermal emission of an approaching aircraft exceeds a predetermined threshold.
  • the proximity detector 4 may use one or more pyroelectric point sensors that detect the infrared radiation emitted from the aircraft 28.
  • a mirror system can be employed with a point sensor 70 to enhance its sensitivity to the motion of the aircraft 28.
  • the point sensor 70 may consist of two or more pyroelectric sensors configured in a geometry and with appropriate electrical connections so as to be insensitive to the background infrared radiation and slowly moving objects. With these sensors the rate of motion of the image of the aircraft 28 across the sensor 70 is important.
  • the focal length of the mirror 72 is chosen to optimise the motion of the image across the sensor 70 at the time of detection. As an example, if the aircraft at altitude H with glide slope angle θ_GS moves with velocity V and passes overhead at time t_o, as shown in Figure 5, then the position h of the image of the aircraft 28 on the sensor 70 is
  • the proximity detector 4 may include different angled point sensors to determine when an aircraft enters the monitored region and is about to land or take-off. In response to the activation signal, the tracking sensor system 6 exposes the sensor 3 to track the aircraft. Use of the proximity detector 4 allows the sensor 3 to be sealed in a housing when not in use and protected from damaging environmental conditions, such as hailstorms and blizzards or fuel.
  • the sensor 3 is only exposed to the environment for a short duration whilst an aircraft is in the vicinity of the sensor 3. If the tracking system 6 is used in conditions where the sensor 3 can be permanently exposed to the environment or the sensor 3 can resist the operating conditions, then the proximity detector 4 may not be required.
  • the activation signal generated by the proximity detector 4 can also be used to cause the instrument rack 11 and the central control system 13 to adjust the bandwidth allocated on the link 15 so as to provide an adequate data transfer rate for transmission of video signals from the runway module to the central system 13. If the bandwidth is fixed at an acceptable rate or the system 2 only uses local area network communications and only requires a reduced bandwidth, then again the proximity detector 4 may not be required.
  • the tracking sensor system 6 includes one or more tracking or detection cameras 3 which obtain images of an aircraft as it approaches or leaves a runway. From a simple image of the aircraft, aspect ratios, such as the ratio of the wingspan to the fuselage length can be obtained.
  • the tracking camera 3 used is a thermal camera which monitors thermal radiation received in the 10 to 14 μm wavelength range and is not dependent on lighting conditions for satisfactory operation. Use of the thermal cameras is also advantageous as the distribution of temperatures over the observed surfaces of an aircraft can be obtained, together with signatures of engine exhaust emissions and features in the fuselage or engines.
  • the tracking camera 3 can obtain an instantaneous two-dimensional image I_n using all of the sensors in a CCD array of the camera, or alternatively one row of the array perpendicular to the direction of motion of the aircraft can be used to obtain a linear image at each scan and the linear image is then used to build up a two-dimensional image I_n for subsequent processing.
  • a rotating disc system is employed.
  • rotating discs are used on marine vessels to remove water drops from windows.
  • a reflective or transparent disc is rotated at high speed in front of the window that is to be kept clear. Water droplets falling on the disk experience a large shear force related to the rotation velocity. The shear force is sufficient to atomise the water drop, thereby removing it from the surface of the disc.
  • a transparent disc of approximate diameter 200 mm is mounted to an electric motor and rotated at a frequency of 60 Hz.
  • a camera with a 4.8 mm focal length lens was placed below a glass window which in turn was beneath the rotating disc.
  • the results of inserting the rotating disc are illustrated in Figure 6(a), which shows the surface of a camera housing without the rotating disc, and in Figure 6(b), which shows the surface of a camera housing with the rotating disc activated and in rain conditions.
  • the image processing system 8 processes the digital images provided by the tracking sensor system 6 so as to extract in real-time information concerning the features and movement of the aircraft.
  • the images provided to the image processing system, depending on the tracking cameras employed, provide an underneath view of the aircraft, as shown in Figure 7.
  • the tips of the wings or wingspan points 18 of the aircraft are tracked by the image processor system 8 to determine when the image acquisition system 10 should be activated so as to obtain the best image of the registration markings on the port wing 20 of the aircraft.
  • the image processing system 8 generates an acquisition signal using a trigger logic circuit 39 to trigger the camera of the image acquisition system 10.
  • the image processing system 8 also determines and stores data concerning the wingspan 22 of the aircraft and other details concerning the size, shape and ICAO category (A to G) of the aircraft.
  • the image processing system 8 classifies the aircraft on the basis of the size which can be used subsequently when determining the registration markings on the port wing 20.
  • the data obtained can also be used for evaluation of the aircraft during landing and/or take-off.
  • a pyroelectric sensor 27 can be used with a signal processing wing detection unit 29 to provide a tracking system 1 which also generates the acquisition signal using the trigger logic circuit 39, as shown in Figure 4 and described later.
  • Detecting moving aircraft in the field of view of the sensor 3 or 27 is based on forming a profile or signature of the aircraft, P(y,t), that depends on a spatial coordinate y and time t.
  • a difference profile ΔP(y,t) is formed.
  • the profile or signature can be differenced in time or in space because these differences are equivalent for moving objects. If the intensity of the light or thermal radiation from the object is not changing then the time derivative of the profile obtained from this radiation is zero.
  • a time derivative of a moving field can be written as a convective derivative involving partial derivatives, which gives the equation where v is the speed of the object as observed in the profile.
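The equation referred to in the preceding point is not reproduced in this extract. A minimal reconstruction, assuming the P(y,t) notation used above and standard calculus, is the convective-derivative identity:

```latex
% Total (convective) derivative of a profile P(y,t) carried along at speed v.
% If the radiation from the object is not changing, the total derivative is zero,
% so the temporal difference of the profile is -v times its spatial difference.
\frac{dP}{dt} \;=\; \frac{\partial P}{\partial t} \;+\; v\,\frac{\partial P}{\partial y} \;=\; 0
\qquad\Longrightarrow\qquad
\frac{\partial P}{\partial t} \;=\; -\,v\,\frac{\partial P}{\partial y}
```

This is why, for a moving object, differencing the profile in time is equivalent (up to the factor -v) to differencing it in space, which the following points rely on.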
  • the profile can be differenced in space, Δ_y P(y,t). Then an extremum in the profile P(y,t) will correspond to a point where the difference profile Δ_y P(y,t) crosses zero.
  • a profile P(y,t) is formed and a difference profile Δ_t P(y,t) is obtained by differencing in time, as described below. According to equation (4) this is equivalent to a profile of a moving object that is differenced in space. Therefore the position y_p of the zero crossing point of Δ_t P(y,t) at time t is also the position of the zero crossing point of Δ_y P(y,t), which locates an extremum in P(y,t).
  • the difference between the radiation received by a sensor 27 from two points in space is obtained as a function of time, Δ_y S(t), as described below. If there are no moving features in the field of view, then the difference is constant. If any object in the field of view is moving, then the position of a point on the object is related to time using equation (5). This allows a profile or signature differenced in space to be constructed
  • Δ_y P(y(t),t) = Δ_y S(t) (6) and, as described above, allows an extremum corresponding to an aircraft wing to be located in the profile from the zero crossing point in the differential signature.
  • the image acquisition system 10 includes at least one high resolution camera
  • the illumination unit 16 is also triggered simultaneously to provide illumination of the aircraft during adverse lighting conditions, such as at night or during inclement weather.
  • the acquired images are passed to the analysis system 12 which performs Optical Character Recognition (OCR) on the images to obtain the registration code.
  • the registration code corresponds to aircraft type and therefore the aircraft classification determined by the image processing system 8 can be used to assist the recognition process, particularly when characters of the code are obscured in an acquired image.
  • the registration code extracted and any other information concerning the aircraft can be then passed to other systems via a network connection 24.
  • the tracking system 1 is activated by the proximity detector 4.
  • the proximity detector 4 is usually the first stage detection system to determine when the aircraft is in the proximity of the more precise tracking system 1.
  • the tracking system 1 includes the tracking sensor system 6 and the image processing system 8 and according to one embodiment the images from the detection cameras 3 of the sensor system 6 are used by the image processing system 8 to provide a trigger for the image acquisition system when some point in the image of the aircraft reaches a predetermined pixel position.
  • One or more detection cameras 3 are placed in appropriate locations near the airport runway such that the aircraft passes within the field of view of the cameras 3.
  • a tracking camera 3 provides a sequence of images, {I_n}.
  • the image processing system 8 subtracts a background image from each image I_n of the sequence.
  • the background image represents an average of a number of preceding images. This yields an image ΔI_n that contains only those objects that have moved during the time interval between images.
  • the image ΔI_n is thresholded at appropriate values to yield a binary image, i.e. one that contains only two levels of brightness, such that the pixels comprising the edges of the aircraft are clearly distinguishable.
  • the pixels at the extremes of the aircraft in the direction perpendicular to the motion of the aircraft will correspond to the edges 18 of the wings of the aircraft.
  • Imaging the aircraft using thermal infrared wavelengths and detecting the aircraft by its thermal radiation renders the aircraft self-luminous so that it can be imaged both during the day and night, primarily without supplementary illumination.
  • Infrared (IR) detectors are classified as either photon detectors (termed cooled sensors herein), or thermal detectors (termed uncooled sensors herein).
  • Photon detectors (photoconductors or photodiodes) produce an electrical response directly as the result of absorbing IR radiation. These detectors are very sensitive, but are subject to noise due to ambient operating temperatures. It is usually necessary to cryogenically cool these detectors (to about 80 K) to maintain high sensitivity.
  • Thermal detectors experience a temperature change when they absorb IR radiation, and an electrical response results from temperature dependence of the material property. Thermal detectors are not generally as sensitive as photon detectors, but perform well at room temperature.
  • the cooled sensing devices, formed from Mercury Cadmium Telluride, offer far greater sensitivity than uncooled devices, which may be formed from Barium Strontium Titanate. Their Noise Equivalent Temperature Difference (NETD) is also superior.
  • For an uncooled sensor, a chopper can be used to provide temporal modulation of the scene. This permits AC coupling of the output of each pixel to remove the average background. This minimises the dynamic range requirements for the processing electronics and amplifies only the temperature differences. This is an advantage for resolving differences between cloud, the sun, the aircraft and the background.
  • the advantage of differentiation between objects is that it reduces the load on subsequent image processing tasks for segmenting the aircraft from the background and other moving objects such as the clouds.
  • Both cooled and uncooled thermal infrared imaging systems 6 have been used during day, night and foggy conditions.
  • the system 6 produced consistent images of the aircraft in all these conditions, as shown in Figures 8 and 9.
  • the sun in the field of view produced no saturation artefacts or flaring in the lens.
  • the image processing system 8 uses a background subtraction method in an attempt to eliminate slowly moving or stationary objects from the image, leaving only the fast moving objects. This is achieved by maintaining a background image that is updated after a certain time interval elapses. The update is an incremental one based on the difference between the current image and the background.
  • the incremental change is such that the background image can adapt to small intensity variations in the scene but takes some time to respond to large variations.
  • the background image is subtracted from the current image, a modulus is taken and a threshold applied.
  • the result is a binary image containing only those differences from the background that exceed the threshold.
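A minimal sketch of the background maintenance and differencing just described, in Python/NumPy. The update gain and threshold values are illustrative assumptions; the patent gives no numerical values.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Incrementally adapt the background toward the current frame.

    A small alpha lets the background absorb slow intensity variations in
    the scene while responding only slowly to large, fast changes such as
    a passing aircraft.
    """
    return background + alpha * (frame.astype(np.float32) - background)

def moving_feature_mask(background, frame, threshold=25.0):
    """Subtract the background, take the modulus and apply a threshold,
    giving a binary image of the differences that exceed the threshold."""
    diff = np.abs(frame.astype(np.float32) - background)
    return (diff > threshold).astype(np.uint8)

# Usage over a frame sequence (frames: iterable of 2-D arrays):
# background = first_frame.astype(np.float32)
# for frame in frames:
#     mask = moving_feature_mask(background, frame)
#     background = update_background(background, frame)
```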
  • Equation (7) shows that the value of this difference depends on the velocity v of the feature at (x,y) and the intensity gradient.
  • B(x,y,t) = 1 if a feature is located at (x,y) at time t, and B(x,y,t) = 0 represents the background.
  • the fast moving features belong to the aircraft.
  • the two-dimensional binary image can be compressed into one dimension by summing along each pixel row of the binary image,
  • Equation (9) demonstrates the obvious fact that the time derivative of a profile gives information on the changes (such as motion) of feature A only when the changes in A do not overlap features C.
  • C(x,y,t) must cover as small an area as possible.
  • the time difference between profiles gives the motion of the aircraft.
  • the difference profile corresponding to Figure 10 is shown in Figure 11, where the slow moving clouds have been eliminated. The wing positions occur at the zero-crossing points 33 and 34. Note that the clouds have been removed, apart from small error terms.
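A sketch of the row-sum compression and the frame-to-frame difference profile illustrated in Figures 10 and 11, assuming the binary mask of the previous sketch (pixel rows are taken perpendicular to the aircraft's motion).

```python
import numpy as np

def row_sum_profile(binary_image):
    """Compress the 2-D binary image into a 1-D profile by summing each
    pixel row; the result is the profile P(y,t) for the current field."""
    return binary_image.sum(axis=1).astype(np.int32)

def difference_profile(profile_now, profile_prev):
    """Difference successive row-sum profiles in time. Slow-moving features
    such as clouds largely cancel, leaving positive and negative lobes whose
    zero crossings mark the aircraft wing positions."""
    return profile_now - profile_prev
```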
  • the method is implemented using a programmable logic circuit of the image processing system 8 which is programmed to perform the row sums on the binary image and to output these as a set of integers after each video field.
  • the difference profile is analysed to locate valid zero crossing points corresponding to the aircraft wing positions.
  • a valid zero crossing is one in which the difference profile initially rises above a threshold I_T for a minimum distance y_T and falls through zero to below -I_T for a minimum distance y_T.
  • the magnitude of the threshold I_T is chosen to be greater than the error term arising from C, which is done to discount the effect produced by slow moving features, such as clouds.
  • the peak value of the profile corresponding to the aircraft wing can be obtained by summing the difference values when they are valid, up to the zero crossing point. This method removes the contributions to the peak from the non-overlapping clouds. It can be used as a guide to the wing span of the aircraft.
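A sketch of the zero-crossing validation described in the points above. The threshold `i_t` (I_T) and minimum run length `y_t` (y_T) are assumed values; the text specifies only that I_T must exceed the error term due to slow-moving features.

```python
import numpy as np

def find_valid_zero_crossings(dp, i_t=10.0, y_t=3):
    """Locate valid zero crossings in a 1-D difference profile dp.

    A crossing at index k (dp[k-1] >= 0 > dp[k]) is accepted when at least
    y_t samples of the preceding positive lobe exceed +i_t and at least y_t
    samples of the following negative lobe fall below -i_t.
    """
    dp = np.asarray(dp, dtype=float)
    crossings = []
    for k in range(1, len(dp)):
        if not (dp[k - 1] >= 0 > dp[k]):
            continue
        j = k - 1
        while j >= 0 and dp[j] >= 0:      # walk back over the positive lobe
            j -= 1
        above = np.count_nonzero(dp[j + 1:k] > i_t)
        m = k
        while m < len(dp) and dp[m] < 0:  # walk forward over the negative lobe
            m += 1
        below = np.count_nonzero(dp[k:m] < -i_t)
        if above >= y_t and below >= y_t:
            crossings.append(k)           # candidate wing position
    return crossings
```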
  • the changes in position of the aircraft in the row-sum profile are used to determine a velocity for the aircraft that can be used for determining the image acquisition or trigger time, even if the aircraft is not in view. This situation may occur if the aircraft image moves into a region on the sensor that is saturated, or if the trigger point is not in the field of view of the camera 3.
  • geometric corrections to the aircraft position are required to account for the distortions in the image introduced by the camera lens.
  • a normalised variable Z_N = Z/Y_o can be used. If y_o is the coordinate of the centre of the images, f is the focal length of the lens and θ_c is the angle of the camera from the horizontal in the vertical plane, then where the tangent has been expanded using a standard trigonometric identity. Using (10) and (11) an expression for the normalised distance Z_N is obtained
  • a length X on it in the X direction subtends an angle in the horizontal plane of
  • the x coordinate is corrected to a value at y_1. Since X_N should be independent of position, a length x_2 - x_0 at y_2 has a geometrically corrected length of
  • 1/f is chosen so that x and y are measured in terms of pixel numbers. If y_0 is the centre of the camera image and is equal to half the total number of pixels, and if θ_FOV is the vertical field of view of the camera, then
  • This relation allows ⁇ to be calculated without knowing the lens focal length and the dimensions of the sensor pixels.
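The last point lets the effective focal length be expressed directly in pixel units. A short sketch of that relation (the illustrative sensor size and field of view are assumptions, and the full geometric-correction equations are not reproduced in this extract):

```python
import math

def focal_length_in_pixels(n_rows, theta_fov_deg):
    """Effective focal length f in pixel units.

    Half the pixel rows (y0) subtend half the vertical field of view, so
    tan(theta_FOV / 2) = y0 / f and hence f = y0 / tan(theta_FOV / 2),
    with no need to know the physical lens focal length or pixel pitch.
    """
    y0 = n_rows / 2.0
    return y0 / math.tan(math.radians(theta_fov_deg) / 2.0)

# e.g. a 576-row sensor with a 30 degree vertical field of view (illustrative):
# focal_length_in_pixels(576, 30.0)  # about 1075 pixels
```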
  • the aircraft will cross the trigger point located at y_T at a time t_T estimated by
  • the method is able to predict the time for triggering the acquisition system 10 based on observations of the position of the aircraft 28.
  • a set of coordinates is defined such that one axis points vertically upwards, a second axis points horizontally along the runway towards the approaching aircraft, and the third axis is horizontal and perpendicular to the runway.
  • the image 66 of the aircraft is located in the digitised image by pixel values (x_p, y_p), where x_p is defined to be the vertical pixel value and y_p the horizontal value.
  • the lens on the camera inverts the image so that a light ray from the aircraft strikes the sensor at position (-x_p, -y_p, 0), where the sensor is located at the coordinate origin.
  • Figure 14 shows a ray 68 from an object, such as a point on the aircraft, passing through a lens of focal length f, and striking the imaging sensor at a point (-x_p, -y_p), where x_p and y_p are the pixel values.
  • the equation locating a point on the ray is given by
  • z is the horizontal distance along the ray
  • subscript c refers to the camera coordinates.
  • the camera axis c is collinear with the lens optical axis. It will be assumed that z/f ≫ 1, which is usually the case.
  • z(t) is the horizontal position of the aircraft at time t
  • θ_GS is the glide-slope angle
  • the aim is to determine the trigger time t_o from a series of values of the image position at times t determined from the image of the aircraft.
  • the trigger time t_o can be expressed in terms of the parameters a, b and c.
  • equation (34) is a prediction of the relationship between the measured values x p and t , based on a simple model of the optical system of the detection camera 3 and the trajectory of the aircraft 28.
  • the parameters a, b and c are to be chosen so as to minimise the error of the model fit to the data, i.e. make equation (34) be as close to zero as possible.
  • Let x_n be the location of the aircraft in the image, i.e. the pixel value, obtained at time t_n.
  • the chi-square statistic is for N pairs of data points.
  • the optimum values of the parameters are those that minimise the chi-square statistic, i.e. those that satisfy equation (34).
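Equation (34) itself is not reproduced in this extract. As an illustrative sketch only, assume the image position of an aircraft feature approaching at constant speed varies hyperbolically with time, x_p(t) ≈ a + b/(c − t), with c playing the role of the time at which the feature would pass overhead; the parameters are then chosen to minimise the chi-square statistic over the observed (t_n, x_n) pairs, and the trigger time follows by inverting the fitted model at the trigger pixel.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_trajectory(t_obs, x_obs):
    """Fit the assumed model x_p(t) = a + b / (c - t) to observed image
    positions by minimising the sum of squared residuals (the chi-square
    statistic with unit measurement errors). Returns (a, b, c)."""
    t_obs = np.asarray(t_obs, dtype=float)
    x_obs = np.asarray(x_obs, dtype=float)

    def residuals(p):
        a, b, c = p
        return a + b / (c - t_obs) - x_obs

    p0 = (x_obs[0], 1.0, t_obs[-1] + 1.0)   # crude initial guess, c beyond the data
    return least_squares(residuals, p0).x

def predict_trigger_time(params, x_trigger):
    """Invert the fitted model to estimate when the aircraft image will
    reach the trigger pixel x_trigger."""
    a, b, c = params
    return c - b / (x_trigger - a)
```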
  • a graph of aircraft image position as a function of image frame number is shown in Figure 15.
  • the predicted point 70 is shown in Figure 16 as a function of frame number.
  • the aircraft can be out of the view of the camera 3 for up to 1.4 seconds and the system 2 can still trigger the acquisition camera 7 to within 40 milliseconds of the correct time. For an aircraft travelling at 62.5 m/s, the system 2 captures the aircraft to within 2.5 metres of the required position.
  • the tracking system 6, 8 may also use an Area-Parameter Accelerator (APA) digital processing unit, as discussed in International Publication No. WO 93/19441 , to extract additional information, such as the aspect ratio of the wing span to the fuselage length of the aircraft and the location of the centre of the aircraft.
  • the tracking system 1 can also be implemented using one or more pyroelectric sensors 27 with a signal processing wing detection unit 29.
  • Each sensor 27 has two adjacent pyroelectric sensing elements 40 and 42, as shown in Figure 17, which are electrically connected so as to cancel identical signals generated by each element.
  • a plate 44 with a slit 46 is placed above the sensing elements 40 and 42 so as to provide the elements 40 and 42 with different fields of view 48 and 50.
  • the fields of view 48 and 50 are significantly narrower than the field of view of a detection camera discussed previously. If aircraft move above the runway in the direction indicated by the arrow 48, the first element 40 has a front field of view 48 and the second element 42 has a rear field of view 50.
  • the first element 40 detects the thermal radiation of the aircraft before the second element 42; the aircraft 28 will then be momentarily in both fields of view 48 and 50, and then only detectable by the second element 42.
  • An example of the difference signals generated by two sensors 27 is illustrated in Figure 18, where graph 52 is for a sensor 27 which has a field of view directed at 90° to the horizontal and a sensor 27 which is directed at 75° to the horizontal.
  • Graph 54 is an expanded view of the centre of graph 52. The zero crossing points of peaks 56 in the graphs 52 and 54 correspond to the point at which the aircraft 28 passes the sensor 27.
  • a time can be determined for generating an acquisition signal to trigger the high resolution acquisition cameras 7.
  • the speed can be determined from movement of the zero crossing points over time, in a similar manner to that described previously.
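A sketch of how the two zero-crossing times might be combined, assuming the 90° and 75° sensor geometry of Figure 18, a known aircraft altitude and a known horizontal offset to the acquisition camera's aim point; none of these numerical inputs are given at this point in the text.

```python
import math

def speed_from_crossings(t_75, t_90, altitude_m):
    """Estimate ground speed from the zero-crossing times of two pyroelectric
    sensors aimed at 75 and 90 degrees to the horizontal.

    At altitude h the two lines of sight cross the flight path a horizontal
    distance h * tan(15 deg) apart, so that distance divided by the time
    between the zero crossings gives the speed.
    """
    baseline = altitude_m * math.tan(math.radians(15.0))
    return baseline / (t_90 - t_75)

def acquisition_trigger_time(t_90, speed_mps, camera_offset_m):
    """Time at which to generate the acquisition signal, assuming the
    acquisition camera images a point camera_offset_m along the flight
    path beyond the 90-degree sensor's line of sight."""
    return t_90 + camera_offset_m / speed_mps
```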
  • the image acquisition system 10 acquires an image of the aircraft with sufficient resolution for the aircraft registration characters to be obtained using optical character recognition.
  • the system 10 includes two high resolution cameras 7 each comprising a lens and a CCD detector array. Respective images obtained by the two cameras 7 are shown in Figures 19 and 20.
  • the minimum pixel dimension and the focal length of the lens determine the spatial resolution in the image. If the dimension of a pixel is L p , the focal length f and the altitude of the aircraft is h, then the dimension of a feature W min on the aircraft that is mapped onto a pixel is
  • the character recognition process used requires each character stroke to be mapped onto at least four pixels with contrast levels having at least 10% difference from the background.
  • the width of a character stroke in the aircraft registration is regulated by the ICAO.
  • the field of view of the system 10 at altitude h is determined by the spatial resolution W min chosen at altitude h max and the number of pixels N pl along the length of the CCD,
  • the image moves a distance less than the size of a pixel during the exposure. If the aircraft velocity is v, then the time to move a distance equal to the required spatial resolution W min is
  • W_min = 0.02 m
  • the exposure time to avoid excessive blurring is t ≤ 240 μs.
  • the focal length of the lens in the system 10 can be chosen to obtain the required spatial resolution at the maximum altitude. This fixes the field of view.
  • the field of view may be varied by altering the focal length according to the altitude of the aircraft.
  • the range of focal lengths required can be calculated from equation (44).
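A sketch of the resolution, exposure and field-of-view relations just described, assuming the simple pinhole mapping W_min = L_p·h/f (the numbered equations themselves are not reproduced in this extract).

```python
def spatial_resolution(pixel_size_m, altitude_m, focal_length_m):
    """Ground dimension mapped onto one pixel: W_min = L_p * h / f."""
    return pixel_size_m * altitude_m / focal_length_m

def max_exposure_time(resolution_m, speed_mps):
    """Longest exposure for which the image moves by less than the chosen
    spatial resolution: t = W_min / v."""
    return resolution_m / speed_mps

def field_of_view_at_max_altitude(resolution_m, n_pixels):
    """Linear field of view at the altitude h_max for which W_min was
    chosen; at lower altitude h it scales in proportion to h / h_max."""
    return n_pixels * resolution_m

# Illustrative figures consistent with the text: W_min = 0.02 m and an assumed
# speed of about 83 m/s give max_exposure_time(0.02, 83.0) ~ 2.4e-4 s (240 us).
```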
  • the aircraft registration, during daylight conditions, is illuminated by sunlight or scattered light reflected from the ground.
  • the aircraft scatters the light that is incident, some of which is captured by the lens of the imaging system.
  • the considerable amount of light reflected from aluminium fuselages of an aircraft can affect the image obtained, and is taken into account.
  • the light power falling onto a pixel of the CCD is given by
  • L_λ is the solar spectral radiance
  • Δλ is the wavelength bandpass of the entire configuration
  • Ω_sun is the solid angle subtended by the sun
  • R_gnd is the reflectivity of the ground
  • P_A is the reflectivity of the aircraft
  • A_p is the area of a pixel in the CCD detector
  • f# is the lens f-number.
  • the solar spectral radiance L_λ varies markedly with wavelength λ.
  • the power falling on a pixel will therefore vary over a large range. This can be limited by restricting the wavelength range Δλ passing to the sensor and optimally choosing the centre wavelength of this range.
  • the optimum range and centre wavelength are chosen to match the characteristics of the imaging sensor.
  • the optimum wavelength range and centre wavelength are chosen in the near infrared waveband, 0.69 to 2.0 microns. This limits the variation in light power on a pixel in the sensor to within the useable limits of the sensor.
  • a KODAK™ KAF-1600L imaging sensor (a monolithic silicon sensor with lateral overflow anti-blooming) was chosen that incorporated a mechanism to accommodate a thousandfold saturation of each pixel, giving a total acceptable range of light powers in each pixel of 10^5. This enables the sensor to produce a useful image of an aircraft when very bright light sources, for example the sun, are in its field of view.
  • the correct choice of sensor and the correct choice of wavelength range and centre wavelength enables an image to be obtained within a time interval that arrests the motion of the aircraft and that provides an image with sufficient contrast on the aircraft registration to enable digital image processing and recognition of the registration characters.
  • the optimum wavelength range was therefore set to between 0.69 μm and 2.0 μm.
  • the CCD sensor and system electronics are chosen to accommodate this range of light powers.
  • the aircraft registration requires additional illumination from the illumination unit 16.
  • the light source of the unit 16 needs to be sufficient to illuminate the aircraft at its maximum altitude. If the source is designed to emit light into a solid angle that just covers the field of view of the imaging system then the light power incident onto a pixel of the imaging system 10 due to light emitted from the source and reflected from the aircraft is given by where A A is the area on the aircraft imaged onto a pixel of area A p , P s is the light power of the source, P A is the aircraft reflectivity, N ptot is the total number of pixels in the CCD sensor and f# is the f-number of the lens.
  • the aperture of the lens on the acquisition camera 7 is automatically adjusted to control the amount of light on the imaging sensor in order to optimise the image quality for digital processing.
  • the intensity level of the registration characters relative to the underside of the aircraft needs to be maintained to provide good contrast between the two for OCR.
  • the power P s of the flash 16 is automatically adjusted in accordance with the aperture setting f# of the acquisition camera 7 to optimise the image quality and maintain the relative contrast between the registration characters and the underside of the aircraft, in accordance with the relationship expressed in equation (50).
  • the aperture of the lens may be very small and the power of the flash may be increased to provide additional illumination of the underside, whereas during night conditions, the aperture may be fully opened and the power of the flash reduced considerably as additional illumination is not required.
  • the electrical gain of the electronic circuits connected to an acquisition camera 7 is adjusted automatically to optimise the image quality.
  • one or more point optical sensors 60, 62 are used to measure the ambient lighting conditions.
  • the electrical output signals of the sensors 60, 62 are processed by the acquisition system 10 to produce the information required to control the camera aperture and/or gain.
  • Two point sensors 60, 62 sensitive to the same optical spectrum as the acquisition cameras 7 can be used.
  • One sensor 60 receives light from the sky that passes through a diffusing plate 64 onto the sensor 60.
  • the diffusing plate 64 collects light from many different directions and allows it to reach the sensor 60.
  • the second sensor 62 is directed towards the ground to measure the reflected light from the ground.
  • the analysis system 12 processes the aircraft images obtained by a high resolution camera 7 according to an image processing procedure 100, as shown in Figure 22, which is divided into two parts 102 and 104.
  • the first part 102 operates on a sub-sampled image 105, as shown in Figure 23, to locate regions that contain features that may be registration characters, whereas the second part 104 executes a similar procedure but is done using the full resolution of the original image and is executed only on the regions identified by the first part 102.
  • the sub-sampled image 105 is the original image with only one pixel in four retained in both row and column directions, resulting in a one in sixteen sampling ratio.
  • the first part 102 receives the sub-sampled image at step 106 and filters the image at step 108 to remove features which are larger than the expected size of the registration characters.
  • Step 108 executes a morphological operation of linear closings applied to a set of lines angled between 0 and 180°.
  • the operation passes a kernel or window across the image 105 to extract lines which exceed a predetermined length and are at a predetermined angle .
  • the kernel or window is passed over the image a number of times and each time the predetermined angle is varied .
  • the lines extracted from all of the passes are then subtracted from the image 105 to provide a filtered difference image 109.
  • the filtered difference image 109 is then thresholded or binarised at step 110 to convert it from a grey scale image to a binary scale image 111. This is done by setting to 1 all image values that are greater than a threshold and setting to 0 all other image values.
  • the threshold at a given point in the image is determined from a specified multiple of standard deviations from the mean calculated from the pixel values within a window centred on the given point.
  • the binarised image 111 is then filtered at step 112 to remove all features that have pixel densities in a bounding box that are smaller or larger than the expected pixel density for a bounded registration character.
  • the image 111 is then processed at step 114 to remove all features which are not clustered together like registration characters.
  • Step 114 achieves this by grouping together features that have similar sizes and that are close to one another. Groups of features that are smaller than a specified size are removed from the image to obtain a cleaned image 113. The cleaned image 113 is then used at step 116 to locate regions of interest. Regions of interest are obtained in step 116 from the location and extents of the groups remaining after step 114. Step 116 produces regions of interest which include the registration characters and areas of the regions are bounded above and below, as for the region 115 shown in Figure 23.
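A condensed sketch of steps 108 and 110 using SciPy. The structuring-element length, the set of angles, the window size and the threshold multiple are all assumptions, and the angled-line closing followed by subtraction is implemented here as a black top-hat, which is one common reading of the description.

```python
import numpy as np
from scipy import ndimage

def line_footprint(length, angle_deg):
    """Boolean footprint approximating a straight line of the given length
    and orientation, used as the morphological structuring element."""
    a = np.deg2rad(angle_deg)
    fp = np.zeros((length, length), dtype=bool)
    c = (length - 1) / 2.0
    for t in np.linspace(-c, c, 2 * length):
        fp[int(round(c + t * np.sin(a))), int(round(c + t * np.cos(a)))] = True
    return fp

def remove_large_features(image, length=31, angles=range(0, 180, 15)):
    """Step 108: grey-scale closings over lines angled between 0 and 180
    degrees preserve features longer than the characters; subtracting the
    image from the minimum of the closings leaves the small, character-sized
    dark detail as a bright difference image."""
    img = image.astype(np.float32)
    closed = np.min([ndimage.grey_closing(img, footprint=line_footprint(length, a))
                     for a in angles], axis=0)
    return closed - img

def local_threshold(image, window=25, k=2.0):
    """Step 110: binarise by comparing each pixel with the mean plus k
    standard deviations computed in a window centred on that pixel."""
    img = image.astype(np.float32)
    mean = ndimage.uniform_filter(img, window)
    mean_sq = ndimage.uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return (img > mean + k * std).astype(np.uint8)
```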
  • the regions of interest obtained by the first part 102 of the procedure 100 are further processed individually using the full resolution of the original image and the second part 104 of the procedure.
  • the second part 104 takes a region of interest 115 from the original image at step 120 and for that region filters out features larger than the expected character sizes at step 122, using the same morphological operation of linear closings applied to a set of lines angled between 0 and 180°, followed by image subtraction, as described above, to obtain image 117.
  • the filtered image 117 is then binarised at step 124 by selecting a filter threshold that is representative of the pixel values at the edges of features. To distinguish the registration characters from the aircraft wing or body the filter threshold needs to be set correctly.
  • a mask image of significant edges in image 117 is created by calculating edge-strengths at each point in image 117 and setting to 1 all points that have edge-strengths greater than a mask threshold and setting to 0 all other points.
  • An edge-strength is determined by taking, at each point, pixel gradients in two directions, Δx and Δy, and calculating √(Δx² + Δy²) to give the edge-strength at that point. The mask threshold at a given point is determined in a similar manner.
  • the filter threshold for each point in image 117 is then determined from a specified multiple of standard deviations from the mean calculated from the pixel values at all points within a window centred on the given point that correspond to non-zero values in the mask image.
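A short sketch of the edge-strength computation and mask. The gradient operator and the way the mask threshold is chosen are assumptions beyond what the (truncated) text states.

```python
import numpy as np

def edge_strength(image):
    """Edge strength at each point: the magnitude of the pixel gradients
    taken in the two image directions, sqrt(dx**2 + dy**2)."""
    gy, gx = np.gradient(image.astype(np.float32))
    return np.hypot(gx, gy)

def edge_mask(image, mask_threshold):
    """Mask image of significant edges: 1 where the edge strength exceeds
    the mask threshold, 0 elsewhere; the non-zero points then select which
    pixels contribute to the local mean and standard deviation used for
    the filter threshold."""
    return (edge_strength(image) > mask_threshold).astype(np.uint8)
```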
  • the binarised image 118 is then filtered at step 126 to remove features that are smaller than the expected character sizes.
  • At step 128, features that have similar sizes, that are near to one another and that are associated with similar image values in image 117 are clustered together.
  • At step 130, the clustered features whose sizes, orientations and relative positions deviate too much from the averages for their clusters are filtered out, to leave features that form linear chains.
  • At step 132, if the number of features remaining in the image produced by step 130 is greater than 3, then a final image is created by rotating image 118 to align the linear chain of features with the image rows and by masking out features not belonging to the linear chain.
  • the final image is passed to a character recognition process 200 to determine whether the features are registration characters and, if so, which characters.
  • the final image undergoes a standard optical character recognition process 200, as shown in Figure 24, to generate character string data which represents the ICAO characters on the port wing.
  • the process 200 includes receiving the final image at step 202, which is produced by step 132 of the image processing procedure 100, and separating the characters of the image at step 204.
  • the sizes of the characters are normalised at step 206 and at step 208 correction for the alignment of the characters is made and further normalisation occurs.
  • Character features are extracted at step 210 and an attempt is made to classify the features of the characters extracted at step 212.
  • Character rules are applied to the classified features at step 214 so as to produce a binary string representative of the registration characters at step 216.
  • Although the system 2 has been described above as being particularly suitable for detecting an aircraft, it should be noted that many features of the system can be used for detecting and identifying other moving objects.
  • the embodiments of the tracking system 1 may be used for tracking land vehicles.
  • the system 2 may be employed to acquire images of and identify automobiles at tollway points on a roadway.

Abstract

An object detection system including passive sensors (3) for receiving electromagnetic radiation from a moving object (28) and generating intensity signals representative of the received radiation, and a processing system for subtracting the intensity signals to obtain a differential signature representative of the position of the moving object. An image acquisition system including at least one camera (7) for acquiring an image of at least part of a moving object, in response to a trigger signal, and an analysis system for processing the image to locate a region in the image including markings identifying the object and processing the region to extract the markings for optical recognition.

Description

AN AIRCRAFT DETECTION SYSTEM
The present invention relates to an object detection system and, in particular to an aircraft detection system. The International Civil Aviation Organisation (ICAO) has established regulations which require all civil aircraft to have registration markings beneath the port wing to identify an aircraft. The markings denote the nationality of an aircraft and its registration code granted by the ICAO. In some countries, airline operators do not follow the regulations and the markings appear on an aircraft's fuselage. Owners of aircraft are charged for airport use, but a satisfactory system has not been developed to automatically detect aircraft and then, if necessary, administer a charge to the owner. Microwave signals for detecting an aircraft can interfere with microwave frequencies used for airport communications and, similarly, radar signals can interfere with those used for aircraft guidance systems. A system which can be used to detect an aircraft using unobtrusive passive technology is desired.
In accordance with the present invention there is provided an object detection system including :
passive sensing means for receiving electromagnetic radiation from a moving object and generating intensity signals representative of the received radiation, and processing means for subtracting said intensity signals to obtain a differential signature representative of the position of said moving object.
The present invention also provides an image acquisition system including : at least one camera for acquiring an image of at least part of a moving object, in response to a trigger signal, and analysis means for processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process. The present invention also provides an object detection method including: passively sensing electromagnetic radiation received from a moving object; generating intensity signals representative of the received radiation; and subtracting said intensity signals to obtain a differential signature representative of the position of said moving object.
The present invention also provides an image acquisition method including: acquiring an image of at least part of a moving object, in response to a trigger signal, using at least one camera, and
processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process.
Preferred embodiments of the present invention are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:
Figure 1 is a block diagram of a preferred embodiment of an aircraft detection system;
Figure 2 is a schematic diagram of a preferred embodiment of the aircraft detection system;
Figure 3 is a block diagram of a connection arrangement for components of the aircraft detection system;
Figure 4 is a more detailed block diagram of a proximity detector and a tracking system for the aircraft detection system;
Figure 5 is a coordinate system used for the proximity detector;
Figures 6(a) and 6(b) are underneath views of discs of sensors of the tracking system;
Figure 7 is a schematic diagram of an image obtained by the tracking system; Figures 8 and 9 are images obtained from a first embodiment of the tracking system;
Figure 10 is a graph of a pixel row sum profile for an image obtained by the tracking system;
Figure 11 is a graph of a difference profile obtained by subtracting successive row sum profiles;
Figure 12 is a diagram of a coordinate system for images obtained by the tracking system;
Figure 13 is a diagram of a coordinate system for the aircraft used for geometric correction of the images obtained by the tracking system;
Figure 14 is a diagram of a coordinate system used for predicting a time to generate an acquisition signal;
Figure 15 is a graph of aircraft position in images obtained by the tracking system over successive frames;
Figure 16 is a graph of predicted trigger frame number over successive image frames obtained by the tracking system;
Figure 17 is a schematic diagram of a pyroelectric sensor used in a second embodiment of the tracking system;
Figure 18 shows graphs of differential signatures obtained using the second embodiment of the tracking system;
Figures 19 and 20 are images obtained of an aircraft by high resolution cameras of an acquisition system of the aircraft detection system;
Figure 21 is a schematic diagram of an optical sensor system used for exposure control of the acquisition cameras;
Figure 22 is a flow diagram of a preferred character location process executed on image data obtained by the high resolution cameras;
Figure 23 is a diagram of images produced during the character location process; and
Figure 24 is a flow diagram of a character recognition process executed on a binary image of the characters extracted from an image obtained by one of the high resolution cameras. An aircraft detection system 2, as shown in Figure 1 , includes a proximity detector 4, a tracking sensor system 6, an image processing system 8, an image acquisition system 10 and an analysis system 12. A control system 14 can be included to control the image acquisition system 10 on the basis of signals provided by the image processing system 8, and also control an illumination unit 16.
The proximity detector 4 and the tracking sensor system 6 include sensors 3 which may be placed on or near an aircraft runway 5 to detect the presence of an aircraft 28 using visual or thermal imaging or aural sensing techniques. Also located on or near the runway 5 is at least one high resolution camera 7 of the image acquisition system 10. The sensors 3 and the acquisition camera 7 are connected by data and power lines 9 to an instrument rack 11, as shown in Figure 2, which may be located adjacent or near the runway 5. The instrument rack 11 may alternatively be powered by its own independent supply which may be charged by solar power. The instrument rack 11 includes control circuitry and image processing circuitry which is able to control activation of the sensors 3 and the camera 7 and perform image processing, as required. The instrument rack 11, the data and power lines 9, the sensors 3 and the acquisition camera 7 can be considered to form a runway module which may be located at the end of each runway of an airport. A runway module can be connected back to a central control system 13 using an optical fibre or other data link 15. Images provided by the sensors 3 may be processed and passed to the central system 13 for further processing, and the central system 13 would control triggering of the acquisition cameras 7. Alternatively, image processing for determining triggering of the acquisition camera 7 may be performed by each instrument rack 11. The central control system 13 includes the analysis system 12. One method of configuring connection of the instrument racks 11 to the central control system 13 is illustrated in Figure 3. The optical fibre link 15 may include dedicated optical fibres 17 for transmitting video signals to the central control system 13 and other optical fibres 19 dedicated to transmitting data to and receiving data from the central control system 13 using the Ethernet protocol or direct serial data communication. A number of different alternatives can be used for connecting the runway modules to the central control system 13. For example, the runway modules and the control system 13 may be connected as a Wide Area Network (WAN) using Asynchronous Transfer Mode (ATM) or Synchronous Digital Hierarchy (SDH) links. The runway modules and the central control system 13 may also be connected as a Local Area Network (LAN) using a LAN protocol, such as Ethernet. Physical connections may be made between the runway modules and the central control system 13, or alternatively wireless transmission techniques may be used, such as using infrared or microwave signals for communication. The proximity detector 4 determines when an aircraft is within a predetermined region, and then on detecting the presence of an aircraft activates the tracking sensor system 6. The proximity detector 4, as shown in Figure 4, may include one or more pyroelectric devices 21, judiciously located at an airport, and a signal processing unit 23 and trigger unit 25 connected thereto in order to generate an activation signal to the tracking sensor system 6 when the thermal emission of an approaching aircraft exceeds a predetermined threshold. The proximity detector 4 may use one or more pyroelectric point sensors that detect the infrared radiation emitted from the aircraft 28. A mirror system can be employed with a point sensor 70 to enhance its sensitivity to the motion of the aircraft 28.
The point sensor 70 may consist of two or more pyroelectric sensors configured in a geometry and with appropriate electrical connections so as to be insensitive to the background infrared radiation and slowly moving objects. With these sensors the rate of motion of the image of the aircraft 28 across the sensor 70 is important. The focal length of the mirror 72 is chosen to optimise the motion of the image across the sensor 70 at the time of detection. As an example, if the aircraft at altitude H with glide slope angle θGS moves with velocity V and passes overhead at time t0, as shown in Figure 5, then the position h of the image of the aircraft 28 on the sensor 70 is
[Equation (1), giving the image position h in terms of f, V, H, θGS and (t0 - t), is not reproduced.]
where f is the focal length of a cylindrical mirror. If the rate of motion of the image dh/dt is required to have a known value, then the focal length of the mirror 72 should be chosen to satisfy
[Equation (2), giving the required focal length in terms of dh/dt, V, H, θGS and (t0 - t), is not reproduced.]
where t0 - t is the time difference between the time t0 at which the aircraft is overhead and the time t at which it is to be detected. Alternatively, the proximity detector 4 may include differently angled point sensors to determine when an aircraft enters the monitored region and is about to land or take off. In response to the activation signal, the tracking sensor system 6 exposes the sensor 3 to track the aircraft. Use of the proximity detector 4 allows the sensor 3 to be sealed in a housing when not in use and protected from damaging environmental conditions, such as hailstorms and blizzards, or fuel. The sensor 3 is only exposed to the environment for a short duration whilst an aircraft is in the vicinity of the sensor 3. If the tracking system 6 is used in conditions where the sensor 3 can be permanently exposed to the environment or the sensor 3 can resist the operating conditions, then the proximity detector 4 may not be required. The activation signal generated by the proximity detector 4 can also be used to cause the instrument rack 11 and the central control system 13 to adjust the bandwidth allocated on the link 15 so as to provide an adequate data transfer rate for transmission of video signals from the runway module to the central system 13. If the bandwidth is fixed at an acceptable rate, or the system 2 only uses local area network communications and only requires a reduced bandwidth, then again the proximity detector 4 may not be required.
The tracking sensor system 6 includes one or more tracking or detection cameras 3 which obtain images of an aircraft as it approaches or leaves a runway. From a simple image of the aircraft, aspect ratios, such as the ratio of the wingspan to the fuselage length, can be obtained. The tracking camera 3 used is a thermal camera which monitors thermal radiation received in the 10 to 14 μm wavelength range and is not dependent on lighting conditions for satisfactory operation. Use of the thermal cameras is also advantageous as the distribution of temperatures over the observed surfaces of an aircraft can be obtained, together with signatures of engine exhaust emissions and features in the fuselage or engines. The tracking camera 3 can obtain an instantaneous two-dimensional image In using all of the sensors in a CCD array of the camera, or alternatively one row of the array perpendicular to the direction of motion of the aircraft can be used to obtain a linear image at each scan, and the linear image is then used to build up a two-dimensional image In for subsequent processing.
To allow operation of the tracking and acquisition cameras 3 and 7 in rain, a rotating disc system is employed. Rotating discs are used for removing water drops from windows on marine vessels. A reflective or transparent disc is rotated at high speed in front of the window that is to be kept clear. Water droplets falling on the disc experience a large shear force related to the rotation velocity. The shear force is sufficient to atomise the water drop, thereby removing it from the surface of the disc. A transparent disc of approximately 200 mm diameter is mounted to an electric motor and rotated at a frequency of 60 Hz. A camera with a 4.8 mm focal length lens was placed below a glass window which in turn was beneath the rotating disc. The results of inserting the rotating disc are illustrated in Figure 6(a), which shows the surface of a camera housing without the rotating disc, and in Figure 6(b), which shows the surface of a camera housing with the rotating disc activated and in rain conditions.
The image processing system 8 processes the digital images provided by the tracking sensor system 6 so as to extract in real-time information concerning the features and movement of the aircraft. The images provided to the image processing system, depending on the tracking cameras employed, provide an underneath view of the aircraft, as shown in Figure 7. The tips of the wings or wingspan points 18 of the aircraft are tracked by the image processing system 8 to determine when the image acquisition system 10 should be activated so as to obtain the best image of the registration markings on the port wing 20 of the aircraft. The image processing system 8 generates an acquisition signal using a trigger logic circuit 39 to trigger the camera of the image acquisition system 10. The image processing system 8 also determines and stores data concerning the wingspan 22 of the aircraft and other details concerning the size, shape and ICAO category (A to G) of the aircraft. The image processing system 8 classifies the aircraft on the basis of its size, which can be used subsequently when determining the registration markings on the port wing 20. The data obtained can also be used for evaluation of the aircraft during landing and/or take-off.
Alternatively, a pyroelectric sensor 27 can be used with a signal processing wing detection unit 29 to provide a tracking system 1 which also generates the acquisition signal using the trigger logic circuit 39, as shown in Figure 4 and described later.
Detecting moving aircraft in the field of view of the sensor 3 or 27 is based on forming a profile or signature of the aircraft, P(y,t), that depends on a spatial coordinate y and time t. To eliminate features in the field of view that are secondary or slowly moving, a difference profile ΔP(y,t) is formed. The profile or signature can be differenced in time or in space because these differences are equivalent for moving objects. If the intensity of the light or thermal radiation from the object is not changing then the time derivative of the profile obtained from this radiation is zero. A time derivative of a moving field can be written as a convective derivative involving partial derivatives, which gives the equation
dP(y,t)/dt = ∂P(y,t)/∂t + v ∂P(y,t)/∂y = 0 (3)
where v is the speed of the object as observed in the profile. Rearranging equation (3) gives
∂P(y,t)/∂t = -v ∂P(y,t)/∂y (4)
which shows that the difference in the profile in time is equivalent to the difference in the profile in space. This only holds for moving objects, when v ≠ 0. Equation (4) also follows from the simple fact that if the profile has a given value P(y0,t0) at the coordinate (y0,t0), then it will have this same value along the line y = y0 + v(t - t0) (5)
To detect and locate a moving feature that forms an extremum in the profile, such as an aircraft wing, the profile can be differenced in space, ΔyP(y,t). Then an extremum in the profile P(y,t) will correspond to a point where the difference profile ΔyP(y,t) crosses zero.
In one method for detecting a feature on the aircraft, a profile P(y,t) is formed and a difference profile ΔtP(y,t) is obtained by differencing in time, as described below. According to equation (4) this is equivalent to a profile of a moving object that is differenced in space. Therefore the position yp of the zero crossing point of ΔtP(y,t) at time t is also the position of the zero crossing point of ΔyP(y,t), which locates an extremum in P(y,t).
In another method for detecting a feature on the aircraft, the difference between the radiation received by a sensor 27 from two points in space is obtained as a function of time, ΔyS(t), as described below. If there are no moving features in the field of view, then the difference is constant. If any object in the field of view is moving, then the position of a point on the object is related to time using equation (5). This allows a profile or signature differenced in space to be constructed
ΔyP(y(t),t) = ΔyS(t) (6) and, as described above, allows an extremum corresponding to an aircraft wing to be located in the profile from the zero crossing point in the differential signature.
The image acquisition system 10 includes at least one high resolution camera
7 to obtain images of the aircraft when triggered. The images are of sufficient resolution to enable automatic character recognition of the registration code on the port wing 20 or elsewhere. The illumination unit 16 is also triggered simultaneously to provide illumination of the aircraft during adverse lighting conditions, such as at night or during inclement weather.
The acquired images are passed to the analysis system 12 which performs Optical Character Recognition (OCR) on the images to obtain the registration code. The registration code corresponds to aircraft type and therefore the aircraft classification determined by the image processing system 8 can be used to assist the recognition process, particularly when characters of the code are obscured in an acquired image. The registration code extracted and any other information concerning the aircraft can then be passed to other systems via a network connection 24.
Once signals received from the pyroelectric sensors 21 indicate the aircraft 28 is within the field of view of the sensors 3 of the tracking sensor system 6, the tracking system 1 is activated by the proximity detector 4. The proximity detector 4 is usually the first stage detection system to determine when the aircraft is in the proximity of the more precise tracking system 1. The tracking system 1 includes the tracking sensor system 6 and the image processing system 8, and according to one embodiment the images from the detection cameras 3 of the sensor system 6 are used by the image processing system 8 to provide a trigger for the image acquisition system when some point in the image of the aircraft reaches a predetermined pixel position. One or more detection cameras 3 are placed in appropriate locations near the airport runway such that the aircraft passes within the field of view of the cameras 3. A tracking camera 3 provides a sequence of images, {In}. The image processing system 8 subtracts a background image from each image In of the sequence. The background image represents an average of a number of preceding images. This yields an image ΔIn that contains only those objects that have moved during the time interval between images. The image ΔIn is thresholded at appropriate values to yield a binary image, i.e. one that contains only two levels of brightness, such that the pixels comprising the edges of the aircraft are clearly distinguishable. The pixels at the extremes of the aircraft in the direction perpendicular to the motion of the aircraft will correspond to the edges 18 of the wings of the aircraft. After further processing, described below, when it is determined the pixels comprising the port edge pass a certain position in the image corresponding to the acquisition point, the acquisition system 10 is triggered, thereby obtaining an image of the registration code beneath the wing 20 of the aircraft.
Imaging the aircraft using thermal infrared wavelengths and detecting the aircraft by its thermal radiation renders the aircraft self-luminous so that it can be imaged both during the day and night, primarily without supplementary illumination. Infrared (IR) detectors are classified as either photon detectors (termed cooled sensors herein), or thermal detectors (termed uncooled sensors herein). Photon detectors (photoconductors or photodiodes) produce an electrical response directly as the result of absorbing IR radiation. These detectors are very sensitive, but are subject to noise due to ambient operating temperatures. It is usually necessary to cryogenically cool (to about 80 K) these detectors to maintain high sensitivity. Thermal detectors experience a temperature change when they absorb IR radiation, and an electrical response results from the temperature dependence of a material property. Thermal detectors are not generally as sensitive as photon detectors, but perform well at room temperature.
Typically, the cooled sensing devices, which may be formed from Mercury Cadmium Telluride, offer far greater sensitivity than uncooled devices, which may be formed from Barium Strontium Titanate. Their Noise Equivalent Temperature Difference (NETD) is also superior. However, with the uncooled sensor a chopper can be used to provide temporal modulation of the scene. This permits AC coupling of the output of each pixel to remove the average background. This minimises the dynamic range requirements for the processing electronics and amplifies only the temperature differences. This is an advantage for resolving differences between cloud, the sun, the aircraft and the background. The advantage of differentiation between objects is that it reduces the load on subsequent image processing tasks for segmenting the aircraft from the background and other moving objects such as the clouds. Both cooled and uncooled thermal infrared imaging systems 6 have been used during day, night and foggy conditions. The system 6 produced consistent images of the aircraft in all these conditions, as shown in Figures 8 and 9. In particular, the sun in the field of view produced no saturation artefacts or flaring in the lens. At night, the entire aircraft was observable, not just the lights. The image processing system 8 uses a background subtraction method in an attempt to eliminate slowly moving or stationary objects from the image, leaving only the fast moving objects. This is achieved by maintaining a background image that is updated after a certain time interval elapses. The update is an incremental one based on the difference between the current image and the background. The incremental change is such that the background image can adapt to small intensity variations in the scene but takes some time to respond to large variations. The background image is subtracted from the current image, a modulus is taken and a threshold applied. The result is a binary image containing only those differences from the background that exceed the threshold.
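The background subtraction just described can be summarised with a short sketch. This is a minimal illustration only, assuming grey-scale frames held as NumPy arrays; the update rate alpha and the threshold are illustrative parameters, not values from the system described above.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    # Incremental update: the background adapts to small, slow intensity
    # variations but responds only gradually to large changes.
    return (1.0 - alpha) * background + alpha * frame.astype(np.float32)

def moving_feature_image(frame, background, threshold=20.0):
    # Subtract the background, take the modulus and apply a threshold to
    # obtain a binary image of differences that exceed the threshold.
    diff = np.abs(frame.astype(np.float32) - background)
    return (diff > threshold).astype(np.uint8)
```

As explained below, the same subtraction can be applied a second time to the resulting binary images to obtain better velocity selection.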
One problem with this method is that some slow moving features, such as clouds, still appear in the binary image. The reason for this is that the method does not select on velocity but on a combination of velocity and intensity gradients. If the intensity in the image is represented by I(x,y,t), where x and y represent the position in rows and columns, respectively, and t represents the image frame number (time), and if the variation in the intensity due to ambient conditions is very small, then it can be shown that the time variation of the intensity in the image due to a feature moving with velocity v is given by
∂I(x,y,t)/∂t = -v · ∇I(x,y,t) (7)
In practice, the time derivative in equation (7) is performed by taking the difference between the intensity at (x,y) at different times. Equation (7) shows that the value of this difference depends on the velocity v of the feature at (x,y) and the intensity gradient. Thus a fast moving feature with low contrast relative to the background can produce the same difference as a slow moving feature with a large contrast. This is the situation with slowly moving clouds, which often have very bright edges and therefore large intensity gradients there, and so are not eliminated by this method. Since all features in a binary image have the same intensity gradients, better velocity selection is obtained using the same method but applied to the binary image. In this sense, the background-subtraction method is applied twice, once to the original grey-scale image to produce a binary image as described above, and again to the subsequent binary image, as described below.
The output from the initial image processing hardware is a binary image B(x,y,t) where B(x,y,t) = 1 if a feature is located at (x,y) at time t, and B(x,y,t) = 0 represents the background. Within this image the fast moving features belong to the aircraft. To deduce the aircraft wing position, the two-dimensional binary image can be compressed into one dimension by summing along each pixel row of the binary image,
P(y,t) = Σx B(x,y,t) (8)
where the aircraft image moves in the direction of the image columns. This row-sum profile is easily analysed in real time to determine the location of the aircraft. An example of a profile is shown in Figure 10, where the two peaks 30 and 31 of the aircraft profile correspond to the main wings (large peak 30) and the tail wings (smaller peak 31). In general, there are other features present, such as clouds, that must be identified or filtered from the profile. To do this, differences between profiles from successive frames are taken, which is equivalent to a time derivative of the profile. Letting A(x,y,t) be the aircraft, where A(x,y,t) = 1 if (x,y) lies within the aircraft and 0 otherwise, and letting C(x,y,t) represent clouds or other slowly moving objects, then it can be shown that the time derivative of the profile is given by
[Equation (9), expressing the time derivative of the profile in terms of the changes in the aircraft feature A plus a small error term ε(C), is not reproduced.]
where ε(C) ≈ 0 is a small error term due to the small velocity of the clouds. Equation (9) demonstrates an obvious fact: the time derivative of a profile gives information on the changes (such as motion) of feature A only when the changes in A do not overlap features C. In order to obtain the best measure of the location of a feature, the overlap between features must be minimised. This means that C(x,y,t) must cover as small an area as possible. If the clouds are present but do not overlap the aircraft, then apart from a small error term, the time difference between profiles gives the motion of the aircraft. The difference profile corresponding to Figure 10 is shown in Figure 11, where the slow moving clouds have been eliminated. The wing positions occur at the zero-crossing points 33 and 34. Note that the clouds have been removed, apart from small error terms.
The method is implemented using a programmable logic circuit of the image processing system 8 which is programmed to perform the row sums on the binary image and to output these as a set of integers after each video field. When taking the difference between successive profiles, the best results were obtained using differences between like fields of the video image, i.e. even-even and odd-odd fields.
The difference profile is analysed to locate valid zero crossing points corresponding to the aircraft wing positions. A valid zero crossing is one in which the difference profile initially rises above a threshold Iτ for a minimum distance yτ and falls through zero to below -Iτ for a minimum distance yτ. The magnitude of the threshold Iτ is chosen to be greater than the error term ε(C), which is done to discount the effect produced by slow moving features, such as clouds.
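A sketch of the row-sum profile and the zero-crossing test described above is given below. It assumes the binary image is a NumPy array with the aircraft moving along the image columns; the parameters i_t and y_t stand in for Iτ and yτ, and the simple window test is only an approximation of the validity criterion described in the text.

```python
import numpy as np

def row_sum_profile(binary_image):
    # Compress the two-dimensional binary image into one dimension by
    # summing along each pixel row (the profile P(y,t)).
    return binary_image.sum(axis=1)

def wing_zero_crossing(prev_profile, profile, i_t=5, y_t=3):
    # The difference profile must exceed +i_t for at least y_t rows and then
    # fall below -i_t for at least y_t rows; the crossing row is taken as
    # the wing position.
    d = profile.astype(int) - prev_profile.astype(int)
    for y in range(y_t, len(d) - y_t):
        if (d[y - y_t:y] > i_t).all() and (d[y:y + y_t] < -i_t).all():
            return y
    return None  # no valid wing feature in this field
```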
In addition, the peak value of the profile, corresponding to the aircraft wing, can be obtained by summing the difference values when they are valid up to the zero crossing point. This method removes the contributions to the peak from the non-overlapping clouds. It can be used as a guide to the wing span of the aircraft.
The changes in position of the aircraft in the row-sum profile are used to determine a velocity for the aircraft that can be used for determining the image acquisition or trigger time, even if the aircraft is not in view. This situation may occur if the aircraft image moves into a region on the sensor that is saturated, or if the trigger point is not in the field of view of the camera 3. However, to obtain a reliable estimate of the velocity, geometric corrections to the aircraft position are required to account for the distortions in the image introduced by the camera lens. These are described below using the coordinate systems (x,y,z) for the image and (X,Y,Z) for the aircraft, as shown in Figures 12 and 13, respectively. For an aircraft at distance Z and at a constant altitude Y0, the angle from the horizontal to the aircraft in the vertical plane is
tan θ = Y0/Z (10)
Since Y0 is approximately constant, a normalised variable ZN = Z/Y0 can be used. If y0 is the coordinate of the centre of the images, f is the focal length of the lens and θc is the angle of the camera from the horizontal in the vertical plane, then
(y - y0)/f = tan(θ - θc) = (tan θ - tan θc)/(1 + tan θ tan θc) (11)
where the tangent has been expanded using a standard trigonometric identity. Using (10) and (11), an expression for the normalised distance ZN is obtained
ZN = (1 - β(y - y0) tan θc)/(β(y - y0) + tan θc) (12)
where β = 1/f. This equation allows a point in the image at y to be mapped onto a true distance scale, ZN. Since the aircraft altitude is unknown, the actual distance cannot be determined. Instead, all points in the image profile are scaled to be equivalent to a specific point, y1, in the profile. This point is chosen to be the trigger line or image acquisition line. The change in the normalised distance ZN(y1) at y1 due to an increment in pixel value Δy1 is ΔZN(y1) = ZN(y1 + Δy1) - ZN(y1). The number of such increments over a distance ZN(y2) - ZN(y1) is M = (ZN(y2) - ZN(y1))/ΔZN(y1). Thus the geometrically corrected pixel position at y2 is
y2' = y1 + M Δy1 (13)
For an aircraft at distance Z and at altitude Y0, a length X on it in the X direction subtends an angle in the horizontal plane of
tan φ = X/Z = XN/ZN (14)
where normalised values have been used. If x0 is the location of the centre of the image and f is the focal length of the lens, then
[Equation (15), relating the image coordinate x - x0 to the normalised distances XN and ZN, is not reproduced.]
Using (12), (14) and (15), the normalised distance XN can be obtained in terms of x and y
[Equation (16), giving XN in terms of x and y, is not reproduced.]
As with the y coordinate, the x coordinate is corrected to a value at y1. Since XN should be independent of position, a length x2 - x0 at y2 has a geometrically corrected length of
[Equation (17), giving the geometrically corrected length of x2 - x0, is not reproduced.]
The parameter β = 1/f is chosen so that x and y are measured in terms of pixel numbers. If y0 is the coordinate of the image centre and is equal to half the total number of pixels, and if θFOV is the vertical field of view of the camera, then
β = tan(θFOV/2)/y0 (18)
This relation allows β to be calculated without knowing the lens focal length and the dimensions of the sensor pixels.
The velocity of a feature is expressed in terms of the number of pixels moved between image fields (or frames). Then if the position of the feature in frame n is yn, the velocity is given by vn = yn - yn-1. Over N frames, the average velocity is then
v̄ = (1/N) Σn vn = (yN - y0)/N (19)
which depends only on the start and finish points of the data. This is sensitive to errors in the first and last values and takes no account of the positions in between. The error in the velocity due to an error δyN in the value yN is
δv̄ = δyN/N (20)
A better method of velocity estimation uses all the position data obtained between these values. A time t is maintained which represents the current frame number. Then the current position is given by y = y0 - vt (21) where y0 is the unknown starting point and v is the unknown velocity. A number n of valid positions yn of the feature are each measured at times tn. Minimising the mean square error
E = Σn (yn - y0 + v tn)² (22)
with respect to v and y0 gives two equations for the unknown quantities y0 and v. Solving for v yields
v = (Σtn Σyn - N Σtnyn)/(N Σtn² - (Σtn)²) (23)
This solution is more robust in the sense that it takes account of all the motions of the feature, rather than the positions at the beginning and at the end of the observations. If the time is sequential, so that tn = nΔt where Δt = 1 is the time interval between image frames, then the error in the velocity due to an error δyn in the value yn is
δv = 6(2n - N - 1) δyn/(N(N² - 1)) (24)
which, for the same error δyN as in (19), gives a smaller error than (20) for N > 5. In general, the error in (24) varies as 1/N², which is less sensitive to uncertainties in position than (19).
If the aircraft is not in view, then the measurement of the velocity v can be used to estimate the trigger time. If yl is the position of a feature on the aircraft that was last seen at a time tl, then the position at any time t is estimated from y = yl - v(t - tl) (25)
Based on this estimate of position, the aircraft will cross the trigger point located at yτ at a time tτ estimated by
tτ = tl + (yl - yτ)/v (26)
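The velocity fit and trigger-time extrapolation of equations (21) to (26) can be written compactly, as in the sketch below. The function names are illustrative only, and the sign convention assumes the feature position decreases as the aircraft approaches the trigger line, matching y = y0 - vt above.

```python
import numpy as np

def estimate_velocity(times, positions):
    # Least-squares fit of y = y0 - v*t to all valid measurements, which is
    # more robust than using only the first and last positions.
    t = np.asarray(times, dtype=float)
    y = np.asarray(positions, dtype=float)
    n = len(t)
    v = (t.sum() * y.sum() - n * (t * y).sum()) / (n * (t * t).sum() - t.sum() ** 2)
    y0 = (y.sum() + v * t.sum()) / n
    return v, y0

def trigger_time(y_last, t_last, v, y_trigger):
    # Extrapolate y = y_last - v*(t - t_last) to the trigger line, so the
    # trigger can still fire after the aircraft has left the field of view.
    return t_last + (y_last - y_trigger) / v
```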
An alternative method of processing the images obtained by the camera 3 to determine the aircraft position, which also automatically accounts for geometric corrections, is described below. The method is able to predict the time for triggering the acquisition system 10 based on observations of the position of the aircraft 28. To describe the location of an aircraft 28 and its position, a set of coordinates is defined such that the x axis points vertically upwards, the z axis points horizontally along the runway towards the approaching aircraft, and the y axis is horizontal and perpendicular to the runway. The image 66 of the aircraft is located in the digitised image by pixel values (xp,yp), where xp is defined to be the vertical pixel value and yp the horizontal value. The lens on the camera inverts the image so that a light ray from the aircraft strikes the sensor at position (-xp, -yp, 0), where the sensor is located at the coordinate origin. Figure 14 shows a ray 68 from an object, such as a point on the aircraft, passing through a lens of focal length f, and striking the imaging sensor at a point (-xp, -yp), where xp and yp are the pixel values. The equation locating a point on the ray is given by
[Equation (27), expressing the camera coordinates of a point on the ray in terms of xp, yp, f and z, is not reproduced.]
where z is the horizontal distance along the ray, and the subscript c refers to the camera coordinates. The camera zc axis is collinear with the lens optical axis. It will be assumed that z/f >> 1, which is usually the case.
Assuming the camera is aligned so that its yc axis is aligned with the runway y coordinate, but the camera is tilted from the horizontal by angle θ, then
[Equation (28), relating the camera coordinates to the runway coordinates through the tilt angle θ, is not reproduced.]
and a point on the ray from the aircraft to its image is given by
[Equation (29), combining (27) and (28) to give a point on the ray in runway coordinates, is not reproduced.]
Letting the aircraft trajectory be given by
(x(t), y(t), z(t)) = (x0 + z(t) tan θGS, y0, z(t)) (30)
where z(t) is the horizontal position of the aircraft at time t, θGS is the glide-slope angle, and the aircraft is at altitude x0 and has a lateral displacement y0 at z(t0) = 0. Here, t = t0 is the time at which the image acquisition system 10 is triggered, i.e. when the aircraft is overhead with respect to the cameras 7.
Comparing equations (29) and (30) allows z to be written in terms of z(t) and gives the pixel positions as
[Equations (31) and (32), giving the pixel positions xp(t) and yp(t) in terms of z(t), x0, y0, θ and f, are not reproduced.]
Since xp(t) is the vertical pixel coordinate and its value controls the acquisition trigger, the following discussion will be centred on equation (31). The aircraft position is given by z(t) = v(t0 - t) (33) where v is the speed of the aircraft along the z axis.
The aim is to determine t0 from a series of values of xp(t) at times t determined from the image of the aircraft. For this purpose, it is useful to rearrange (31) into the following form
[Equation (34), the rearranged form of (31), and the associated definitions of the parameters a, b and c are not reproduced.]
The pixel value corresponding to the trigger point vertically upwards is xτ = f cot θ. The trigger time, t0, can be expressed in terms of the parameters a, b and c
[Equation (38), expressing t0 in terms of a, b and c, is not reproduced.]
The parameters a, b and c are unknown since the aircraft glide slope, speed, altitude and the time at which the trigger is to occur are unknown. However, it is possible to estimate these using equation (34) by minimising the chi-square statistic. Essentially, equation (34) is a prediction of the relationship between the measured values xp and t, based on a simple model of the optical system of the detection camera 3 and the trajectory of the aircraft 28. The parameters a, b and c are to be chosen so as to minimise the error of the model fit to the data, i.e. to make equation (34) hold as closely as possible.
Let xn be the location of the aircraft in the image, i.e. its pixel value, obtained at time tn. Then the chi-square statistic is
[Equation (39), the chi-square statistic formed from the residuals of equation (34) over the N data points, is not reproduced.]
for N pairs of data points. The optimum values of the parameters are those that minimise the chi-square statistic, i.e. those that best satisfy equation (34).
For convenience, the following symbols are defined
[The symbol definitions of equation (40) are not reproduced.]
Then the values of a, b and c that minimise equation (39) are given by
[Equations (41) to (43), giving the values of a, b and c that minimise the chi-square statistic, are not reproduced.]
On obtaining a, b and c from equations (41) to (43), t0 can be obtained from equation (38).
Using data obtained from video images of an aircraft landing at Melbourne airport, a graph of aircraft image position as a function of image frame number is shown in Figure 15. The data was processed using equations (41) to (43) and (38) to yield the predicted value for the trigger frame number t0 = 66, corresponding to trigger point 70. The predicted point 70 is shown in Figure 16 as a function of frame number. The predicted value is t0 = 66 ± 0.5 after 34 frames. In this example, the aircraft can be out of the view of the camera 3 for up to 1.4 seconds and the system 2 can still trigger the acquisition camera 7 to within 40 milliseconds of the correct time. For an aircraft travelling at 62.5 m/s, the system 2 captures the aircraft to within 2.5 metres of the required position.
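The chi-square fit can be sketched as follows. Because the rearranged model of equation (34) is not reproduced above, the code assumes, purely for illustration, a form x = a + b*t + c*t*x that is linear in the three parameters; the actual system would use the rearrangement of equation (31) and equation (38) for t0.

```python
import numpy as np

def fit_trigger_frame(frames, pixels, x_trigger):
    # Least-squares estimate of a, b, c for the assumed model
    # x = a + b*t + c*t*x, followed by solving for the frame at which the
    # image position reaches the trigger pixel value x_trigger.
    t = np.asarray(frames, dtype=float)
    x = np.asarray(pixels, dtype=float)
    design = np.column_stack([np.ones_like(t), t, t * x])
    (a, b, c), *_ = np.linalg.lstsq(design, x, rcond=None)
    return (x_trigger - a) / (b + c * x_trigger)
```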
The tracking system 6, 8 may also use an Area-Parameter Accelerator (APA) digital processing unit, as discussed in International Publication No. WO 93/19441 , to extract additional information, such as the aspect ratio of the wing span to the fuselage length of the aircraft and the location of the centre of the aircraft.
The tracking system 1 can also be implemented using one or more pyroelectric sensors 27 with a signal processing wing detection unit 29. Each sensor 27 has two adjacent pyroelectric sensing elements 40 and 42, as shown in Figure 17, which are electrically connected so as to cancel identical signals generated by each element. A plate 44 with a slit 46 is placed above the sensing elements 40 and 42 so as to provide the elements 40 and 42 with different fields of view 48 and 50. The fields of view 48 and 50 are significantly narrower than the field of view of a detection camera discussed previously. If aircraft move above the runway in the direction indicated by the arrow 48, the first element 40 has a front field of view 48 and the second element 42 has a rear field of view 50. As an aircraft 28 passes over the sensor 27, the first element 40 detects the thermal radiation of the aircraft before the second element 42; the aircraft 28 will then be momentarily in both fields of view 48 and 50, and then only detectable by the second element 42. An example of the difference signals generated by two sensors 27 is illustrated in Figure 18, where graph 52 shows the signals for a sensor 27 whose field of view is directed at 90° to the horizontal and for a sensor 27 directed at 75° to the horizontal. Graph 54 is an expanded view of the centre of graph 52. The zero crossing points of peaks 56 in the graphs 52 and 54 correspond to the point at which the aircraft 28 passes the sensor 27. Using the known position of the sensor 27, the time at which the aircraft passes, and the speed of the aircraft 28, a time can be determined for generating an acquisition signal to trigger the high resolution acquisition cameras 7. The speed can be determined from movement of the zero crossing points over time, in a similar manner to that described previously.
The image acquisition system 10, as mentioned previously, acquires an image of the aircraft with sufficient resolution for the aircraft registration characters to be obtained using optical character recognition. According to one embodiment of the acquisition system 10, the system 10 includes two high resolution cameras 7 each comprising a lens and a CCD detector array. Respective images obtained by the two cameras 7 are shown in Figures 19 and 20.
The minimum pixel dimension and the focal length of the lens determine the spatial resolution in the image. If the dimension of a pixel is Lp, the focal length of the lens is f and the altitude of the aircraft is h, then the dimension of a feature Wmin on the aircraft that is mapped onto a pixel is
Wmin = Lp h/f (44)
The character recognition process used requires each character stroke to be mapped onto at least four pixels with contrast levels having at least 10% difference from the background. The width of a character stroke in the aircraft registration is regulated by the ICAO. According to the ICAO Report, Annex 7, sections 4.2.1 and 5.3, the character height beneath the port wing must be at least 50 centimetres and the character stroke must be 1/6th of the character height. Therefore, to satisfy the character recognition criterion, the dimension of the feature on the aircraft that is mapped onto a pixel should be Wmin = 2 centimetres, or less. Once the CCD detector is chosen, Lp is fixed and the focal length of the system 10 is determined by the maximum altitude of the aircraft at which the spatial resolution Wmin = 2 centimetres is required.
The field of view of the system 10 at altitude h is determined by the spatial resolution Wmin chosen at altitude hmax and the number of pixels Npl along the length of the CCD,
WFOV = Npl Wmin h/hmax (45)
For h = hmax and Npl = 1552, the field of view is WFOV = 31.04 metres.
To avoid blurring due to motion of the aircraft, the image must move a distance less than the size of a pixel during the exposure. If the aircraft velocity is v, then the time to move a distance equal to the required spatial resolution Wmin is
t = Wmin/v (46)
The maximum aircraft velocity that is likely to be encountered on landing or take-off is v = 160 knots = 82 m s⁻¹. With Wmin = 0.02 m, the exposure time to avoid excessive blurring is t < 240 μs.
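The sizing relationships above can be checked with a few lines of arithmetic. The 9 μm pixel pitch follows from the 81 μm² pixel area quoted later in the text, but the maximum altitude used here is an assumed value for illustration only.

```python
# Illustrative numbers; h_max is an assumed altitude, not a system figure.
L_p = 9e-6     # pixel dimension in metres (from the 81 square-micron pixel area)
h_max = 450.0  # assumed maximum aircraft altitude in metres
W_min = 0.02   # required spatial resolution on the aircraft (2 cm)
N_pl = 1552    # pixels along the length of the CCD
v = 82.0       # maximum aircraft speed in m/s (160 knots)

f = L_p * h_max / W_min   # focal length giving W_min at h_max (equation (44))
W_fov = N_pl * W_min      # field of view at h_max, 31.04 m (equation (45))
t_exp = W_min / v         # exposure limit, roughly 240 microseconds (equation (46))
print(f, W_fov, t_exp)
```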
The focal length of the lens in the system 10 can be chosen to obtain the required spatial resolution at the maximum altitude. This fixes the field of view.
Alternatively, the field of view may be varied by altering the focal length according to the altitude of the aircraft. The range of focal lengths required can be calculated from equation (44). The aircraft registration, during daylight conditions, is illuminated by sunlight or scattered light reflected from the ground. The aircraft scatters the light that is incident, some of which is captured by the lens of the imaging system. The considerable amount of light reflected from the aluminium fuselage of an aircraft can affect the image obtained, and is taken into account. The light power falling onto a pixel of the CCD is given by
[Equation (47), giving the light power Pp on a pixel in terms of Lλ, Δλ, Ωsun, Rgnd, RA, Ap and f#, is not reproduced.]
where Lλ is the solar spectral radiance, Δλ is the wavelength bandpass of the entire configuration, Ωsun is the solid angle subtended by the sun, Rgnd is the reflectivity of the ground, RA is the reflectivity of the aircraft, Ap is the area of a pixel in the CCD detector and f# is the lens f-number.
The solar spectral radiance Lλ varies markedly with wavelength λ. The power falling on a pixel will therefore vary over a large range. This can be limited by restricting the wavelength range Δλ passing to the sensor and optimally choosing the centre wavelength of this range. The optimum range and centre wavelength are chosen to match the characteristics of the imaging sensor.
In one embodiment, the optimum wavelength range and centre wavelength are chosen in the near infrared waveband, 0.69 to 2.0 microns. This limits the variation in light power on a pixel in the sensor to within the useable limits of the sensor. A KODAK™ KAF-1600L imaging sensor (a monolithic silicon sensor with lateral overflow anti-blooming) was chosen that incorporates a mechanism to accommodate a thousandfold saturation of each pixel, giving a total acceptable range of light powers in each pixel of 10⁵. This enables the sensor to produce a useful image of an aircraft when very bright light sources, for example the sun, are in its field of view.
The correct choice of sensor and the correct choice of wavelength range and centre wavelength enables an image to be obtained within a time interval that arrests the motion of the aircraft and that provides an image with sufficient contrast on the aircraft registration to enable digital image processing and recognition of the registration characters.
In choosing the wavelength range and centre wavelength, it was important to avoid dazzling light from the supplementary illumination of the illumination unit 16. The optimum wavelength range was therefore set to between 0.69 μm and 2.0 μm.
The power of sunlight falling onto a pixel directly from the sun is
[Equation (48), giving the power Pp-sun falling onto a pixel directly from the sun, is not reproduced.]
The ratio of the light powers from the sun and from the aircraft registration falling onto a single pixel is
Pp-sun/Pp = 2π/(Ωsun Rgnd RA) (49)
With Ωsun = 6.8 × 10⁻⁵ steradians, Rgnd ≈ 0.2 and RA = 1, the ratio is Pp-sun/Pp = 4.6 × 10⁵. This provides an estimate of the relative contrast between the image of the sun and the image of the underneath of the aircraft on a CCD pixel. The
CCD sensor and system electronics are chosen to accommodate this range of light powers. In poor lighting conditions, the aircraft registration requires additional illumination from the illumination unit 16. The light source of the unit 16 needs to be sufficient to illuminate the aircraft at its maximum altitude. If the source is designed to emit light into a solid angle that just covers the field of view of the imaging system then the light power incident onto a pixel of the imaging system 10 due to light emitted from the source and reflected from the aircraft is given by
Pp = Ps RA Ap/(8 f#² Nptot AA) (50)
where AA is the area on the aircraft imaged onto a pixel of area Ap, Ps is the light power of the source, RA is the aircraft reflectivity, Nptot is the total number of pixels in the CCD sensor and f# is the f-number of the lens. The power of the source required to match the daytime reflected illumination is estimated by setting Pp = 7.3 × 10⁻¹¹ W, RA = 1, Ap = 81 μm², Nptot = 1552 × 1032, f# = 1.8 and noting that AA = Wmin², where Wmin = 0.02 m. Then Ps = 1.50 × 10⁴ W. For a Xenon flash lamp, the flash time is typically t = 300 μs, which compares favourably with the exposure time required to minimise motion blurring. Then the source must deliver an energy of Es = Ps t = 4.5 J. This is the light energy in a wavelength band Δλ = 0.1 μm. Xenon flash lamps typically have 10% of their light energy within this bandpass, centred around λ = 0.8 μm. Furthermore, the flash lamp may only be 50% efficient. Thus the electrical energy required is approximately 90 J. Flash lamps that deliver energies of 1500 J in 300 μs are readily available. Illumination with such a flash lamp during the day reduces the contrast between the direct sun and the aircraft registration, thereby relaxing the requirement for over-exposure tolerance of the CCD sensor. This result depends on the flash lamp directing all of its energy into the field of view only, and on the lens focal length being optimally chosen to image the region of dimension Wmin = 0.02 m onto a single pixel.
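The flash energy budget quoted above follows directly from the figures in the text; the short calculation below simply reproduces that arithmetic.

```python
P_s = 1.50e4      # required in-band optical power, W
t_flash = 300e-6  # Xenon flash duration, s
E_band = P_s * t_flash        # 4.5 J required within the 0.1 micron band
E_light = E_band / 0.10       # about 10% of the lamp output falls in the band
E_electrical = E_light / 0.50 # about 50% lamp efficiency, giving roughly 90 J
print(E_band, E_light, E_electrical)
```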
In one embodiment, the aperture of the lens on the acquisition camera 7 is automatically adjusted to control the amount of light on the imaging sensor in order to optimise the image quality for digital processing. In the image obtained, the intensity level of the registration characters relative to the underside of the aircraft needs to be maintained to provide good contrast between the two for OCR. The power Ps of the flash 16 is automatically adjusted in accordance with the aperture setting f# of the acquisition camera 7 to optimise the image quality and maintain the relative contrast between the registration characters and the underside of the aircraft, in accordance with the relationship expressed in equation (50). For example, during the day the aperture of the lens may be very small and the power of the flash may be increased to provide additional illumination of the underside, whereas during night conditions the aperture may be fully opened and the power of the flash reduced considerably, as additional illumination is not required. As an alternative, or in addition, the electrical gain of the electronic circuits connected to an acquisition camera 7 is adjusted automatically to optimise the image quality.
To appropriately set the camera aperture and/or gain, one or more point optical sensors 60, 62, as shown in Figure 21, are used to measure the ambient lighting conditions. The electrical output signals of the sensors 60, 62 are processed by the acquisition system 10 to produce the information required to control the camera aperture and/or gain. Two point sensors 60, 62 sensitive to the same optical spectrum as the acquisition cameras 7 can be used. One sensor 60 receives light from the sky that passes through a diffusing plate 64 onto the sensor 60. The diffusing plate 64 collects light from many different directions and allows it to reach the sensor 60. The second sensor 62 is directed towards the ground to measure the reflected light from the ground. The high resolution images obtained of the aircraft by the acquisition system
10 are submitted, as described previously, to the analysis system 12 which performs optical character recognition on the images to extract the registration codes of the aircraft. The analysis system 12 processes the aircraft images obtained by a high resolution camera 7 according to an image processing procedure 100, as shown in Figure 22, which is divided into two parts 102 and 104. The first part 102 operates on a sub-sampled image 105, as shown in Figure 23, to locate regions that contain features that may be registration characters, whereas the second part 104 executes a similar procedure but does so using the full resolution of the original image and is executed only on the regions identified by the first part 102. The sub-sampled image 105 is the original image with one pixel in four removed in both row and column directions, resulting in a one in sixteen sampling ratio. Use of the sub-sampled image improves processing time sixteen-fold. The first part 102 receives the sub-sampled image at step 106 and filters the image at step 108 to remove features which are larger than the expected size of the registration characters. Step 108 executes a morphological operation of linear closings applied to a set of lines angled between 0 and 180°. The operation passes a kernel or window across the image 105 to extract lines which exceed a predetermined length and are at a predetermined angle. The kernel or window is passed over the image a number of times and each time the predetermined angle is varied. The lines extracted from all of the passes are then subtracted from the image 105 to provide a filtered difference image 109. The filtered difference image 109 is then thresholded or binarised at step 110 to convert it from a grey scale image to a binary scale image 111. This is done by setting to 1 all image values that are greater than a threshold and setting to 0 all other image values. The threshold at a given point in the image is determined from a specified multiple of standard deviations from the mean calculated from the pixel values within a window centred on the given point. The binarised image 111 is then filtered at step 112 to remove all features that have pixel densities in a bounding box that are smaller or larger than the expected pixel density for a bounded registration character. The image 111 is then processed at step 114 to remove all features which are not clustered together like registration characters. Step 114 achieves this by grouping together features that have similar sizes and that are close to one another. Groups of features that are smaller than a specified size are removed from the image to obtain a cleaned image 113. The cleaned image 113 is then used at step 116 to locate regions of interest. Regions of interest are obtained in step 116 from the location and extents of the groups remaining after step 114. Step 116 produces regions of interest which include the registration characters, and the areas of the regions are bounded above and below, as for the region 115 shown in Figure 23.
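One plausible reading of the filtering and thresholding of steps 108 and 110 is sketched below, assuming OpenCV and NumPy, dark registration characters on a brighter wing, and illustrative kernel lengths, angles and thresholds; it is not the original implementation, and a global threshold stands in for the windowed threshold described in the text.

```python
import cv2
import numpy as np

def line_kernel(length, angle_deg):
    # Binary structuring element approximating a line of the given length
    # and orientation.
    k = np.zeros((length, length), dtype=np.uint8)
    c = length // 2
    dx, dy = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    for s in range(-c, c + 1):
        k[int(round(c + s * dy)), int(round(c + s * dx))] = 1
    return k

def character_candidates(sub_sampled, length=31, angles=range(0, 180, 15)):
    # Linear closings at a set of angles fill in dark features smaller than
    # the expected character size in every orientation; subtracting the
    # original image then highlights those small features, and the
    # difference image is thresholded to a binary image.
    closed = None
    for a in angles:
        c = cv2.morphologyEx(sub_sampled, cv2.MORPH_CLOSE, line_kernel(length, a))
        closed = c if closed is None else np.minimum(closed, c)
    difference = cv2.subtract(closed, sub_sampled)
    m, s = cv2.meanStdDev(difference)
    thresh = float(m[0][0] + 2.0 * s[0][0])
    _, binary = cv2.threshold(difference, thresh, 255, cv2.THRESH_BINARY)
    return binary
```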
The regions of interest obtained by the first part 102 of the procedure 100 are further processed individually using the full resolution of the original image and the second part 104 of the procedure. The second part 104 takes a region of interest 115 from the original image at step 120 and for that region filters out features larger than the expected character sizes at step 122, using the same morphological operation of linear closings applied to a set of lines angled between 0 and 180°, followed by image subtraction, as described above, to obtain image 117. The filtered image 117 is then binarised at step 124 by selecting a filter threshold that is representative of the pixel values at the edges of features. To distinguish the registration characters from the aircraft wing or body, the filter threshold needs to be set correctly. A mask image of significant edges in image 117 is created by calculating edge-strengths at each point in image 117 and setting to 1 all points that have edge-strengths greater than a mask threshold and setting to 0 all other points. An edge-strength is determined by taking at each point the pixel gradients in two directions, Δx and Δy, and calculating √(Δx² + Δy²) to give the edge-strength at that point. The mask threshold at a given
point is determined from a specified multiple of standard deviations from the mean calculated from the edge-strengths within a window centred on the given point. The filter threshold for each point in image 117 is then determined from a specified multiple of standard deviations from the mean calculated from the pixel values at all points within a window centred on the given point that correspond to non-zero values in the mask image. The binarised image 118 is then filtered at step 126 to remove features that are smaller than the expected character sizes. Features are clustered together at step 128 that have similar sizes, that are near to one another and that are associated with similar image values in image 117. At step 130 the correctly clustered features that have sizes, orientations and relative positions that deviate too much from the averages for the clusters are filtered out to leave features that form linear chains. Then at step 132, if the number of features remaining in the image produced by step 130 is greater than 3, then a final image is created by rotating image 118 to align the linear chain of features with the image rows and by masking out features not belonging to the linear chain. The final image is passed to a character recognition process 200 to determine whether the features are registration characters and, if so, which characters.
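A condensed sketch of the edge-strength mask and threshold selection of step 124 is given below. It uses global statistics where the text uses windowed statistics, and the multiple k of the standard deviation is an illustrative parameter.

```python
import numpy as np

def edge_strength(image):
    # Pixel gradients in two directions combined as sqrt(dx^2 + dy^2).
    dy, dx = np.gradient(image.astype(np.float32))
    return np.sqrt(dx * dx + dy * dy)

def edge_masked_binarise(image, k=1.0):
    # Build a mask of significant edges, then choose the filter threshold
    # from the image values at those edge points only, so the statistics
    # are dominated by the character/background transitions rather than by
    # the bulk of the wing surface.  (The text computes both thresholds
    # within local windows; global statistics are used here for brevity.)
    e = edge_strength(image)
    mask = e > e.mean() + k * e.std()
    edge_values = image[mask].astype(np.float32)
    threshold = edge_values.mean() + k * edge_values.std()
    return (image > threshold).astype(np.uint8)
```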
The final image undergoes a standard optical character recognition process 200, as shown in Figure 24, to generate character string data which represents the ICAO characters on the port wing. The process 200 includes receiving the final image at step 202, which is produced by step 132 of the image processing procedure 100, and separating the characters of the image at step 204. The size of the characters is normalised at step 206, and at step 208 correction for the alignment of the characters is made and further normalisation occurs. Character features are extracted at step 210 and an attempt is made to classify the features of the characters extracted at step 212. Character rules are applied to the classified features at step 214 so as to produce a binary string representative of the registration characters at step 216.
Although the system 2 has been described above as being one which is particularly suitable for detecting an aircraft, it should be noted that many features of the system can be used for detecting and identifying other moving objects. For example, the embodiments of the tracking system 1 may be used for tracking land vehicles. The system 2 may be employed to acquire images of and identify automobiles at tollway points on a roadway.
Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention as hereinbefore described with reference to the accompanying drawings.


CLAIMS:
1. An object detection system including:
passive sensing means for receiving electromagnetic radiation from a moving object and generating intensity signals representative of the received radiation, and processing means for subtracting said intensity signals to obtain a differential signature representative of the position of said moving object.
2. An object detection system as claimed in claim 1 , wherein said processing means generates an image acquisition signal on the basis of said differential signature.
3. An object detection system as claimed in claim 2, including acquisition means, responsive to said acquisition signal, for acquiring an image of at least part of said moving object; and
analysis means for processing said image to identify said moving object.
4. An object detection system as claimed in claim 3, wherein said analysis system processes said acquired image to extract markings to identify said moving object.
5. An object detection system as claimed in claim 1 , 2, 3 or 4, wherein said moving object is an aircraft.
6. An object detection system as claimed in claim 5, wherein said aircraft is in flight.
7. An object detection system as claimed in claim 5, wherein said electromagnetic radiation is thermal radiation.
8. An object detection system as claimed in claim 5, including proximity detecting means for detecting the presence of said aircraft within a predetermined region and, in response thereto, generating an activation signal for said passive sensing means .
9. An object detection system as claimed in claim 1 , wherein said passive sensing means includes imaging means for obtaining images of said radiation at successive periods of time, and
said processing means generates respective profiles of pixel values for said images and a difference profile, generated from the difference between successive profiles, which includes said differential signature.
10. An object detection system as claimed in claim 9, wherein said position corresponds to a zero crossing point in said difference profile where said difference profile has risen above a first predetermined threshold for at least a first predetermined distance and then falls to below a second predetermined threshold for a second predetermined distance.
11. An object detection system as claimed in claim 10, wherein said processing means monitors the movement of said position in successive ones of said difference profile and determines the time for generation of an image acquisition signal .
12. An object detection system as claimed in claim 11 , wherein said processing means generates a background image from successive images obtained by said imaging means and subtracts said background image from images of said radiation before generating said profiles .
13. An object detection system as claimed in claim 1, wherein said passive sensing means includes pyroelectric sensors with different fields of view and said intensity signals include at least first and second signals representative of the thermal radiation present in said views, respectively, and said processing means subtracts said first and second signals to obtain a differential signal including said differential signature.
14. An object detection system as claimed in claim 13, wherein said processing means determines a time for generation of an image acquisition signal on the basis of the position of said passive sensing means, the time of generation of said differential signature and the speed of said moving object.
15. An object detection system as claimed in claim 14, wherein said time of generation and said speed are determined from a zero crossing point of said differential signature.
16. An image acquisition system including:
at least one camera for acquiring an image of at least part of a moving object, in response to a trigger signal, and
analysis means for processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process.
17. An image acquisition system as claimed in claim 16, wherein said camera images received radiation between 0.69 to 2.0 μm.
18. An image acquisition system as claimed in claim 17, wherein said camera has an exposure time of < 240 μs.
19. An image acquisition system as claimed in claim 17, including an infrared flash having its power adjusted on the basis of the aperture setting of said camera.
20. An image acquisition system as claimed in claim 17, including optical sensor means positioned to obtain measurements of ambient direct light and reflected light for the field of view of said camera and adjust settings of said camera on the basis of said measurements.
21. An image acquisition system as claimed in claim 16, wherein said analysis means sub-samples said image, extracts lines exceeding a predetermined length and at predetermined angles, binarises the image, removes features smaller or larger than said markings, removes features not clustered as said markings, and locates said region using the remaining features.
22. An image acquisition system as claimed in claim 21, wherein said analysis means extracts said region from said image and processes said region by removing features larger than expected marking sizes, binarising said region, removing features smaller than expected marking sizes, removing features not clustered as identifying markings, and passing the remaining image for optical recognition if including more than a predetermined number of markings.
23. An image acquisition system as claimed in claim 17, wherein said moving object is an aircraft.
24. An image acquisition system as claimed in claim 23, wherein said aircraft is in flight.
25. An object detection system as claimed in claim 2, including an image acquisition system as claimed in any one of claims 16 to 24, wherein said trigger signal is said image acquisition signal.
26. An object detection method including :
passively sensing electromagnetic radiation received from a moving object; generating intensity signals representative of the received radiation, and subtracting said intensity signals to obtain a differential signature representative of the position of said moving object.
27. An object detection method as claimed in claim 26, including generating an image acquisition signal on the basis of said differential signature.
28. An object detection method as claimed in claim 27, including acquiring an image of at least part of said moving object in response to said acquisition signal, and processing said image to identify said moving object.
29. An object detection method as claimed in claim 28, including processing said acquired image to extract markings identifying said moving object.
30. An object detection method as claimed in claim 26, 27, 28 or 29, wherein said moving object is an aircraft.
31. An object detection method as claimed in claim 30, wherein said aircraft is in flight.
32. An object detection method as claimed in claim 30, wherein said electromagnetic radiation is thermal radiation.
33. An object detection method as claimed in claim 30, including detecting the presence of said aircraft within a predetermined region and, in response thereto, generating an activation signal to execute said passive sensing step.
34. An object detection method as claimed in claim 26, wherein said passive sensing includes imaging said radiation at successive periods of time, and
said subtracting includes generating respective profiles of pixel values for images of said radiation and generating a difference profile, from the difference between successive profiles, which includes said differential signature.
35. An object detection method as claimed in claim 34, wherein said position corresponds to a zero crossing point in said difference profile where said difference profile has risen above a first predetermined threshold for at least a first predetermined distance and then falls to below a second predetermined threshold for a second predetermined distance.
36. An object detection method as claimed in claim 35, including monitoring the movement of said position in successive ones of said difference profile and determining the time for generation of an image acquisition signal.
37. An object detection method as claimed in claim 36, wherein said subtracting includes generating a background image from successive images of said radiation and subtracting said background image from images of said radiation before generating said profiles.
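One plausible form of the background generation in claim 37 is a running average, sketched below; the update weight `alpha` is an assumed value.

```python
import numpy as np

class BackgroundSubtractor:
    """Claim 37 sketch: maintain a slowly updated background image and
    subtract it from each new radiation image before profiling."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha        # assumed update weight, not from the patent
        self.background = None

    def apply(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()
        # Blend the new frame into the background so slow scene changes
        # (lighting, cloud) are absorbed while fast-moving aircraft are not.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return frame - self.background
```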
38. An object detection method as claimed in claim 26, wherein said passive sensing includes pyroelectric sensing with different fields of view and said intensity signals include at least first and second signals representative of the thermal radiation present in said views, respectively, and
said subtracting includes subtracting said first and second signals to obtain a differential signal including said differential signature.
39. An object detection method as claimed in claim 38, including determining a time for generation of an image acquisition signal on the basis of the position of passive sensors, the time of generation of said differential signature and the speed of said moving object.
40. An object detection method as claimed in claim 39, wherein said time of generation and speed are determined from a zero crossing point of said differential signature.
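For the pyroelectric arrangement of claims 38 to 40, the sketch below is one loose reading: the differential signal peaks when the aircraft is centred in the first field of view and crosses zero midway between the two, so the peak-to-crossing interval gives a speed estimate and the crossing time anchors the trigger calculation. Both distances are assumed site geometry, not figures from the specification.

```python
def pyro_trigger_time(t_peak, t_zero_cross,
                      lobe_to_midline_m, midline_to_camera_m):
    """Loose sketch of claims 38-40.  `t_peak` is when the differential
    signal peaks (aircraft centred in the first field of view) and
    `t_zero_cross` when it crosses zero (aircraft midway between fields);
    the two distances, projected at the aircraft's altitude, are assumed."""
    dt = t_zero_cross - t_peak
    if dt <= 0:
        return None
    speed = lobe_to_midline_m / dt                        # estimated speed, m/s
    return t_zero_cross + midline_to_camera_m / speed     # camera trigger instant
```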
41. An image acquisition method including:
acquiring an image of at least part of a moving object, in response to a trigger signal, using at least one camera, and
processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process.
42. An image acquisition method as claimed in claim 41, wherein said camera images received radiation between 0.69 and 2.0 μm.
43. An image acquisition method as claimed in claim 42, wherein said camera has an exposure time of < 240 μs.
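A rough, illustrative check of what such an exposure bound buys, assuming an aircraft speed of 80 m/s and an image scale of 3 cm per pixel (both hypothetical figures, not values from the specification):

```python
# Illustrative only: assumed speed and image scale, not figures from the patent.
speed_m_per_s = 80.0          # assumed aircraft ground speed during overflight
exposure_s = 240e-6           # upper bound from claim 43
pixel_scale_m = 0.03          # assumed size of one pixel at the aircraft

blur_m = speed_m_per_s * exposure_s      # ~0.019 m of motion during the exposure
blur_pixels = blur_m / pixel_scale_m     # ~0.64 pixel, so markings stay legible
print(f"motion blur ≈ {blur_m * 100:.1f} cm ≈ {blur_pixels:.2f} pixel")
```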
44. An image acquisition method as claimed in claim 42, including adjusting the power of an infrared flash for said camera on the basis of the aperture setting of said camera.
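Claim 44 links flash output to aperture; the sketch below uses the standard guide-number relation (guide number = f-number × subject distance, with light output scaling as the square of the guide number). The full-power guide number and the use of this particular relation are assumptions.

```python
def flash_power_fraction(f_number, subject_distance_m,
                         full_power_guide_number=160.0):
    """Claim 44 sketch: choose the flash power, as a fraction of full output,
    needed for the current aperture.  Required light scales with the square
    of the guide number GN = f-number * distance; the full-power guide
    number here is an assumed figure for a large infrared flash."""
    required_gn = f_number * subject_distance_m
    return min(1.0, (required_gn / full_power_guide_number) ** 2)
```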
45. An image acquisition method as claimed in claim 42, including obtaining automatic measurements of ambient direct light and reflected light for the field of view of said camera and adjusting settings of said camera on the basis of said measurements.
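Claim 45's automatic adjustment from ambient and reflected-light readings might, purely as an illustration, reduce to the standard reflected-light exposure equation N²/t = L·S/K with K ≈ 12.5; the ISO value and the weighting of the two readings below are assumptions.

```python
import math

def choose_aperture(reflected_reading, ambient_reading,
                    exposure_s=240e-6, iso=400, k=12.5, ambient_weight=0.3):
    """Claim 45 sketch: combine the reflected and ambient readings (both
    treated here as luminance estimates in cd/m^2, an assumption) and pick
    an aperture from N^2 / t = L * S / K.  ISO, weighting and K are assumed."""
    luminance = ((1 - ambient_weight) * reflected_reading
                 + ambient_weight * ambient_reading)
    n = math.sqrt(exposure_s * luminance * iso / k)
    return max(n, 1.0)   # never report an aperture wider than f/1.0
```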
46. An image acquisition method as claimed in claim 41, wherein said image processing includes sub-sampling said image, extracting lines exceeding a predetermined length and at predetermined angles, binarising the image, removing features smaller or larger than said markings, removing features not clustered as said markings, and locating said region using the remaining features.
47. An image acquisition method as claimed in claim 46, wherein said region processing includes extracting said region from said image, removing features larger than expected marking sizes, binarising said region, removing features with areas smaller or larger than expected marking areas, removing features not clustered as identifying markings, and passing the remaining image for optical recognition if including more than a predetermined number of markings.
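A companion sketch to the region-location code above, covering the region-processing steps of claim 47; as before, every numeric parameter is a hypothetical placeholder.

```python
import numpy as np
from scipy import ndimage

def extract_markings(region, min_area=40, max_area=1500, min_markings=4):
    """Claim 47 sketch: clean the located region so only character-sized,
    clustered features remain, and pass it on for optical recognition
    only if enough candidate markings survive."""
    region = region.astype(float)
    # Suppress features much larger than a character (panel edges, windows)
    # by subtracting a heavily smoothed copy of the region.
    detail = region - ndimage.uniform_filter(region, size=31)

    # Binarise the detail image.
    binary = np.abs(detail) > 2.0 * detail.std()

    # Keep only connected features whose area matches a plausible character.
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if min_area <= a <= max_area]
    cleaned = np.isin(labels, keep)

    # Hand the image to OCR only if enough marking-like features remain.
    return cleaned if len(keep) >= min_markings else None
```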
48. An image acquisition method as claimed in claim 42, wherein said moving object is an aircraft.
49. An image acquisition method as claimed in claim 48, wherein said aircraft is in flight.
50. An object detection method as claimed in claim 27, including an image acquisition method as claimed in any one of claims 41 to 49, wherein said trigger signal is said image acquisition signal.
PCT/AU1997/000198 1996-03-29 1997-03-27 An aircraft detection system WO1997037336A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
NZ332051A NZ332051A (en) 1996-03-29 1997-03-27 Detecting position of aircraft flying overhead for accurately recording registration markings
AU21438/97A AU720315B2 (en) 1996-03-29 1997-03-27 An aircraft detection system
EP97913985A EP0890161A4 (en) 1996-03-29 1997-03-27 An aircraft detection system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPN9032A AUPN903296A0 (en) 1996-03-29 1996-03-29 An aircraft detection system
AUPN9032 1996-03-29

Publications (1)

Publication Number Publication Date
WO1997037336A1 true WO1997037336A1 (en) 1997-10-09

Family

ID=3793346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU1997/000198 WO1997037336A1 (en) 1996-03-29 1997-03-27 An aircraft detection system

Country Status (9)

Country Link
EP (1) EP0890161A4 (en)
KR (1) KR20000005409A (en)
AU (1) AUPN903296A0 (en)
CA (1) CA2250927A1 (en)
ID (1) ID17121A (en)
NZ (1) NZ332051A (en)
TW (1) TW333614B (en)
WO (1) WO1997037336A1 (en)
ZA (1) ZA972699B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160070384A (en) 2014-12-10 2016-06-20 박원일 System for detecting flying object by thermal image monitoring
WO2020236575A1 (en) * 2019-05-17 2020-11-26 Photon-X, Inc. Spatial phase integrated wafer-level imaging
CN111401370B (en) * 2020-04-13 2023-06-02 城云科技(中国)有限公司 Garbage image recognition and task assignment management method, model and system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990001706A2 (en) * 1988-08-08 1990-02-22 Hughes Aircraft Company Signal processing for autonomous acquisition of objects in cluttered background
GB2227589A (en) * 1989-01-30 1990-08-01 Image Recognition Equipment Co Bar code reading system
US5134472A (en) * 1989-02-08 1992-07-28 Kabushiki Kaisha Toshiba Moving object detection apparatus and method
US5243418A (en) * 1990-11-27 1993-09-07 Kabushiki Kaisha Toshiba Display monitoring system for detecting and tracking an intruder in a monitor area
US5406501A (en) * 1990-12-21 1995-04-11 U.S. Philips Corporation Method and device for use in detecting moving targets
WO1993019441A1 (en) * 1992-03-20 1993-09-30 Commonwealth Scientific And Industrial Research Organisation An object monitoring system
WO1993021617A1 (en) * 1992-04-16 1993-10-28 Traffic Technology Limited Vehicle monitoring apparatus
EP0686943A2 (en) * 1994-06-08 1995-12-13 Matsushita Electric Industrial Co., Ltd. Differential motion detection method using background image
WO1996012265A1 (en) * 1994-10-14 1996-04-25 Airport Technology In Scandinavia Ab Aircraft identification and docking guidance systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0890161A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1170715A3 (en) * 2000-07-04 2003-01-29 H.A.N.D. GmbH Method for surface surveillance
EP1170715A2 (en) * 2000-07-04 2002-01-09 H.A.N.D. GmbH Method for surface surveillance
EP1187083A2 (en) * 2000-09-08 2002-03-13 Zapfe, Hans, Dipl.-Ing.; PA Method and arrangement to monitor take-off and landing of aircrafts
EP1187083A3 (en) * 2000-09-08 2003-05-07 Zapfe, Hans, Dipl.-Ing.; PA Method and arrangement to monitor take-off and landing of aircrafts
US10585185B2 (en) 2017-02-03 2020-03-10 Rohde & Schwarz Gmbh & Co. Kg Security scanning system with walk-through-gate
CN111316340A (en) * 2017-06-05 2020-06-19 Wing航空有限责任公司 Method and system for sharing unmanned aerial vehicle system database throughout airspace across multiple service providers
US11488484B2 (en) 2017-06-05 2022-11-01 Wing Aviation Llc Methods and systems for sharing an airspace wide unmanned aircraft system database across a plurality of service suppliers
CN111316340B (en) * 2017-06-05 2023-03-07 Wing航空有限责任公司 Method and system for sharing unmanned aerial vehicle system database throughout airspace across multiple service providers
CN110702723A (en) * 2018-07-09 2020-01-17 浙江清华柔性电子技术研究院 Imaging system and method for high-temperature wind tunnel
CN113474788A (en) * 2019-07-23 2021-10-01 东洋制罐株式会社 Image data processing system, unmanned aerial vehicle, image data processing method and non-temporary storage computer readable memory medium
CN113474788B (en) * 2019-07-23 2024-02-13 东洋制罐株式会社 Image data processing system, unmanned plane, image data processing method and non-temporary computer readable memory medium
CN113281341A (en) * 2021-04-19 2021-08-20 唐山学院 Detection optimization method of double-sensor surface quality detection system of hot-dip galvanized strip steel
CN115656478A (en) * 2022-11-14 2023-01-31 中国科学院、水利部成都山地灾害与环境研究所 Seepage-proofing shearing test device for simulating ice particle circulating shearing and using method

Also Published As

Publication number Publication date
AUPN903296A0 (en) 1996-04-26
EP0890161A1 (en) 1999-01-13
TW333614B (en) 1998-06-11
ID17121A (en) 1997-12-04
NZ332051A (en) 2000-05-26
KR20000005409A (en) 2000-01-25
CA2250927A1 (en) 1997-10-09
EP0890161A4 (en) 1999-06-16
ZA972699B (en) 1997-11-18

Similar Documents

Publication Publication Date Title
US5974158A (en) Aircraft detection system
CN104364673B (en) Use the gated imaging of the adaptive depth of field
EP1989681B1 (en) System for and method of synchronous acquisition of pulsed source light in performance of monitoring aircraft flight operation
EP2272027B1 (en) Multispectral enhanced vision system and method for aircraft landing in inclement weather conditions
US6420704B1 (en) Method and system for improving camera infrared sensitivity using digital zoom
US6678395B2 (en) Video search and rescue device
EP0890161A1 (en) An aircraft detection system
US10902630B2 (en) Passive sense and avoid system
CN107300722A (en) A kind of foreign body detection system for airfield runway
Sudhakar et al. Imaging Lidar system for night vision and surveillance applications
EP0597715A1 (en) Automatic aircraft landing system calibration
Forster et al. Ice crystal characterization in cirrus clouds: a sun-tracking camera system and automated detection algorithm for halo displays
WO2022071893A1 (en) A system for optimising runway capacity on an airport runway and a method thereof
AU720315B2 (en) An aircraft detection system
AU760788B2 (en) An image acquisition system
CN104010165B (en) Precipitation particles shadow image automatic acquisition device
US10257472B2 (en) Detecting and locating bright light sources from moving aircraft
Roberts et al. Suspended sediment concentration estimation from multi-spectral video imagery
Morris et al. Sensing for hov/hot lanes enforcement
EP3428686B1 (en) A vision system and method for a vehicle
EP3447527A1 (en) Passive sense and avoid system
Lemoff et al. Automated night/day standoff detection, tracking, and identification of personnel for installation protection
Daley et al. Detection of vehicle occupants in HOV lanes: exploration of image sensing for detection of vehicle occupants
EP0297665A2 (en) Radiation source detection
US20230243698A1 (en) Hyperspectral imaging systems

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN YU AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 332051

Country of ref document: NZ

ENP Entry into the national phase

Ref document number: 2250927

Country of ref document: CA

Kind code of ref document: A

Ref document number: 2250927

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 1019980708141

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 1997913985

Country of ref document: EP

NENP Non-entry into the national phase

Ref document number: 97534744

Country of ref document: JP

WWP Wipo information: published in national office

Ref document number: 1997913985

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1019980708141

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 1997913985

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1019980708141

Country of ref document: KR