WO2003021967A2 - Image fusion systems - Google Patents

Image fusion systems

Info

Publication number
WO2003021967A2
Authority
WO
WIPO (PCT)
Prior art keywords
images
correspondence
image
points
image sequence
Prior art date
Application number
PCT/GB2002/003949
Other languages
French (fr)
Other versions
WO2003021967A3 (en)
Inventor
Andrew Mark Peacock
Original Assignee
Icerobotics Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Icerobotics Limited
Priority to AU2002326020A1
Publication of WO2003021967A2
Publication of WO2003021967A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/14
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/38 Registration of image sequences
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

Apparatus and method for automatically combining the output of multiple image sequence acquisition devices, each capturing images in a different medium, into a single image sequence for display or for further processing by a machine vision system. Specifically, the apparatus of the present invention provides for registration of images based on the moving portions of the images to be fused, as opposed to current systems where the points of correspondence are determined from static features on the images. The apparatus of the present invention may find application in areas such as medical diagnosis, CCTV surveillance systems, security alarm systems, fire fighting, automatic inspection, surveying, avionics and wildlife watching.

Description

Image Fusion Systems
The present invention relates to an apparatus and method for identifying points of correspondence between multiple image sequences captured by multiple image acquisition devices each acquiring images in a different medium, such as different parts of the electromagnetic spectrum or sound waves.
Multi-sensor image fusion is the combination of images from sensors sensitive to different physical phenomena. The fused image can provide greater information than the individual images, and as such multi-sensor image fusion is an increasingly important research area with many applications including robotics, medical imaging, manufacturing, defence and remote sensing.
An important stage in image fusion is the coregistration of images from the different sensors. A popular coregistration approach is to identify points of correspondence (POC) between the different sensor images and use these to determine the parameters of the chosen registration transform. These points of correspondence are typically found by looking for similar features in the different images, such as intensity contours. Local or global correlation methods are often used in this process.
A disadvantage of this approach to coregistration is that, for many combinations of sensors, there is little correlation between the images they form, making it difficult to identify enough points of correspondence to fuse the images. The approach is thus limited because, typically, the less the correlation between the individual images, the greater the benefit likely to be gained by fusing them.
It is an object of at least one embodiment of the present invention to provide apparatus and method for registering images which obviates or mitigates the disadvantages in the prior art.
It is a further object of at least one embodiment of the present invention to provide apparatus and method for registering images which, by virtue of the activity in a scene, uses the interframe differences found at similar locations in the images as points of correspondence to coregister the images.
According to a first aspect of the present invention, there is provided apparatus for automatically registering images from a plurality of image sequence acquisition devices each acquiring images in a different medium to form a single image sequence, the apparatus comprising means for combining the images by finding or locating points of correspondence using non-static regions that appear in at least two of the images.
Preferably the media are selected from a group comprising any region of the electromagnetic spectrum and sound. In a preferred embodiment the media comprise visible and infrared.
Preferably the apparatus further comprises means for building region maps from the at least two images.
Preferably also the apparatus further comprises means for overlapping the region maps. Additionally the apparatus may further comprise means for matching region markers which are close to each other as points of correspondence to coregister the images.
Advantageously the apparatus further comprises means to fuse the coregistered images into a single image sequence.
According to a second aspect of the present invention there is provided an imaging system comprising a plurality of image sequence acquisition devices each acquiring images in a different medium, image registration means for combining the images by finding or locating points of correspondence using non-static regions that appear in at least two of the images, image fusion means for fusing the images using the points of correspondence and image display means for displaying the fused single image sequence. At least one of the plurality of image sequence acquisition devices may be a passive device. Additionally at least one of the plurality of image sequence acquisition devices may be an active device.
Advantageously the plurality of image sequence acquisition devices comprise at least two sensors. Preferably the two or more sensors are selected from a group comprising video cameras, thermal infra red cameras, radar, millimetre wave radar, ground penetrating radar, ultrasound, near-infrared and ultraviolet.
Where the two or more sensors include video cameras, the video cameras may be of any recognised format, for example CCIR format colour video cameras.
Where the two or more sensors include radar, the radar may be millimetre wave radar or ground penetrating radar.
Preferably the image display means is a standard colour monitor, such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD). Alternatively the image display means is a television, projector or head-up display system.
Preferably the imaging system includes processing means for further processing the fused image. The further processing means may carry out an automatic function. The automatic function may be quality inspection, motion detection or the setting of an intruder alarm.
Preferably, the at least two sensors directly output images in digital format. According to a third aspect of the present invention there is provided a method of registering images comprising the steps of:
(a) finding or locating points of correspondence in multiple image sequences, the image sequences being captured by multiple image sequence acquisition devices each acquiring images in a different medium; and
(b) combining the outputs of the multiple image sequence acquisition devices into a single image sequence;
characterised in that the points of correspondence are obtained using non-static regions that appear in two or more of the multiple images.
Preferably the points of correspondence are identified by building Interest Region maps from the multiple images. Additionally, the corresponding regions may be identified from the Interest Region maps. Preferably also, the corresponding region maps are overlapped and Interest Markers which are close to each other are matched as Points of Correspondence.
According to a fourth aspect of the present invention there is provided a method of finding points of correspondence in multiple images, each acquired in a different medium, using non-static regions that appear in all of the multiple images, by:
(a) acquiring multiple images using image acquisition means;
(b) building Interest Region maps from the multiple images;
(c) identifying corresponding regions from the Interest Region maps of different images;
(d) overlapping the registered region maps from the different images; and
(e) matching Interest Markers which are close to each other as Points of Correspondence.
An example embodiment of the invention will now be described with reference to the accompanying figures, in which:
Figure 1 illustrates apparatus for combining the output of multiple image sequence acquisition devices for display or further processing;
Figure 2 illustrates an example implementation of the apparatus of the present invention having a two camera image fusion device in a typical urban environment;
Figure 3 is a flow chart illustrating the process for finding points of correspondence from different images in a two sensor implementation;
Figure 4 illustrates the first stage in the process described, wherein different region maps are generated from the image sequences; and
Figure 5 shows matching of coregistered Interest Region maps.

The apparatus of the present invention automatically combines the output of multiple image sequence acquisition devices into a single image sequence for display or for further processing by a machine vision system. The apparatus enables more information to be provided in a single image sequence than can be provided by any of the individual image sequence acquisition devices alone. Thus the devices are sensitive to different physical phenomena and capture the images in different media. Specifically, the apparatus of the present invention provides for registration of 2D multispectral images based on the moving portions of the images to be fused, as opposed to current systems where the points of correspondence are determined from static positions on the images. The apparatus of the present invention may find application in areas such as medical diagnosis, CCTV surveillance systems, security alarm systems, fire fighting, automatic inspection, surveying, aviation and wildlife watching.
Figure 1 illustrates an apparatus for combining the output of multiple image sequence acquisition devices 1 into a single image sequence which can be displayed at 2. The process involves a coregistration stage 3 and a fusion/combination stage 4.
The image sequence acquisition device described in the present invention comprises image acquisition means, means to coregister the images, means to combine or fuse the images into a single image and means to output the fused image sequence for display or further processing. Specifically, the image sequence acquisition devices comprise sensors that can form two-dimensional image sequences of the scenes that they are exposed to and can output the image sequences as a digital representation.
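To make the structure concrete, a minimal Python sketch of this acquire/coregister/fuse/output pipeline follows. It is an illustration only: the callables passed in (grab_frames, coregister, fuse) are hypothetical stand-ins for the acquisition, coregistration and fusion means described here, not anything defined by the patent.

```python
# Minimal sketch of the acquire -> coregister -> fuse -> output pipeline
# (coregistration stage 3 and fusion/combination stage 4 of Figure 1).
from typing import Callable, Iterator, List
import numpy as np

def fused_sequence(grab_frames: Callable[[], List[np.ndarray]],
                   coregister: Callable[[List[np.ndarray]], List[np.ndarray]],
                   fuse: Callable[[List[np.ndarray]], np.ndarray]) -> Iterator[np.ndarray]:
    """Yield a single fused image sequence from several sensor sequences."""
    while True:
        frames = grab_frames()          # one frame per acquisition device
        aligned = coregister(frames)    # registration via points of correspondence
        yield fuse(aligned)             # combined frame for display or machine vision
```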
In a preferred embodiment there is one primary sensor and one or more secondary sensors. The example embodiment described herein uses a CCIR format colour video camera as the primary sensor and an NTSC thermal infrared camera as the secondary sensor. The embodiment also uses a digitiser to convert the CCIR/NTSC format images to a digital representation.
The sensors may be passive, as in the example embodiment described herein, or alternatively may be active sensors such as radar. Examples of alternative image sequence acquisition devices include Millimetre Wave Radar, Ground Penetrating Radar, Ultrasound, Near-Infrared and Ultraviolet. An important aspect is that the images taken do not need to be the same size. In addition, rather than using external means to digitise the images, the sensor system may directly output images in digital format.
Typically, the imaging devices will be positioned so that they overlook/observe and acquire images from the same target scene. Referring to Figure 2, primary 5 and secondary 6 imaging devices, in this case cameras, are attached to a pylon 7 and positioned to overlook a typical urban scene 8. The cameras are at a similar height and point in approximately the same direction. Although the individual cameras must overlook the same scene, the cameras may be at physically different locations and may look at the scene from different elevations, rotations and distances. As in the example embodiment, the images may be at different resolutions and it is therefore necessary to coregister the images before they can be combined.
Registration is achieved by means of any suitable image transform. This transform may be a global transform such as the well known Projective Transform or a local transform such as the well known Elastic Deformation Transform. The Projective Transform is achieved by an electronic system that implements the following 8 parameter relationship between the input co-ordinates $\mathbf{x} = (x, y)^T$ and the output co-ordinates $\mathbf{x}' = (x', y')^T$:

$$x' = \frac{a_1 x + a_2 y + a_3}{a_7 x + a_8 y + 1}, \qquad y' = \frac{a_4 x + a_5 y + a_6}{a_7 x + a_8 y + 1}$$

where $a_i$, $i = 1, \ldots, 8$, are the transform parameters. As the transform parameters are likely to change infrequently and slowly, they can be calculated in parallel more slowly than real time. The parameters of the chosen transform are determined from Points of Correspondence between the individual images. The projective transform has 8 parameters and so requires 4 Points of Correspondence to be specified (each point pair yields two equations). Given more than 4 Points of Correspondence, the well known Least Squares approach can be used to automatically determine the parameters.
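As an illustration, the Least Squares determination of the eight parameters from four or more points of correspondence can be sketched as follows in Python. This is a sketch assuming numpy, not the patent's own implementation; the function names are illustrative.

```python
# Hedged sketch: direct least-squares estimate of the projective-transform
# parameters a1..a8 from N >= 4 points of correspondence, using the relation
# x' = (a1 x + a2 y + a3) / (a7 x + a8 y + 1), and similarly for y'.
import numpy as np

def fit_projective(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """src, dst: (N, 2) arrays of matched (x, y) points, N >= 4. Returns a1..a8."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # Cross-multiplying the projective relation gives two linear equations
        # in a1..a8 per point of correspondence.
        rows.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); rhs.append(xp)
        rows.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); rhs.append(yp)
    a, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return a

def apply_projective(a: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map (N, 2) points through the fitted transform."""
    x, y = pts[:, 0], pts[:, 1]
    w = a[6] * x + a[7] * y + 1.0
    return np.stack([(a[0] * x + a[1] * y + a[2]) / w,
                     (a[3] * x + a[4] * y + a[5]) / w], axis=1)
```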
There is often very little correlation between images formed by different imaging sensors. However, activity in a scene is likely to yield interframe differences in similar locations. If these locations can be identified, then they can be used as points of correspondence (POC) to coregister the images. The process of finding points of correspondence from difference images is illustrated for a two sensor embodiment in Figure 3. Referring to the flow chart of Figure 3, it can be seen that the process of finding points of correspondence between image sequences can be achieved by building Difference Region maps through frame differencing, thresholding and region growing, finding an approximate transform, and then matching regions from the difference maps.
The first stage of the process is therefore to build Interest Region Maps from the image sequences. This stage comprises 3 steps which can be seen in Figure 4. Referring firstly to Figure 4a, in the example embodiment two reference images 9 and 10 are taken from the image sequence and the Interest Map is calculated by taking, for each pel in the image, the absolute difference between the earlier frame pel intensity and the later frame pel intensity. Next, according to Figure 4b, a threshold operator is applied to remove small differences, which can be attributed to noise in the sequence, thus yielding a binary image 11. Finally, referring to Figure 4c, a region growing operator is applied to find all regions in the binary image and the centres of gravity (average pel location) of the regions are calculated; these are termed Interest Markers 12. This stage is applied to all the image sequences and it is important that the reference images from each sequence are taken at the same time points or very close together so that the Interest Maps correspond to the same time period. Alternatively a time stamp may be attached to each interest marker, and only markers with similar time stamps in the different image sequences allowed to be points of correspondence. As well as, or instead of, using difference images, moving objects in image sequences may be tracked over time and their positions at each frame recorded as interest markers.
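A minimal sketch of this first stage follows, assuming grey-level input frames and scipy/numpy; the noise threshold value is illustrative, not taken from the patent.

```python
# Hedged sketch of the Interest Region Map stage: frame differencing,
# thresholding and region growing (connected-component labelling), then
# centres of gravity as Interest Markers.
import numpy as np
from scipy import ndimage

def interest_markers(frame_a: np.ndarray, frame_b: np.ndarray,
                     noise_threshold: float = 20.0) -> np.ndarray:
    """frame_a, frame_b: two grey-level reference images from one sequence.
    Returns an (N, 2) array of (row, col) Interest Marker positions."""
    # Step 1 (Fig. 4a): per-pel absolute difference -> Interest Map.
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    # Step 2 (Fig. 4b): threshold out small, noise-attributable differences.
    binary = diff > noise_threshold
    # Step 3 (Fig. 4c): grow regions and take each region's centre of gravity.
    labels, n_regions = ndimage.label(binary)
    centres = ndimage.center_of_mass(binary, labels, range(1, n_regions + 1))
    return np.array(centres)
```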
The second stage of the process to find points of correspondence is to identify corresponding regions from the Interest Region Maps of different image sequences. This is achieved by assuming that the registration between the difference maps can be approximated by a global transform such as the projective transform. The parameters $\psi$ that minimise the distance between the Interest Markers in one sequence $X_T$ and their closest neighbours (under the transform) in the second sequence $X_R$ are identified. This is achieved by an electronic system that minimises the following equation:
$$\psi^{*} = \arg\min_{\psi} \sum_{x \in X_T} \min_{x_R \in X_R} \left\| f(x; \psi) - x_R \right\|$$
where $f(x; \psi)$ is the chosen transform of the point $x$ using parameters $\psi$ and $\|\cdot\|$ denotes the distance. $X_T$ is chosen to be the image sequence with the smallest number of regions, or either sequence if both yield the same number of regions. In the example embodiment the distance measure is the Cartesian distance and the electronic system conducts an exhaustive search of the quantised transform parameter space. The search is constrained given that limits of the difference in translation/rotation/scaling can easily be identified. The level of quantisation of the parameter space is a compromise between speed and precision of the coregistration. This stage can only be carried out once sufficient interest markers have been identified in the individual images to determine the parameters of the chosen transform. To improve robustness, time stamps may be attached to interest markers so that only interest markers with similar time stamps may be points of correspondence.
In the example embodiment, the transform was chosen to be a rotation by $\theta$ and a translation by $b = (b_x, b_y)^T$; thus there are three parameters $\psi = \{b_x, b_y, \theta\}$. The search was constrained to horizontal/vertical translations of ±50 pels in 2 pel increments and rotations of ±10° in 1° increments.
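An illustrative Python sketch of this constrained exhaustive search follows (numpy assumed; the brute-force triple loop simply mirrors the quantisation just described).

```python
# Hedged sketch of the exhaustive search over the quantised rotation/translation
# space (psi = {bx, by, theta}), scoring each candidate by the summed distance
# from transformed markers in X_T to their closest neighbours in X_R.
import numpy as np

def transform(points: np.ndarray, bx: float, by: float, theta_deg: float) -> np.ndarray:
    """Rotate (N, 2) points by theta_deg and translate by (bx, by)."""
    t = np.radians(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return points @ rot.T + np.array([bx, by])

def search_psi(x_t: np.ndarray, x_r: np.ndarray):
    """x_t, x_r: (N, 2) and (M, 2) Interest Marker arrays. Returns best (bx, by, theta)."""
    best, best_cost = None, np.inf
    for bx in range(-50, 51, 2):              # +/- 50 pels in 2 pel increments
        for by in range(-50, 51, 2):
            for theta in range(-10, 11):      # +/- 10 degrees in 1 degree increments
                moved = transform(x_t, bx, by, theta)
                # Distance from each transformed marker to its closest neighbour.
                d = np.linalg.norm(moved[:, None, :] - x_r[None, :, :], axis=2)
                cost = d.min(axis=1).sum()
                if cost < best_cost:
                    best, best_cost = (bx, by, theta), cost
    return best
```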
In the third and final stage of the process to find Points of Correspondence, the registered region maps from the different image sequences 13 and 14 as illustrated in Figure 5a are overlapped as shown in Figure 5b. Interest Markers 15 that appear close to each other are matched as being Points of Correspondence, as illustrated in Figure 5c. By taking multiple samples over time, a number of Points of Correspondence can be recorded to improve accuracy. These can then be used to determine the transform parameters.

Some difference regions may not appear in all of the image sequences due to being out of sight of, or invisible to, some sensors. Such regions can be identified as having no matching regions, and hence discarded, as illustrated in Figure 5. Where one region is very close to two or more regions from a coregistered image, which may be caused by problems in thresholding the Interest Map, the multiple matches can be averaged to give a single region, as illustrated in Figure 5.

The next step in the process is the fusion process, in which the coregistered images are combined and further processed, for example by using information from corresponding areas in multiple images to control an alarm system or by combining the different images into a single image. In the fusion step, information of interest to the application is preserved from the individual images. In the example embodiment presented here, an RGB colour visual image sequence is combined with a monochrome Thermal INFRARED image sequence and an RGB colour fused image is created by an electronic device implementing a fixed relationship between its input and output, where $r_F$ ($r_V$), $g_F$ ($g_V$) and $b_F$ ($b_V$) are the red, green and blue intensity components of the Fused (Visual) pel respectively and $m_{IR}$ is the monochrome intensity of the Thermal INFRARED pel. This relationship has the effect of making the fused image appear similar to the colour visual image but with hot objects slightly red and cold objects slightly blue.
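By way of illustration only, a minimal Python sketch of one such red/blue-biasing fusion rule follows. The specific form here, adding the IR deviation from its mean to the red channel and subtracting it from the blue, is an assumption chosen to match the described effect; it is not the patent's exact equation, which is not reproduced in the source text. numpy is assumed.

```python
# Hedged sketch of a visual/thermal-IR fusion rule matching the described
# effect (fused image similar to the visual one, hot objects pushed towards
# red, cold objects towards blue). The exact rule here is an assumption.
import numpy as np

def fuse_rgb_ir(rgb: np.ndarray, ir: np.ndarray, gain: float = 0.25) -> np.ndarray:
    """rgb: (H, W, 3) uint8 visual frame; ir: (H, W) uint8 coregistered IR frame."""
    fused = rgb.astype(float)
    ir_signed = ir.astype(float) - ir.mean()   # hot pels > 0, cold pels < 0
    fused[..., 0] += gain * ir_signed          # red channel up where hot
    fused[..., 2] -= gain * ir_signed          # blue channel up where cold
    return np.clip(fused, 0, 255).astype(np.uint8)
```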
The fused image sequence output by the device is in a format suitable for an image display device or for further processing by a machine vision system. The image display device used in the example embodiment described here is a standard colour monitor. Examples of alternative means of display include television, projectors and head-up display systems. Examples of further processing systems include automatic quality inspection and automatic motion detection.
The advantage of the present invention lies in the fact that there is provided an apparatus and method for automatically finding points of correspondence between images formed by sensors that are sensitive to different physical phenomena, wherein the technique assumes that temporal differences in the image sequences are likely to occur in similar places, rather than relying on correlation of static features in the different images. Example results from a thermal INFRARED/visual image application demonstrate that this technique can successfully be applied to find points of correspondence in areas of motion in the scene. These are the areas of most interest in some applications, such as surveillance.
A further advantage of the present invention lies in the fact that the apparatus and method can be used to identify points of correspondence to coregister images using images formed by different imaging sensors where there is very little correlation.
Further modifications and improvements may be incorporated without departing from the scope of the invention herein intended.

Claims

Claims:
1. Apparatus for automatically registering images from a plurality of image sequence acquisition devices each acquiring images in a different medium to form a single image sequence, the apparatus comprising means for combining the images by finding or locating points of correspondence using non-static regions that appear in at least two of the images.
2. Apparatus as claimed in Claim 1 further comprising means for building region maps from the at least two images.
3. Apparatus as claimed in Claim 2 further comprising means for overlapping the region maps.
4. Apparatus as claimed in Claim 3 further comprising means for matching region markers which are close to each other as points of correspondence to coregister the images.
5. Apparatus as claimed in Claim 4 further comprising means to fuse the coregistered images into a single image sequence.
6. Apparatus as claimed in any preceding Claim wherein the media are selected from a group comprising any region of the electromagnetic spectrum and sound.
7. An imaging system comprising a plurality of image sequence acquisition devices each acquiring images in a different medium, image registration means for combining the images by finding or locating points of correspondence using non-static regions that appear in at least two of the images, image fusion means for fusing the images using the points of correspondence and image display means for displaying the fused single image sequence.
8. An imaging system as claimed in Claim 7 wherein at least one of the plurality of image sequence acquisition devices is a passive device.
9. An imaging system as claimed in Claim 7 or Claim 8 wherein at least one of the plurality of image sequence acquisition devices is an active device.
10. An imaging system as claimed in any one of Claims 7 to 9 wherein the plurality of image sequence acquisition devices comprise at least two sensors.
11. An imaging system as claimed in Claim 10 wherein the two or more sensors are selected from a group comprising video cameras, thermal infra red cameras, radar, millimetre wave radar, ground penetrating radar, ultrasound, near-infrared and ultraviolet.
12. An imaging system as claimed in any one of Claims 7 to 11 wherein the image display means is a standard monitor, such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD).
13. An imaging system as claimed in any one of Claims 7 to 11 wherein the image display means is a television.
14. An imaging system as claimed in any one of Claims 7 to 11 wherein the image display means is a projector .
15. An imaging system as claimed in any one of Claims 7 to 11 wherein the image display means is a head-up display system.
16. An imaging system as claimed in any one of Claims 7 to 15 having processing means for further processing the fused image.
17. An imaging system as claimed in Claim 16 wherein the further processing means carries out an automatic function.
18. An imaging system as claimed in any one of Claims 8 to 15 wherein the at least two sensors directly output images in digital format.
19. A method of registering images comprising the steps of:
a) finding or locating points of correspondence in multiple image sequences, the image sequences being captured by multiple image sequence acquisition devices each acquiring images in a different medium; and
b) combining the outputs of the multiple image sequence acquisition devices into a single image sequence;
characterised in that the points of correspondence are obtained using non-static regions that appear in two or more of the multiple images.
20. A method as claimed in Claim 19 wherein the points of correspondence are identified by building Interest Region maps from the multiple images.
21. A method as claimed in Claim 19 or Claim 20 wherein the corresponding regions are identified from the Interest Region maps.
22. A method as claimed in any one of Claims 19 to 21 wherein the corresponding region maps are overlapped and Interest Markers which are close to each other are matched as Points of Correspondence.
23. A method of finding points of correspondence in multiple images, each acquired in a different medium, using non-static regions that appear in all of the multiple images, by:
a) acquiring multiple images using image acquisition means;
b) building Interest Region maps from the multiple images;
c) identifying corresponding regions from the Interest Region maps of different images;
d) overlapping the registered region maps from the different images; and
e) matching Interest Markers which are close to each other as Points of Correspondence.
PCT/GB2002/003949 2001-09-04 2002-08-27 Image fusion systems WO2003021967A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002326020A AU2002326020A1 (en) 2001-09-04 2002-08-27 Image fusion systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0121370.1A GB0121370D0 (en) 2001-09-04 2001-09-04 Image fusion systems
GB0121370.1 2001-09-04

Publications (2)

Publication Number Publication Date
WO2003021967A2 true WO2003021967A2 (en) 2003-03-13
WO2003021967A3 WO2003021967A3 (en) 2003-06-19

Family

ID=9921478

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2002/003949 WO2003021967A2 (en) 2001-09-04 2002-08-27 Image fusion systems

Country Status (3)

Country Link
AU (1) AU2002326020A1 (en)
GB (1) GB0121370D0 (en)
WO (1) WO2003021967A2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1303432C (en) * 2003-06-05 2007-03-07 上海交通大学 Remote sensing image picture element and characteristic combination optimizing mixing method
CN1313972C (en) * 2003-07-24 2007-05-02 上海交通大学 Image merging method based on filter group
CN100410684C (en) * 2006-02-23 2008-08-13 复旦大学 Remote sensing image fusion method based on Bayes linear estimation
WO2008141753A1 (en) * 2007-05-24 2008-11-27 Daimler Ag Method for object recognition
WO2009045478A1 (en) * 2007-10-03 2009-04-09 Searete Llc Vasculature and lymphatic system imaging and ablation
US8165663B2 (en) 2007-10-03 2012-04-24 The Invention Science Fund I, Llc Vasculature and lymphatic system imaging and ablation
US8285367B2 (en) 2007-10-05 2012-10-09 The Invention Science Fund I, Llc Vasculature and lymphatic system imaging and ablation associated with a reservoir
US8285366B2 (en) 2007-10-04 2012-10-09 The Invention Science Fund I, Llc Vasculature and lymphatic system imaging and ablation associated with a local bypass
CN103576127A (en) * 2012-07-18 2014-02-12 地球物理测勘系统有限公司 Merged ground penetrating radar display for multiple antennas
EP2312936B1 (en) 2008-07-15 2017-09-06 Lely Patent N.V. Dairy animal treatment system
CN111340746A (en) * 2020-05-19 2020-06-26 深圳应急者安全技术有限公司 Fire fighting method and fire fighting system based on Internet of things

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5265172A (en) * 1989-10-13 1993-11-23 Texas Instruments Incorporated Method and apparatus for producing optical flow using multi-spectral images
WO2000073995A2 (en) * 1999-06-01 2000-12-07 Microsoft Corporation A system and method for tracking objects by fusing results of multiple sensing modalities

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5265172A (en) * 1989-10-13 1993-11-23 Texas Instruments Incorporated Method and apparatus for producing optical flow using multi-spectral images
WO2000073995A2 (en) * 1999-06-01 2000-12-07 Microsoft Corporation A system and method for tracking objects by fusing results of multiple sensing modalities

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GERN A ET AL: "Advanced lane recognition-fusing vision and radar" INTELLIGENT VEHICLES SYMPOSIUM, 2000. IV 2000. PROCEEDINGS OF THE IEEE DEARBORN, MI, USA 3-5 OCT. 2000, PISCATAWAY, NJ, USA,IEEE, US, 3 October 2000 (2000-10-03), pages 45-51, XP010528911 ISBN: 0-7803-6363-9 *
NIKOU C ET AL: "Robust voxel similarity metrics for the registration of dissimilar single and multimodal images" PATTERN RECOGNITION, PERGAMON PRESS INC. ELMSFORD, N.Y, US, vol. 32, no. 8, August 1999 (1999-08), pages 1351-1368, XP004169483 ISSN: 0031-3203 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1303432C (en) * 2003-06-05 2007-03-07 上海交通大学 Remote sensing image picture element and characteristic combination optimizing mixing method
CN1313972C (en) * 2003-07-24 2007-05-02 上海交通大学 Image merging method based on filter group
CN100410684C (en) * 2006-02-23 2008-08-13 复旦大学 Remote sensing image fusion method based on Bayes linear estimation
WO2008141753A1 (en) * 2007-05-24 2008-11-27 Daimler Ag Method for object recognition
WO2009045478A1 (en) * 2007-10-03 2009-04-09 Searete Llc Vasculature and lymphatic system imaging and ablation
US8165663B2 (en) 2007-10-03 2012-04-24 The Invention Science Fund I, Llc Vasculature and lymphatic system imaging and ablation
US8285366B2 (en) 2007-10-04 2012-10-09 The Invention Science Fund I, Llc Vasculature and lymphatic system imaging and ablation associated with a local bypass
US8285367B2 (en) 2007-10-05 2012-10-09 The Invention Science Fund I, Llc Vasculature and lymphatic system imaging and ablation associated with a reservoir
EP2312936B1 (en) 2008-07-15 2017-09-06 Lely Patent N.V. Dairy animal treatment system
CN103576127A (en) * 2012-07-18 2014-02-12 地球物理测勘系统有限公司 Merged ground penetrating radar display for multiple antennas
EP2687867A3 (en) * 2012-07-18 2014-08-13 Geophysical Survey Systems, Inc. Merged Ground Penetrating Radar Display for Multiple Antennas
US8957809B2 (en) 2012-07-18 2015-02-17 Geophysical Survey Systems, Inc. Merged ground penetrating radar display for multiple antennas
CN111340746A (en) * 2020-05-19 2020-06-26 深圳应急者安全技术有限公司 Fire fighting method and fire fighting system based on Internet of things

Also Published As

Publication number Publication date
AU2002326020A1 (en) 2003-03-18
WO2003021967A3 (en) 2003-06-19
GB0121370D0 (en) 2001-10-24

Similar Documents

Publication Publication Date Title
US11006104B2 (en) Collaborative sighting
EP2913796B1 (en) Method of generating panorama views on a mobile mapping system
CN104052938B (en) Apparatus and method for the multispectral imaging using three-dimensional overlay
US7321386B2 (en) Robust stereo-driven video-based surveillance
US7366359B1 (en) Image processing of regions in a wide angle video camera
US8848035B2 (en) Device for generating three dimensional surface models of moving objects
CN103688292B (en) Image display device and method for displaying image
US20090015674A1 (en) Optical imaging system for unmanned aerial vehicle
US20090079830A1 (en) Robust framework for enhancing navigation, surveillance, tele-presence and interactivity
CN110910460B (en) Method and device for acquiring position information and calibration equipment
US20180089972A1 (en) System and method for surveilling a scene comprising an allowed region and a restricted region
US9418299B2 (en) Surveillance process and apparatus
WO2003021967A2 (en) Image fusion systems
KR20160078724A (en) Apparatus and method for displaying surveillance area of camera
CN106846385B (en) Multi-sensing remote sensing image matching method, device and system based on unmanned aerial vehicle
JP2005217883A (en) Method for detecting flat road area and obstacle by using stereo image
EP3845922A1 (en) Calibration system for combined depth and texture sensor
JP2002288637A (en) Environmental information forming method
CN110572576A (en) Method and system for shooting visible light and thermal imaging overlay image and electronic equipment
JP2007011776A (en) Monitoring system and setting device
KR102473804B1 (en) method of mapping monitoring point in CCTV video for video surveillance system
US20180053304A1 (en) Method and apparatus for detecting relative positions of cameras based on skeleton data
CN110726407A (en) Positioning monitoring method and device
JP2017011598A (en) Monitoring system
JP2003179930A (en) Method and apparatus for extracting dynamic object

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA US UZ VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP