US20070263941A1 - Method and system for removing path radiance effects from an image - Google Patents


Info

Publication number
US20070263941A1
Authority
US
United States
Prior art keywords
image
path radiance
correction term
spectral
carried out
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/431,755
Inventor
Casey Smith
Richard Friedhoff
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tandent Computer Vision LLC
Original Assignee
Tandent Vision Science Inc
Application filed by Tandent Vision Science Inc filed Critical Tandent Vision Science Inc
Priority to US11/431,755
Assigned to TANDENT VISION SCIENCE, INC. (Assignors: SMITH, CASEY ARTHUR; FRIEDHOFF, RICHARD MARK)
Priority to PCT/US2007/011249 (published as WO2007133607A2)
Priority to US11/801,384 (issued as US7672537B2)
Publication of US20070263941A1
Assigned to TANDENT COMPUTER VISION LLC (Assignor: TANDENT VISION SCIENCE, INC.)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/273: Segmentation of patterns in the image field; removing elements interfering with the pattern to be recognised
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G06V10/248: Aligning, centring, orientation detection or correction of the image by interactive preprocessing or interactive shape modelling, e.g. feature points assigned by a user

Definitions

  • Step 102 (the selection of matched lit/shadow pairs) can be implemented by the user interactively clicking on portions of the subject image file 18 as displayed on the monitor 20.
  • Preferably, several pairs (more than two) of lit/shadow pixels for n different materials are selected by the user, to improve the accuracy of the path radiance correction.
  • The accuracy and correctness of the characteristic ratio for an image is improved by determining spectral ratio information for illumination boundaries on a local level; that is, a characteristic spectral ratio is determined for each of several preselected local areas of a scene depicted in an image.
  • Alternatively, step 102 can be implemented automatically, via a technique for determining illumination flux in an image, as taught in the co-pending application Ser. No. 11/341,742.
  • In step 104, the CPU 12 calculates a path radiance correction term in a manner that minimizes the differences between characteristic spectral ratios across the entire image.
  • That is, the CPU 12 determines an optimized value for kλ that renders the spectral ratios as similar to one another as possible.
  • The user can assign a confidence weight to each selected material. The confidence weight reflects the user's confidence that a selected shadow is indeed across a single material, and can be set as a value selected from a scale of, for example, 0 to 1.
  • Each two-material set can be designated by reference numerals i, j, wherein i represents material M1, and j represents material M2 of each selected pair.
  • Thus, each material will have an associated weight, wi and wj for materials Mi and Mj, respectively.
  • A candidate path radiance correction term can be calculated for each pair of selected materials, using the two-material relationship for kλ; each such term from the resulting list of possible values can be designated as kλij.
  • An overall value for kλ can then be determined as a mean or median of the kλij values, or through use of a standard mean shift procedure.
  • For example, as a weighted mean: kλ = (Σ over i≠j of combine(wi, wj) · kλij) / (Σ over i≠j of combine(wi, wj)).
  • A combine(wi, wj) function is used because each kλij is based upon two different materials, each having a weight assigned by the user.
  • The combine function can be defined in terms of a minimum, a product or an average:
  • combine(wi, wj) = min(wi, wj);
  • combine(wi, wj) = wi * wj;
  • combine(wi, wj) = (wi + wj)/2.
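The weighted-mean determination of kλ from the pairwise kλij values might be sketched as follows; this is an illustrative Python sketch (names such as `weighted_k` and the `(bright, dark, weight)` tuples are assumptions, not from the patent), with min as the default combine function:

```python
from itertools import combinations

def pairwise_k(b1, d1, b2, d2):
    """Two-material path radiance estimate k_ij for one wave band."""
    return (d2 * b1 - d1 * b2) / (b1 - b2 + d2 - d1)

def weighted_k(materials, combine=min):
    """materials: list of (bright, dark, weight) readings for one band.
    Returns the combine-weighted mean of all pairwise k_ij estimates."""
    num = den = 0.0
    for (b1, d1, w1), (b2, d2, w2) in combinations(materials, 2):
        w = combine(w1, w2)  # min, product, or average of the two weights
        num += w * pairwise_k(b1, d1, b2, d2)
        den += w
    return num / den

# Three materials whose readings all embed the same true k of 12:
mats = [(132.0, 52.0, 1.0), (252.0, 92.0, 0.8), (192.0, 72.0, 0.5)]
print(round(weighted_k(mats), 6))  # → 12.0
```

Any of the three combine variants listed above can be passed in place of min, e.g. `combine=lambda a, b: (a + b) / 2` for the average.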
  • Alternatively, the CPU 12 can be operated to define kλ more rigorously than provided by the above-described simple mean, median or mean shift approaches on the pairwise-calculated kλij.
  • A more rigorous approach can be based upon a formal definition of dissimilarity, so as to find a kλ value that renders the spectral ratios as similar as possible.
  • In a first approach, a sum-squared difference is utilized: the sum of the squared differences between spectral ratios of the image should be as small as possible.
  • In a second approach, a sum absolute difference is utilized: the sum of the absolute differences between spectral ratios should be as small as possible.
  • A third approach to the definition of dissimilarity involves a minimization of the maximum absolute distance between any two spectral ratios. In that case, the selection of kλ is set to minimize: max over i≠j of combine(wi, wj) * |Si − Sj|, where Si and Sj are the spectral ratios of materials i and j computed with the candidate kλ subtracted.
  • Upon establishing a definition of dissimilarity, the CPU 12 can proceed to execute a method to determine a kλ for the image, as a function of that definition.
  • a search technique can be implemented so that the CPU 12 can search for a value of k ⁇ that minimizes the dissimilarity. For example, in a linear search, a range of values for k ⁇ is tested to determine a value with a minimum dissimilarity, as expressed by one of the definitions of dissimilarity described above. The search is bounded by minimum and maximum values for k ⁇ , and conducted through a range of m intervals, between the minimum and maximum values of k ⁇ . A dissimilarity for the spectral ratios is calculated for each value of k ⁇ in the range, and the value for k ⁇ with the minimum dissimilarity is selected. The search can be repeated for a range above and below the selected k ⁇ from the previous iteration. The number of iterations can be fixed at a predetermined number, or until a predetermined level of accuracy is reached.
  • Initial values for the maximum and minimum bounds for k ⁇ can correspond to the maximum and minimum values for k ⁇ in the list of possible values calculated by the CPU 12 , as described above.
  • the maximum value for k ⁇ can be set equal to the minimum measured image value for the current wavelength ⁇ , anywhere in the image and the minimum value for k ⁇ set at 0.
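The iterative linear search just described might look like the following outline, using the sum-squared definition of dissimilarity; the confidence weights are omitted for brevity, and all names are illustrative assumptions rather than identifiers from the patent:

```python
def dissimilarity(k, materials):
    """Sum-squared difference between all pairwise spectral ratios,
    after subtracting candidate path radiance k from each reading."""
    ratios = [(d - k) / ((b - k) - (d - k)) for b, d, _w in materials]
    return sum((ratios[i] - ratios[j]) ** 2
               for i in range(len(ratios))
               for j in range(i + 1, len(ratios)))

def search_k(materials, k_min, k_max, intervals=20, iterations=4):
    """Iterated linear search: scan [k_min, k_max] in `intervals` steps,
    keep the k with minimum dissimilarity, narrow the range, repeat."""
    best = k_min
    for _ in range(iterations):
        step = (k_max - k_min) / intervals
        candidates = [k_min + i * step for i in range(intervals + 1)]
        best = min(candidates, key=lambda k: dissimilarity(k, materials))
        k_min, k_max = max(best - step, 0.0), best + step  # narrow around best
    return best

mats = [(132.0, 52.0, 1.0), (252.0, 92.0, 0.8), (192.0, 72.0, 0.5)]
print(round(search_k(mats, 0.0, 52.0), 2))  # converges near the true k of 12
```

Here the upper bound of 52.0 is the minimum measured image value, per the alternative initialization described above.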
  • In step 106, the CPU 12 utilizes the calculated kλ for each wavelength, from step 104, to correct each pixel of the image file 18.
  • For an RGB image, there is a correction term for each of the Red, Green and Blue bands of each pixel; a multi-spectral image may have more than three bands.
  • The path radiance is removed from each pixel by subtracting kλ from each wave band of each pixel: for all i, λ: Piλ ← Piλ − kλ, where Piλ is the value of pixel i in wave band λ. If a correction causes Piλ to be less than 0, then the pixel value is set to 0.
  • In step 108, the CPU 12 outputs the image with path radiance removed.
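The per-pixel correction of steps 106 and 108 amounts to a clamped per-band subtraction. An illustrative sketch, using nested lists as the pixel array (the function name and data layout are assumptions for illustration):

```python
def remove_path_radiance(image, k):
    """image: rows of pixels, each pixel a tuple of per-band values.
    k: per-band correction terms. Subtracts k[b] from band b of every
    pixel, clamping negative results to 0, per steps 106-108."""
    return [[tuple(max(band - kb, 0) for band, kb in zip(pixel, k))
             for pixel in row]
            for row in image]

img = [[(120, 80, 60), (10, 5, 200)]]   # one row, two RGB pixels
print(remove_path_radiance(img, (12, 8, 30)))
# → [[(108, 72, 30), (0, 0, 170)]]
```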

Abstract

In a first exemplary embodiment of the present invention, an automated, computerized method for manipulating an image is provided. The method of the present invention comprises the steps of selecting matched shadow/lit pairs of image portions from each of separate, different materials depicted in the image, utilizing the selected matched pairs to determine spectral ratio information for the image, calculating a path radiance correction term as a function of the spectral ratio information and utilizing the path radiance correction term to manipulate the image, to remove path radiance.

Description

    BACKGROUND OF THE INVENTION
  • Images of earth-based objects are routinely captured by satellites in orbit around earth. These images are used in significant and often critical scientific, military and intelligence operations and studies. Typically, the orbiting satellites transmit captured images to earth-based stations for review and study. However, images of earth-based objects, received from a satellite transmission, are often distorted due to effects of the atmosphere. Light reflected from an earth-based object must travel through a thick layer of relatively dense atmosphere before being received by a sensor on board an orbiting satellite. The air in the atmosphere, as well as substances suspended in the atmosphere, such as water droplets and dust, can scatter light that is captured by a sensor on board the satellite. For example, light from the sun can illuminate a microscopic dust particle, which then reflects light back to the satellite-based sensor. The light measured at each sensor location (and thus, each pixel of the image) includes the light reflected from the surface of the earth, and all of the light scattered towards the sensor from particles in the path between the earth surface and the orbiting satellite.
  • Extra illumination at a sensor, caused by light scattered back from particles in the atmosphere, is referred to as path radiance. Path radiance can occur whenever an imaged object is a significant distance from the sensor, and the medium between the object and the sensor is not a vacuum. The removal of path radiance effects is an important objective of designers of systems that involve capturing images from a large distance, such as the design of satellite transmission systems.
  • SUMMARY OF THE INVENTION
  • The present invention utilizes a constancy of a characteristic spectral ratio for an image to facilitate a correction of path radiance effects in an image.
  • In a first exemplary embodiment of the present invention, an automated, computerized method for manipulating an image is provided. The method of the present invention comprises the steps of selecting matched shadow/lit pairs of image portions from each of separate, different materials depicted in the image, utilizing the selected matched pairs to determine spectral ratio information for the image, calculating a path radiance correction term as a function of the spectral ratio information and utilizing the path radiance correction term to manipulate the image, to remove path radiance.
  • In a second exemplary embodiment of the present invention, an automated, computerized method for manipulating an image is provided. The method of the present invention comprises the steps of utilizing a relationship of equality for spectral information of the image to calculate a path radiance correction term, and utilizing the path radiance correction term to manipulate the image, to remove path radiance.
  • In a third exemplary embodiment of the present invention, a computer system comprises a CPU and a memory storing an image file. Pursuant to a feature of the present invention, the computer system is arranged and configured to execute a routine to utilize a relationship of equality for spectral information of the image to calculate a path radiance correction term; and utilize the path radiance correction term to manipulate the image, to remove path radiance.
  • In accordance with yet further embodiments of the present invention, computer systems are provided, which include one or more computers configured (e.g., programmed) to perform the methods described above. In accordance with other embodiments of the present invention, computer readable media are provided which have stored thereon computer executable process steps operable to control a computer(s) to implement the embodiments described above. The automated, computerized methods can be performed by a digital computer, analog computer, optical sensor, state machine, sequencer or any device or apparatus that can be designed or programmed to carry out the steps of the methods of the present invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a computer system arranged and configured to perform operations related to images.
  • FIG. 2 shows an n×m pixel array image file for an image stored in the computer system of FIG. 1.
  • FIG. 3 is a flow chart for performing a path radiance correction for satellite imagery, according to a feature of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Referring now to the drawings, and initially to FIG. 1, there is shown a block diagram of a computer system 10 arranged and configured to perform operations related to images. A CPU 12 is coupled to a device such as, for example, a receiver 14 via, for example, a USB port. The receiver 14 operates to receive image transmissions from a distant source, such as, for example, an orbiting satellite 14 a via an antenna 14 b. The satellite 14 a comprises a sensor such as a camera operated to capture images of the surface of the earth, transform those images into digital image files, and transmit the digital images to the receiver 14, as is generally known. The receiver 14 then operates to download the images to the CPU 12. The CPU 12 stores the downloaded images in a memory 16 as image files 18. The image files 18 can be accessed by the CPU 12 for display on a monitor 20, or for print out on a printer 22. The CPU 12 can be equipped with a real time operating system for real time operations relating to images.
  • As shown in FIG. 2, each image file 18 comprises an n×m pixel array. Each pixel, p, is a picture element corresponding to a discrete portion of the overall image. All of the pixels together define the image represented by the image file 18. Each pixel comprises a digital value corresponding to a set of color bands, for example, red, green and blue color components (RGB) of the picture element. The present invention is applicable to any multi-band image, where each band corresponds to a piece of the electromagnetic spectrum. The present invention can also be utilized in connection with a grayscale image (a single band). The pixel array includes m columns of n rows each, starting with the pixel p (1,1) and ending with the pixel p(n, m). When displaying or printing an image, the CPU 12 retrieves the corresponding image file 18 from the memory 16, and operates the monitor 20 or printer 22, as the case may be, as a function of the digital values of the pixels in the image file 18, as is generally known.
  • In an image operation, the CPU 12 operates to analyze the RGB values of the pixels of a stored image file 18 to achieve various objectives, such as, for example, manipulation of the image to remove the effects of path radiance. As recognized by the teachings of the present invention, if the imaged surface is approximately a constant distance away from the satellite sensor, and the atmosphere between the sensor and the imaged surface is reasonably homogeneous (e.g., an even distribution of dust and humidity), then the path radiance contributes a constant term to the total radiance measured by the satellite sensor. Therefore, there is a constant value that, when subtracted from each pixel of a recorded image, changes each pixel so as to reflect the actual radiance of an object or surface without the interference of an intervening medium such as the atmosphere.
  • According to a feature of the present invention, the CPU 12 is operated to ascertain a constant value that represents the effect of path radiance. In general, the constant value comprises a separate scalar value for each band in an image. For example, in an RGB image, there will be a first constant for the Red band of the image, a separate second constant for the Green band and a third for the Blue band. The present invention is based upon a constancy of a characteristic spectral ratio for an image, and utilizes this constancy to ascertain path radiance-correcting information for each wave band of the image.
  • As taught in co-pending application Ser. No. 11/341,742, filed on Jan. 27, 2006, entitled: “Method and System For Identifying Illumination Flux In An Image,” which is hereby incorporated by reference, an image comprises two components, material and illumination. Moreover, as further taught in the co-pending Application, an illumination flux impinging on a material depicted in an image comprises an ambient illuminant and an incident illuminant. The incident illuminant is light that causes a shadow and is found outside a shadow perimeter. The ambient illuminant is light present on both the bright and dark sides of a shadow, but is more perceptible within the dark region of a shadow.
  • Spectra for the incident illuminant and the ambient illuminant can be different from one another. A spectral shift caused by a shadow, i.e., a decrease of the intensity of the incident illuminant, will be substantially invariant over different materials present in a scene depicted in an image when the scene is illuminated by a common illumination flux. Thus, the spectral shift caused by a shadow can be expressed by a spectral ratio of colors across an illumination boundary defined by a shadow on a material. Inasmuch as an illumination boundary is caused by the interplay between the incident illuminant and the ambient illuminant, spectral ratios throughout the image that are associated with illumination change should be consistently and approximately equal, regardless of the color of the bright side or the material object characteristics of the boundary. A characteristic spectral ratio for a particular image or scene within an image is a spectral ratio associated with illumination change caused by a shadow, as occurs in the particular image.
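This constancy can be illustrated numerically with the spectral ratio S = D/(B − D) that the patent defines in the following paragraph. A minimal Python sketch with synthetic sensor readings (the function name and values are illustrative assumptions):

```python
def spectral_ratio(dark, bright):
    """Per-band characteristic spectral ratio S = D / (B - D) for a
    matched shadow/lit pair of sensor readings of one material."""
    return tuple(d / (b - d) for d, b in zip(dark, bright))

# Two different materials under a common illumination flux: their
# per-band ratios come out equal, regardless of material color.
s1 = spectral_ratio(dark=(40.0, 30.0, 20.0), bright=(120.0, 90.0, 60.0))
s2 = spectral_ratio(dark=(80.0, 50.0, 10.0), bright=(240.0, 150.0, 30.0))
print(s1)  # → (0.5, 0.5, 0.5)
print(s2)  # → (0.5, 0.5, 0.5)
```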
  • An exemplary characteristic spectral ratio of an image can be expressed by the equation: S=D/(B−D), wherein D is a senor reading of a color of a material depicted in the image, in shadow (Dark), and B is the sensor reading for the color of that same material when fully lit by the incident illuminant (Bright). According to a feature of the present invention, the constancy of spectral ratios provides a basis for determination of a path radiance constant for an image. For example, consider two different materials depicted in an image stored as an image file 18, M1 and M2. The basic premise, as taught in the co-pending application Ser. No. 11/341,742, is that the spectral ratio (S1) associated with a shadow across M1, will be equal to the spectral ratio (S2) associated with a shadow across M2, or S1=S2. According to the exemplary spectral ratio equation, the spectral ratio for the first material M1, at a wavelength λ, S=D/(B−D), and for the second material M2, S=D/(B−D), wherein, D and D represent the Dark intensity of the materials M1 and M2, respectively, at the wavelength λ, and B and B represent the Bright color of the materials M1 and M2, respectively, at the wavelength λ. For each image file 18 of our example, the equation would be used for each of the wavelengths corresponding to the red, green and blue frequencies of the electromagnetic spectrum.
  • Due to the relationship of S=S, D1 80 /(B−D)=D/(B−D). The relationship of equality between spectral ratios of two different materials is true for the actual radiance of the surface. The surface scene as captured by a satellite sensor is altered by the imposition of path radiance, as discussed above. Each pixel of the image file 18 will reflect the additional radiance by a term kλ in each wave band λ. That is, each measured color in each pixel of the image is the sum of the actual radiance of that point in the scene and the path radiance, kλ. Accordingly, an equation can be derived, based upon two materials, to account for the effect of path radiance and establish the relationship of equality for the spectral ratios of the underlying image:
    (D1λ−kλ)/((B1λ−kλ)−(D1λ−kλ))=(D2λ−kλ)/((B2λ−kλ)−(D2λ−kλ)).
  • Solving the above two-material relationship for the path radiance constant,
    k λ=(D1λB2λ−D2λB1λ)/(B2λ−B1λ+D1λ−D2λ).
  • In our example, a constant kλ is determined for each of the wavelengths, λ, corresponding to the red, green and blue frequencies of the electromagnetic spectrum. Upon determination of a path radiance constant for each wavelength band of the image, according to the above-described feature of the present invention, the CPU 12 can operate to simply subtract the constant values from the respective bands of each pixel of the image, to thereby remove the effects of path radiance.
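The two-material relationship above, and the per-band subtraction it enables, can be sketched as follows. This is an illustration only; the sensor readings are hypothetical values chosen so the true spectral ratios agree, not figures from the specification:

```python
def path_radiance_constant(d1, b1, d2, b2):
    # Two-material estimate for one wave band:
    # k = (D1*B2 - D2*B1) / (B2 - B1 + D1 - D2)
    denom = b2 - b1 + d1 - d2
    if denom == 0:
        raise ValueError("degenerate material pair")
    return (d1 * b2 - d2 * b1) / denom

# Hypothetical Dark (shadow) and Bright (lit) readings in one band,
# for two materials whose true spectral ratio is 0.25, with k = 30 added.
d1, b1 = 50.0, 130.0
d2, b2 = 70.0, 230.0
k = path_radiance_constant(d1, b1, d2, b2)

# After subtracting k, the spectral ratios S = D/(B - D) agree again.
s1 = (d1 - k) / ((b1 - k) - (d1 - k))
s2 = (d2 - k) / ((b2 - k) - (d2 - k))
```

In an actual image the same computation would be repeated once per wave band, yielding one constant for each of the red, green and blue bands.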
  • Referring now to FIG. 3, there is shown a flow chart for performing a path radiance correction for satellite imagery, according to a feature of the present invention. The routine of FIG. 3 can be applied to any image captured from a significant distance, and therefore subject to the effects of path radiance. These can include, for example, aerial photography and outdoor photography of distant scenes, such as a distant mountain range. Indeed, in any environment having a reasonably homogeneous distribution of reflective particulate matter between the sensor and the scene, an image file 18 corresponding to the scene can exhibit the effects of path radiance that can potentially be corrected according to the routine of FIG. 3. In step 100, an image taken from a significant distance, for example, an image file 18, is accessed by the CPU 12. In step 102, a user selects matched pairs of lit and shadowed portions (pixels or sections) of the image on multiple different materials depicted in the image, one matched pair per material. What is visible to the eye of a user, upon display on the monitor 20 of a stored image file 18 accessed by the CPU 12, are the pixel color values caused by the interaction between the specular and body reflection properties of material objects depicted in a scene in the subject image file 18 and the illumination flux present at the time the image was captured by a sensor at a significant distance, as for example, a sensor on board the satellite 14 a.
  • A user can select regions of a shadow across a single material because human eye physiology is capable of distinguishing between shadows and actual physical objects. Step 102 can be implemented by an interactive clicking by the user on the monitor 20 operating to display the subject image file 18. In a preferred embodiment of the present invention, several pairs (more than two) of lit/shadow pixels for n different materials are selected by a user to improve the accuracy of the path radiance correction. For example, as taught in the co-pending application Ser. No. 11/341,742, the accuracy and correctness of the characteristic ratio for an image is improved by determining spectral ratio information for illumination boundaries on a local level, that is, a characteristic spectral ratio is determined for each of several preselected local areas of a scene depicted in an image. The determination of locally relevant spectral ratios accommodates complexities that may be encountered in a real world image, for example, the interplay of several different sources of light, inter-reflections, and so on. Thus, if a user provides selected pairs of pixels across shadows for more than two different materials, the CPU 12 will be able to calculate a more robust estimate for the path radiance correction term, kλ. In the alternative, step 102 can be implemented automatically via an automatic technique for determining illumination flux in an image, as taught in the co-pending application Ser. No. 11/341,742.
  • In step 104, the CPU 12 calculates a path radiance correction term in a manner that minimizes the differences between characteristic spectral ratios of the entire image. In general, with more than two bright/dark pixel or image section pairs selected by a user, there is no single value for kλ that yields a same value for S across all of the selected materials of the image. According to a feature of the present invention, the CPU 12 determines an optimized value for kλ that accommodates spectral ratios that are as similar as possible.
  • Initially, in a straightforward implementation of the present invention, the CPU 12 calculates a kλ value for each two material set (M1, M2) of lit/shadow pairs selected by the user, in step 102, using the two material formula developed above, for each two material set: kλ=(D1λB2λ−D2λB1λ)/(B2λ−B1λ+D1λ−D2λ). This provides a list of possible values for kλ. Additionally, for each possible kλ, the user can select a confidence weight, w, for each selected material of the corresponding material pair. The confidence weight reflects the user's confidence that a selected shadow is indeed across a single material, and can be set as a value selected from a scale of, for example, 0 to 1. Each two material set can be designated by reference numerals i, j, wherein i represents material M1, and j represents material M2 of each selected pair. Thus, each material will have an associated weight, wi and wj for materials Mi and Mj, respectively. Moreover, each path radiance correction term from the list of possible values can be designated as kλij.
  • According to a feature of the present invention, an overall value for kλ can be determined as a mean or median, or through use of a standard mean shift procedure. For determination of a mean, kλ=(Σij combine(wi, wj)*kλij)/(Σij combine(wi, wj)).
    A combine (wi, wj) function is used because each kλij is based upon two different materials, each having a weight assigned by the user. The combine function can be determined in terms of a minimum, multiply or average. In a minimum determination, combine (wi, wj)=min(wi, wj). In a multiply determination, combine (wi, wj)=wi*wj. Finally, in an average determination, combine (wi, wj)=(wi+wj)/2.
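The weighted mean and the three combine variants described above can be sketched in Python; the pairwise correction values and confidence weights below are hypothetical:

```python
def combine(wi, wj, mode="min"):
    # Merge the two per-material confidence weights of a pair.
    if mode == "min":
        return min(wi, wj)
    if mode == "multiply":
        return wi * wj
    if mode == "average":
        return (wi + wj) / 2.0
    raise ValueError("unknown combine mode: %s" % mode)

def weighted_mean_k(pairs, mode="min"):
    # pairs: list of (k_ij, w_i, w_j), one entry per two-material set.
    num = sum(combine(wi, wj, mode) * k_ij for k_ij, wi, wj in pairs)
    den = sum(combine(wi, wj, mode) for _, wi, wj in pairs)
    return num / den

# Hypothetical pairwise estimates with per-material confidence weights.
pairs = [(28.0, 1.0, 0.5), (32.0, 0.8, 0.8)]
k = weighted_mean_k(pairs, mode="min")
```

The choice among min, multiply and average only changes how strongly a low-confidence selection discounts its pair; all three reduce to the unweighted mean when every weight is 1.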
  • Pursuant to a further feature of the present invention, the CPU 12 can be operated to define kλ more rigorously than provided by the above described simple mean, median or mean shift approaches on the pairwise-calculated kλij. A more rigorous approach can be based upon a rigorous definition of dissimilarity so as to find a kλ value that accommodates spectral ratios that are as similar as possible.
  • In a first approach to a definition of dissimilarity, a sum-squared difference is utilized. In a sum-squared difference, the sum of the squared differences between spectral ratios of the image should be as small as possible. Thus, a selection for kλ is such as to minimize the following summation: Σ(i=1,n) Σ(j=i+1,n) combine(wi, wj)*((Diλ−kλ)/((Biλ−kλ)−(Diλ−kλ))−(Djλ−kλ)/((Bjλ−kλ)−(Djλ−kλ)))²,
    for n values of different materials.
  • In a second approach, a sum absolute difference is utilized. Here, the sum of the absolute differences between spectral ratios should be as small as possible. The selection of kλ is set to minimize: Σ(i=1,n) Σ(j=i+1,n) combine(wi, wj)*|(Diλ−kλ)/((Biλ−kλ)−(Diλ−kλ))−(Djλ−kλ)/((Bjλ−kλ)−(Djλ−kλ))|.
  • A third approach to the definition of dissimilarity involves a minimization of the maximum absolute distance between any two spectral ratios. The selection of kλ is set to minimize: max i≠j (combine(wi, wj)*|(Diλ−kλ)/((Biλ−kλ)−(Diλ−kλ))−(Djλ−kλ)/((Bjλ−kλ)−(Djλ−kλ))|).
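The three dissimilarity definitions can be collected into one sketch. The min combine is assumed for the pair weights, and the (Dark, Bright) samples in the usage line are hypothetical readings in a single band:

```python
def spectral_ratios(k, samples):
    # S_i(k) = (D_i - k)/((B_i - k) - (D_i - k)) = (D_i - k)/(B_i - D_i)
    return [(d - k) / (b - d) for d, b in samples]

def dissimilarity(k, samples, weights, mode="sum_squared"):
    # Pairwise dissimilarity over all material pairs i < j,
    # each pair weighted by combine(w_i, w_j) = min(w_i, w_j).
    s = spectral_ratios(k, samples)
    n = len(s)
    terms = [(min(weights[i], weights[j]), s[i] - s[j])
             for i in range(n) for j in range(i + 1, n)]
    if mode == "sum_squared":
        return sum(w * diff ** 2 for w, diff in terms)
    if mode == "sum_absolute":
        return sum(w * abs(diff) for w, diff in terms)
    if mode == "max_absolute":
        return max(w * abs(diff) for w, diff in terms)
    raise ValueError("unknown mode: %s" % mode)

samples = [(50.0, 130.0), (70.0, 230.0), (40.0, 80.0)]  # hypothetical (D, B)
v = dissimilarity(30.0, samples, [1.0, 1.0, 1.0])       # vanishes at the true k
```

At the true path radiance constant all three measures vanish; away from it they grow, which is what the minimization described below exploits.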
  • Given one of the definitions of dissimilarity, the CPU 12 can proceed to execute a method to determine a kλ for the image, as a function of the definition of dissimilarity. In a closed-form solution, there is a value for kλ that minimizes the dissimilarity between spectral ratios. For example, a closed-form solution can be used for the sum-squared difference definition, wherein the CPU 12 can set V=Σ(i=1,n) Σ(j=i+1,n) combine(wi, wj)*((Diλ−kλ)/((Biλ−kλ)−(Diλ−kλ))−(Djλ−kλ)/((Bjλ−kλ)−(Djλ−kλ)))², and then differentiate V with respect to kλ. Because each denominator (Biλ−kλ)−(Diλ−kλ) reduces to Biλ−Diλ, which is independent of kλ, each spectral ratio is linear in kλ, so for any value of n, V is quadratic and dV/dkλ=0 has a unique solution.
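A sketch of the closed-form minimizer, assuming the min combine and hypothetical readings: writing each ratio as (Diλ−kλ)/(Biλ−Diλ) = a_i − kλ·b_i with a_i = Diλ/(Biλ−Diλ) and b_i = 1/(Biλ−Diλ), V is a sum of squared linear terms, and setting dV/dkλ = 0 gives kλ = Σ w·Δa·Δb / Σ w·Δb²:

```python
def closed_form_k(samples, weights):
    # samples: (Dark, Bright) per material in one band; weights: confidences.
    # Each ratio is linear in k: (D - k)/(B - D) = a - k*b, so
    # V(k) = sum_{i<j} w_ij * ((a_i - a_j) - k*(b_i - b_j))^2 is quadratic.
    num = den = 0.0
    n = len(samples)
    for i in range(n):
        for j in range(i + 1, n):
            di, bi = samples[i]
            dj, bj = samples[j]
            w = min(weights[i], weights[j])          # "min" combine
            da = di / (bi - di) - dj / (bj - dj)     # a_i - a_j
            db = 1.0 / (bi - di) - 1.0 / (bj - dj)   # b_i - b_j
            num += w * da * db
            den += w * db * db
    return num / den

# Hypothetical readings generated with a common true constant of 30.
samples = [(50.0, 130.0), (70.0, 230.0), (40.0, 80.0)]
k = closed_form_k(samples, [1.0, 1.0, 1.0])
```

With noisy real data the pairwise estimates would disagree, and this solution returns the confidence-weighted least-squares compromise among them.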
  • Alternatively, a search technique can be implemented so that the CPU 12 can search for a value of kλ that minimizes the dissimilarity. For example, in a linear search, a range of values for kλ is tested to determine a value with a minimum dissimilarity, as expressed by one of the definitions of dissimilarity described above. The search is bounded by minimum and maximum values for kλ, and conducted through a range of m intervals between the minimum and maximum values of kλ. A dissimilarity for the spectral ratios is calculated for each value of kλ in the range, and the value for kλ with the minimum dissimilarity is selected. The search can then be repeated over a narrower range above and below the kλ selected in the previous iteration. The number of iterations can be fixed at a predetermined number, or the search can continue until a predetermined level of accuracy is reached.
  • Initial values for the maximum and minimum bounds for kλ can correspond to the maximum and minimum values for kλ in the list of possible values calculated by the CPU 12, as described above. Or, in the alternative, the maximum value for kλ can be set equal to the minimum measured image value for the current wavelength λ, anywhere in the image and the minimum value for kλ set at 0.
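An iterative linear search of this kind might look like the following sketch, using the sum-squared dissimilarity with unit weights; the interval count m, the iteration count, and the sample readings are all hypothetical choices:

```python
def sum_sq_dissimilarity(k, samples):
    # Sum of squared differences between spectral ratios at candidate k;
    # note (B - k) - (D - k) = B - D, so the denominator is k-free.
    ratios = [(d - k) / (b - d) for d, b in samples]
    return sum((ri - rj) ** 2
               for i, ri in enumerate(ratios)
               for rj in ratios[i + 1:])

def search_k(samples, k_min=0.0, k_max=None, m=50, iters=4):
    # Coarse-to-fine linear search: test m+1 evenly spaced candidates,
    # keep the best, then repeat over a narrowed bracket around it.
    if k_max is None:
        # k cannot exceed the smallest measured value in this band.
        k_max = min(d for d, _ in samples)
    best = k_min
    for _ in range(iters):
        step = (k_max - k_min) / m
        candidates = [k_min + step * t for t in range(m + 1)]
        best = min(candidates, key=lambda k: sum_sq_dissimilarity(k, samples))
        k_min = max(k_min, best - step)
        k_max = min(k_max, best + step)
    return best

# Hypothetical readings generated with a common true constant of 30.
samples = [(50.0, 130.0), (70.0, 230.0), (40.0, 80.0)]
k = search_k(samples)
```

Each refinement narrows the bracket to twice the previous step, so accuracy improves by roughly a factor of m/2 per iteration.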
  • In step 106, the CPU 12 utilizes the calculated kλ for each wavelength, from step 104, to correct each pixel of the image file 18. In a standard image, there will be a correction term for each of the Red, Green and Blue bands of each pixel. A multi-spectral image may have more than three bands. The path radiance is removed from each pixel by subtracting kλ from each wave band in each pixel: ∀i, λ: Piλ ← Piλ−kλ, where Piλ is the value of pixel i in the wave band λ. If a correction causes Piλ to be less than 0, then the pixel value is set to 0.
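The per-pixel subtraction and zero clamp of step 106 can be sketched with NumPy; the tiny image and correction terms below are hypothetical:

```python
import numpy as np

def remove_path_radiance(image, k):
    # image: H x W x bands array; k: one correction term per band.
    # Subtract k from every wave band of every pixel, clamping at zero.
    corrected = image.astype(float) - np.asarray(k, dtype=float)
    return np.clip(corrected, 0.0, None)

# A 1 x 2 pixel, 2-band image with hypothetical per-band corrections.
img = np.array([[[50.0, 10.0], [100.0, 40.0]]])
out = remove_path_radiance(img, [30.0, 20.0])
```

Broadcasting applies each band's constant across the whole image in one operation, and the clamp implements the rule that a negative corrected value is set to 0.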
  • In step 108, the CPU 12 outputs the image with path radiance removed.
  • In the preceding specification, the invention has been described with reference to specific exemplary embodiments and examples thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative manner rather than a restrictive sense.

Claims (19)

1. An automated, computerized method for manipulating an image comprising the steps of:
selecting matched shadow/lit pairs of image portions from each of separate, different materials depicted in the image;
utilizing the selected matched pairs to determine spectral ratio information for the image;
calculating a path radiance correction term as a function of the spectral ratio information; and
utilizing the path radiance correction term to manipulate the image, to remove path radiance.
2. The method of claim 1, wherein the step of selecting matched shadow/lit pairs of image portions from separate, different materials depicted in the image is carried out by a manual input by a user.
3. The method of claim 1, wherein the step of utilizing the selected matched pairs to determine spectral ratio information for the image is carried out by utilizing a relationship of equality for spectral ratios of the image.
4. The method of claim 3, wherein the step of calculating a path radiance correction term as a function of the spectral ratio information is carried out by executing a formula for a relationship of equality of spectral ratios for two different materials, expressed as: kλ=(D1λB2λ−D2λB1λ)/(B2λ−B1λ+D1λ−D2λ).
5. An automated, computerized method for manipulating an image comprising the steps of:
utilizing a relationship of equality for spectral information of the image to calculate a path radiance correction term; and
utilizing the path radiance correction term to manipulate the image, to remove path radiance.
6. The method of claim 5 wherein the step of utilizing a relationship of equality for spectral information of the image to calculate a path radiance correction term is carried out by calculating a set of possible correction values for the image, and determining an optimized value for the correction term from among the set of possible correction values.
7. The method of claim 6 wherein the step of determining an optimized value for the correction term from among the set of possible correction values is carried out by calculating a mean value from the set of possible correction values.
8. The method of claim 6 wherein the step of determining an optimized value for the correction term from among the set of possible correction values is carried out by calculating a median value from the set of possible correction values.
9. The method of claim 6 wherein the step of determining an optimized value for the correction term from among the set of possible correction values is carried out by executing a mean shift procedure on the set of possible correction values.
10. The method of claim 6 wherein the step of calculating a set of possible correction values for the image is carried out by executing a formula for a relationship of equality of spectral ratios for each of several sets of two different materials, expressed as: kλ=(D1λB2λ−D2λB1λ)/(B2λ−B1λ+D1λ−D2λ), for each two material set.
11. The method of claim 10 comprising the further step of assigning a confidence weight to each material of each two material set.
12. The method of claim 11 wherein the step of determining an optimized value for the correction term from among the set of possible correction values is carried out as a function of the confidence weight of each material.
13. The method of claim 5 wherein the step of utilizing a relationship of equality for spectral information of the image to calculate a path radiance correction term is carried out by selecting a value for the path radiance correction term to minimize a dissimilarity among spectral ratios related to the image.
14. The method of claim 13 wherein the dissimilarity is measured as a sum-squared difference.
15. The method of claim 13 wherein the dissimilarity is measured as a sum absolute difference.
16. The method of claim 13 wherein the dissimilarity is measured as a maximum absolute difference.
17. The method of claim 14 wherein the step of selecting a value for the path radiance correction term to minimize a dissimilarity among spectral ratios related to the image is carried out by executing a closed-form solution.
18. The method of claim 13 wherein the step of selecting a value for the path radiance correction term to minimize a dissimilarity among spectral ratios related to the image is carried out by executing a search procedure over a range of possible values for the path radiance correction term, between predetermined maximum and minimum values of the range.
19. A computer system which comprises:
a CPU; and
a memory storing an image file;
the CPU arranged and configured to execute a routine to utilize a relationship of equality for spectral information of the image to calculate a path radiance correction term; and utilize the path radiance correction term to manipulate the image, to remove path radiance.
US11/431,755 2006-05-10 2006-05-10 Method and system for removing path radiance effects from an image Abandoned US20070263941A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US11/431,755 US20070263941A1 (en) 2006-05-10 2006-05-10 Method and system for removing path radiance effects from an image
PCT/US2007/011249 WO2007133607A2 (en) 2006-05-10 2007-05-09 A method and system for improved detection of material reflectances in an image
US11/801,384 US7672537B2 (en) 2006-05-10 2007-05-09 Method and system for improved detection of material reflectances in an image


Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/801,384 Continuation-In-Part US7672537B2 (en) 2006-05-10 2007-05-09 Method and system for improved detection of material reflectances in an image

Publications (1)

Publication Number Publication Date
US20070263941A1 true US20070263941A1 (en) 2007-11-15

Family

ID=38685214

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/431,755 Abandoned US20070263941A1 (en) 2006-05-10 2006-05-10 Method and system for removing path radiance effects from an image
US11/801,384 Active 2027-06-18 US7672537B2 (en) 2006-05-10 2007-05-09 Method and system for improved detection of material reflectances in an image

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/801,384 Active 2027-06-18 US7672537B2 (en) 2006-05-10 2007-05-09 Method and system for improved detection of material reflectances in an image

Country Status (2)

Country Link
US (2) US20070263941A1 (en)
WO (1) WO2007133607A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2478467A4 (en) * 2009-09-15 2017-07-05 Tandent Vision Science, Inc. Method and system for processing an image received from a remote source

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8339017B2 (en) * 2005-08-29 2012-12-25 Kyocera Corporation Multi-layer piezoelectric element and injection apparatus using the same
US8577135B2 (en) * 2009-11-17 2013-11-05 Tandent Vision Science, Inc. System and method for detection of specularity in an image
US8577150B2 (en) * 2011-03-18 2013-11-05 Tandent Vision Science, Inc. System and method for removing specularity from an image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5347475A (en) * 1991-09-20 1994-09-13 Amoco Corporation Method for transferring spectral information among spectrometers
US5353053A (en) * 1992-02-12 1994-10-04 Nec Corporation Method of correcting a measured image formed by a television camera
US5604534A (en) * 1995-05-24 1997-02-18 Omni Solutions International, Ltd. Direct digital airborne panoramic camera system and method
US5936731A (en) * 1991-02-22 1999-08-10 Applied Spectral Imaging Ltd. Method for simultaneous detection of multiple fluorophores for in situ hybridization and chromosome painting
US6219159B1 (en) * 1998-03-09 2001-04-17 Hewlett Packard Company Spectrally balanced scanner
US6904120B2 (en) * 2003-07-01 2005-06-07 General Electric Company Method and apparatus for correcting bone induced spectral artifacts

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5774521A (en) * 1996-07-08 1998-06-30 Cedars-Sinai Medical Center Regularization technique for densitometric correction
US6074339A (en) * 1998-05-07 2000-06-13 Medtronic Ave, Inc. Expandable braid device and method for radiation treatment
US6504899B2 (en) * 2000-09-25 2003-01-07 The Board Of Trustees Of The Leland Stanford Junior University Method for selecting beam orientations in intensity modulated radiation therapy
US6543936B2 (en) * 2001-04-24 2003-04-08 Daniel Uzbelger Feldman Apparatus for diagnosis and/or treatment in the field of dentistry using fluoroscopic and conventional radiography



Also Published As

Publication number Publication date
US7672537B2 (en) 2010-03-02
WO2007133607A3 (en) 2008-10-02
WO2007133607A2 (en) 2007-11-22
US20070263944A1 (en) 2007-11-15


Legal Events

Date Code Title Description
AS Assignment

Owner name: TANDENT VISION SCIENCE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SMITH, CASEY ARTHUR;FRIEDHOFF, RICHARD MARK;REEL/FRAME:018058/0601;SIGNING DATES FROM 20060625 TO 20060628

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION

AS Assignment

Owner name: TANDENT COMPUTER VISION LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TANDENT VISION SCIENCE, INC.;REEL/FRAME:049080/0636

Effective date: 20190501