US7787709B2 - Method and apparatus for illumination compensation of digital images - Google Patents


Info

Publication number
US7787709B2
US7787709B2 US11/541,711 US54171106A
Authority
US
United States
Prior art keywords
image
pixel
virtual
pixel intensities
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US11/541,711
Other versions
US20070025633A1 (en)
Inventor
Narayan Srinivasa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HRL Laboratories LLC
Original Assignee
HRL Laboratories LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HRL Laboratories LLC
Priority to US11/541,711
Assigned to HRL LABORATORIES, LLC; assignor: SRINIVASA, NARAYAN
Publication of US20070025633A1
Application granted
Publication of US7787709B2
Anticipated expiration
Legal status: Expired - Fee Related (current)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/40 — Image enhancement or restoration by the use of histogram techniques
    • G06T5/94
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20004 — Adaptive image processing
    • G06T2207/20012 — Locally adaptive
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30248 — Vehicle exterior or interior
    • G06T2207/30252 — Vehicle exterior; Vicinity of vehicle

Definitions

  • the present invention relates to illumination compensation in digital images. More particularly, the present invention relates to balancing the dynamic range of a scene within a digital image with contrast enhancement of the scene.
  • Image enhancement refers to the science of improving the quality of an image based on some absolute measure. While the problems with image quality are numerous, such as focus problems, depth-of-field issues, motion blur, the dynamic range of the scene, and other sources of noise, the main focus in the digital image processing community has been to improve or enhance fine structure contrasts. This is primarily referred to as contrast enhancement. Contrast enhancement is useful when image contrasts are imperceptible or barely perceptible. Low contrast can be caused by scenes that are hazy or have otherwise poor illumination conditions. This problem is further exacerbated by the fact that cameras with low dynamic range capture the scene with a reduction in dynamic range from the true range present in the scene, and this comes with the penalty of further loss of contrast. However, there is experimental evidence suggesting that human perception (subjective quality) is best when the image's contrast and dynamic range are high. This is the primary motivation for developing a new method for image enhancement.
  • Contrast enhancement techniques for images are broadly classified into two classes.
  • The first class modifies the intensity histogram of images for contrast enhancement.
  • A special case is the histogram equalization method. See, for example, Anil K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, 1989, pp. 241-244. Histogram equalization applied to an entire image has the disadvantage of attenuating or even removing contrast information of small magnitudes in the scarcely populated histogram regions. This is because neighboring pixels in such regions are mapped to the same output histogram value.
  • Adaptive histogram equalization applies histogram modification based on local statistics of an image rather than on a global scale. See, for example, R. A. Hummel, “Image Enhancement By Histogram Transformation,” Computer Graphics and Image Processing, vol. 6, 1977, pp. 184-195; R. B. Paranjape, W. M. Morrow and R. M. Rangayyan, “Adaptive-Neighborhood Histogram Equalization For Image Enhancement,” Computer Vision Graphics and Image Processing, vol. 54, no. 3, 1992, pp. 259-267; and D. Mukherjee and B.
  • Contrast enhancement methods based on adaptive histogram equalization are generally computationally slow. Since such methods operate on purely local statistics, they sometimes create artifacts that make it hard to distinguish real objects from clutter. There are other variations of adaptive histogram techniques that address the trade-off between computational speed and accuracy of enhancement. See, for example, S. M. Pizer, et al., “Adaptive Histogram Equalization And Its Variations,” Computer Vision Graphics and Image Processing, vol. 39, 1987, pp. 355-368.
  • A second class of contrast enhancement techniques operates directly on the image by applying the principle of separating high and low frequency content in the image. In these cases the image histogram is not adjusted. The frequency contents are instead manipulated and filtered in such a way as to enhance the image contrast.
  • Two examples of this approach are the homomorphic filtering method and the unsharp masking approach. The homomorphic filtering method is described in more detail by A. V. Oppenheim, R. W. Schafer and T. G. Stockham Jr., in “Nonlinear Filtering Of Multiplied And Convolved Signals,” Proc. of IEEE , vol. 56, no. 8, 1968, pp. 1264-1291.
  • Prior art methods for contrast enhancement are typically deficient either in the quality of image enhancement due to inherent problems with the method or are computationally slow and require image-specific adjustments of several system parameters.
  • Prior art methods based on modification of the intensity histogram of images have the disadvantage of attenuating or even removing contrast information of small magnitudes in the scarcely populated histogram regions. This result occurs because neighboring pixels in such regions are mapped to the same output histogram value.
  • Methods based on adaptive histogram modification attempt to address this problem, but these methods are computationally slow.
  • Other prior art methods operate directly on the image by applying the principle of separating high and low frequency content in the image. In these methods, the image histogram is typically not adjusted. The frequency contents are instead manipulated and filtered in such a way as to enhance the image contrast. These methods are also generally computationally slow and require image-specific adjustments to several parameters for improved results. Further, while the contrast enhancement is improved in most cases, the dynamic range of the image is not.
  • Embodiments of the present invention provide a method and apparatus for balancing the dynamic range and contrast content in an image such that the overall quality of the image is enhanced by adjusting only a single user-defined parameter.
  • Embodiments of the present invention provide contrast enhancement without requiring image-specific adjustments to several parameters for improved results.
  • Embodiments of the present invention provide the desired illumination compensation in a computationally efficient manner.
  • An embodiment of the present invention provides a method for enhancing the quality of a digital image by using a single user-defined parameter.
  • A virtual image is created based on the single user-defined parameter and the original digital image.
  • An adaptive contrast enhancement algorithm operates on a logarithmically compressed version of the virtual image to produce adaptive contrast values for each pixel in the virtual image.
  • A dynamic range adjustment algorithm is used to generate logarithmic enhanced pixels based on the adaptive contrast values and the pixels of the logarithmically compressed version of the virtual image.
  • The logarithmic enhanced pixels are exponentially expanded and scaled to produce a compensated digital image.
  • Embodiments of the present invention provide image enhancement that can concurrently balance the dynamic range in a scene with the contrast in the scene.
  • A virtual image of the scene is generated to facilitate dynamic range compensation.
  • The virtual image is generated to simulate the dynamic range that would be captured from a scene as if the scene were viewed by a virtual camera of wider dynamic range.
  • An adaptive contrast enhancement algorithm operates on the local image statistics measured at each pixel. Using the virtual image in combination with the adaptive contrast enhancement algorithm, embodiments of the present invention are able to compensate for illumination for a wide range of illumination conditions.
  • The embodiments are computationally efficient, and the quality of the output produced by the various embodiments may be controlled using a single user-defined parameter.
  • An embodiment of the present invention comprises a method for image enhancement having the steps of: receiving a digital image of a scene, the digital image comprising a plurality of digital image pixels; generating a virtual image of at least a portion of the digital image pixels, the virtual image having a plurality of virtual pixels, each virtual pixel having a pixel intensity; logarithmically converting the pixel intensities of at least a portion of the virtual pixels to produce a plurality of log virtual pixels; performing adaptive contrast enhancement on at least a portion of the log virtual pixels to produce a plurality of adaptive contrast values corresponding to said at least a portion of log virtual pixels; performing dynamic range adjustment using said at least a portion of the log virtual pixels and said plurality of adaptive contrast values to produce a plurality of log enhanced pixels; and exponentially converting at least a portion of the log enhanced pixels to produce a plurality of enhanced pixels, the plurality of enhanced pixels comprising an enhanced digital image.
  • Another embodiment of the present invention comprises an apparatus for producing an enhanced image from a digital image having a plurality of digital image pixels, the apparatus comprising: means for expanding a dynamic range of at least a portion of the digital image pixels to produce a virtual image comprising a plurality of virtual pixels; means for calculating a corresponding adaptive contrast value for a corresponding virtual pixel of the plurality of virtual pixels, said means for calculating receiving said plurality of virtual pixels and producing a plurality of adaptive contrast values; means for adjusting an intensity of a corresponding virtual pixel of said plurality of virtual pixels, said means for adjusting receiving said plurality of adaptive contrast values and said plurality of virtual pixels and producing a plurality of enhanced pixels; and, means for scaling said plurality of enhanced pixels to produce said enhanced image.
  • Still another embodiment of the present invention comprises a method for enhancing the contrast of a digital image wherein said digital image comprises an array of digital pixels, each digital pixel having a dynamic range, said method having the steps of: specifying a virtual camera parameter; generating an array of virtual pixels from said array of digital pixels, each virtual pixel having an expanded dynamic range based on said virtual camera parameter; logarithmically converting said array of virtual pixels to an array of log virtual pixels; generating an array of adaptive contrast values corresponding to said array of log virtual pixels; generating an array of log enhanced pixels from said log virtual pixels and from said array of adaptive contrast values; exponentially converting said array of log enhanced pixels to an array of enhanced pixels, each enhanced pixel having an intensity; and scaling the intensity of each enhanced pixel to be within a specified range for an image output device.
  • Still another embodiment of the present invention provides a method of creating a virtual image of a digital image, the virtual image having a higher dynamic range than the digital image, the digital image and the virtual image comprising a plurality of pixels, and the method comprising the steps of: specifying a virtual camera parameter; determining a pixel intensity for each pixel in the digital image; and calculating a pixel intensity for each pixel in the virtual image based on the virtual camera parameter and the pixel intensity for each pixel in the digital image.
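The claimed steps can be sketched end to end. The following is a minimal sketch assuming grayscale floating-point intensities; the virtual-camera step is a simple power-law placeholder, a whole-image mean stands in for the patent's per-pixel 3×3 neighborhood statistics, and `gamma_d = 0.8` is an assumed weight, so this illustrates the shape of the pipeline rather than the patent's exact math:

```python
import math

def enhance(image, b_bits, x_bits=8, gamma_d=0.8):
    """Skeleton of the claimed pipeline:
    virtual image -> log -> adaptive contrast -> dynamic range -> exp.
    """
    # Step 1: generate a virtual image of higher dynamic range
    # (placeholder power-law expansion, not the patent's formula).
    r = b_bits / x_bits
    virtual = [p ** r for p in image]
    # Step 2: logarithmically convert the virtual pixels.
    z = [math.log(p) for p in virtual]
    # Step 3: adaptive contrast values (global mean stands in for the local mean).
    z_mean = sum(z) / len(z)
    adz = [z_mean - zi for zi in z]
    # Step 4: dynamic range adjustment as a convex sum of the two components.
    z_out = [gamma_d * zi + (1.0 - gamma_d) * a for zi, a in zip(z, adz)]
    # Step 5: exponentially convert back to intensities.
    return [math.exp(v) for v in z_out]

print(enhance([10.0, 50.0, 200.0], b_bits=12))
```

Note that the convex weighting compresses the log-domain spread of the pixels while the virtual-image step expands it; the balance between the two is what the later sections tune.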
  • Embodiments of the present invention are not limited to hardware only, software only, or firmware only implementations. Those skilled in the art will understand that embodiments of the present invention may be implemented using a variety of software, hardware, and/or firmware technologies known in the art.
  • FIG. 1 shows a block diagram for modeling the transformation of a viewed scene to an image as captured by an imaging device.
  • FIG. 2 shows a block diagram of an adaptive contrast enhancement process according to an embodiment of the present invention.
  • FIG. 3 shows a block diagram of an image enhancement system according to an embodiment of the present invention.
  • FIG. 4 depicts the enhancement in overall image quality and contrast provided by an embodiment of the present invention by showing three original images and their contrast and three compensated images and their contrast.
  • FIG. 5 depicts an original image of vehicles in a tunnel and the effect of specifying different virtual camera parameters with an embodiment of the present invention on the resulting compensated images.
  • FIG. 6 depicts an original image of vehicles in a rain scene and the effect of specifying different virtual camera parameters with an embodiment of the present invention on the resulting compensated images.
  • FIG. 7 depicts the effect of compensating only a portion of an image with an embodiment of the present invention.
  • FIG. 8 shows the results obtained for using different compensation techniques known in the art and the results obtained using an embodiment of the present invention on two original images.
  • FIG. 9 depicts a hardware block diagram for a computer system according to an embodiment of the present invention.
  • A typical device used to view and capture a scene is a Charge Coupled Device (CCD) camera array.
  • The variations in the scene's illumination are transformed into a set of luminance values on the CCD camera array.
  • The dynamic range is the ratio of maximum luminance in the scene to the minimum luminance.
  • A typical CCD camera has a dynamic range between 16 and 32.
  • A CCD camera therefore may not capture the true range of illumination variations found in a sunlit scene, for example.
  • The images captured by the CCD camera are instead compressed in their dynamic range.
  • This loss of information due to dynamic range compression leads to loss of contrast information.
  • Contrast is attenuated, wherein barely noticeable contrast becomes unnoticeable. This phenomenon can be observed using a simple mathematical model as described below.
  • I_CCD = K · (I_SCENE)^γ (Eq. 1)
  • where K is the dynamic range constant
  • and γ represents the degree of dynamic range compression (if γ < 1.0) or expansion (if γ > 1.0).
  • For typical CCD cameras, γ is less than 1.0.
  • From Equation 1, for small changes in the scene illumination ΔI_SCENE, the corresponding change in the CCD image ΔI_CCD can be written as ΔI_CCD = K · γ · (I_SCENE)^(γ−1) · ΔI_SCENE, or equivalently ΔI_CCD/I_CCD = γ · (ΔI_SCENE/I_SCENE) (Eq. 2); since γ < 1.0, relative contrast is attenuated.
  • The scene image I_SCENE is first converted by a logarithmic operator 101 into a log image using an intermediate variable z.
  • A multiplier 105 multiplies the log image z by γ.
  • An exponential operator 103 computes the exponential of the γ·z product.
  • A multiplier 107 then multiplies the resultant by the constant K to complete the transformation of I_SCENE into I_CCD.
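The imaging model of FIG. 1 can be sketched directly from Eq. 1; the values of K and γ below are illustrative assumptions, not taken from the patent:

```python
import math

def ccd_model(i_scene, k=1.0, gamma=0.6):
    """Model a CCD camera's dynamic range compression, I_CCD = K * I_SCENE**gamma,
    realized as the FIG. 1 pipeline: logarithm (101), multiply by gamma (105),
    exponential (103), multiply by K (107).  gamma < 1.0 compresses the range.
    """
    z = math.log(i_scene)        # logarithmic operator 101
    scaled = gamma * z           # multiplier 105
    expanded = math.exp(scaled)  # exponential operator 103
    return k * expanded          # multiplier 107

# A 10000:1 scene range shrinks to 10000**0.6 ~ 251:1 under gamma = 0.6.
print(ccd_model(10000.0) / ccd_model(1.0))  # ~251.19
```

This makes the contrast-attenuation claim above concrete: the ratio of any two luminances is raised to the power γ, so small contrasts shrink toward imperceptibility.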
  • Embodiments of the present invention essentially involve the modification of the portion of the block diagram shown in FIG. 1 before the exponential operator 103 to provide for simultaneous adjustment of the dynamic range and the contrast in an image.
  • Embodiments of the present invention comprise three main modules: an adaptive contrast enhancing module, a dynamic range module, and a virtual image generation module.
  • The adaptive contrast enhancing module and the dynamic range module work in tandem to simultaneously enhance the overall image.
  • The virtual image generation module is used since the true image I_SCENE is unknown, and only the acquired image I_CCD, with a limited dynamic range, is available for processing.
  • The virtual image generation module helps to address this problem by creating a virtual image of high dynamic range.
  • Embodiments of the present invention are described below. First, the details of a preferred embodiment of the adaptive contrast enhancement module are presented. Second, a description of a preferred embodiment of the dynamic range module is presented. Third, a preferred embodiment of the virtual image generation module is presented. Fourth, embodiments of a system comprising the three modules are discussed, where the parameters of the system may be adjusted at both a pixel level as well as a global level by defining a single user parameter that enables the automatic setting of all other system parameters. Fifth, software and hardware implementations of the present invention are discussed. Finally, results of image enhancement according to the present invention will be presented.
  • The adaptive contrast enhancement module operates on the local image statistics measured at each pixel (i,j) of the log image z(i,j) as follows.
  • The log image representation has a useful property in that the local difference Δz(i,j) is directly equal to the contrast in the input image I_SCENE at pixel (i,j).
  • The local contrast at any given pixel is measured as the difference between z(i,j) and z_mean(i,j).
  • The mean intensity z_mean(i,j) represents the mean intensity in the log domain within the local neighborhood of the pixel.
  • The local neighborhood is defined using a 3 pixel by 3 pixel window. Other window sizes may be used, but a 3×3 window is preferred, since it provides for computational efficiency and the sharpest contrast.
  • The advantages of the locality property will also generally decrease with larger window sizes.
  • Adaptive contrast enhancement is described by P. M. Narendra and R. C. Fitch in “Real-Time Adaptive Contrast Enhancement,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-3, no. 6, November 1981, pp. 655-661. It is also described by M. A. Cohen and S. Grossberg in “Neural Dynamics Of Brightness Perception: Features, Boundaries, Diffusion And Resonance,” Perception and Psychophysics, vol. 41, 1987, pp. 117-158. Typically, in these previous efforts and others known in the art, the local contrast is scaled by a constant factor and then combined with the scaled image from a dynamic range module. However, such a procedure does not discriminate between small and large contrast regions.
  • The choice of the scaling factor has to balance preventing large contrasts from saturating the image while at the same time preventing small contrast regions from becoming imperceptible.
  • Image-specific tuning of the scaling factor is generally required to avoid saturating the image while still allowing small contrast regions to be seen.
  • To address this, an adaptive scale parameter α(i,j) is generated for each pixel.
  • The value of this parameter is defined as:
  • α(i,j) = [v(i,j) − C]+ / v(i,j) (Eq. 3)
  • where v(i,j) is the variance of intensity at pixel (i,j) and C is a constant.
  • The function [x]+ = max(x,0) represents a rectification of the input x.
  • In Equation 3, the variance v(i,j) is computed as (I(i,j) − M(i,j))², where I(i,j) is the intensity at pixel (i,j) and M(i,j) is the mean intensity of pixels within the local neighborhood centered at pixel (i,j).
  • The constant C must be greater than zero and should be small (C < 0.001). In a preferred embodiment of the present invention, a value of 0.0005 is used for C. Typically, a small constant C ensures that the rectification in the numerator of Equation 3 will be greater than zero and thus keeps the adaptive scale parameter α(i,j) effective.
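Equation 3 transcribes directly; only the function and variable names below are mine:

```python
def adaptive_scale(v, c=0.0005):
    """Adaptive scale parameter alpha(i,j) = [v(i,j) - C]+ / v(i,j)  (Eq. 3).

    v is the local intensity variance at the pixel, [x]+ = max(x, 0),
    and C is small and positive (0.0005 is the patent's preferred value).
    """
    return max(v - c, 0.0) / v

print(adaptive_scale(0.5))     # high variance  -> alpha near 1.0 (0.999)
print(adaptive_scale(0.0006))  # variance near C -> alpha near 0 (~0.167)
```

Pixels whose variance falls at or below C are fully rectified to α = 0, which is what lets low-contrast regions keep their contrast in the later equations.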
  • The adaptive scale parameter α(i,j) operates as follows. When the observed intensity z(i,j) at a given pixel is larger than the mean z_mean(i,j), then the observed intensity at that pixel corresponds to a large contrast region. A large contrast region means that the variance v(i,j) at that pixel is large.
  • Equation 4 above is similar to Equation (1) in Narendra and Fitch, discussed above.
  • The algorithm disclosed by Narendra and Fitch directly operates on the input image, while embodiments of the present invention generally operate on the log value of the image.
  • The weighting in embodiments of the present invention is adaptive, while that described in Narendra and Fitch depends on a user-defined constant. Narendra and Fitch disclose that this constant needs to be set manually on a per-image basis, especially for high dynamic range images, so as to balance out large excursions in intensity. While embodiments of the present invention may rely on the setting of the constant C for the calculation of the adaptive scaling parameter (see Equation 3 above), this constant does not have to be changed on a per-image basis.
  • The adaptive contrast is adjusted as follows. From Equation 3, when the variance in intensity is high at a pixel, the adaptive parameter α(i,j) is close to 1.0. From Equation 4, when the adaptive parameter α(i,j) is close to 1, the adaptive local mean az_mean(i,j) almost fully corresponds to the observed intensity z(i,j) at that pixel. From Equation 5, when the adaptive local mean is nearly equal to the observed intensity, the adaptive contrast aΔz(i,j) at that pixel is close to 0. Therefore, high contrast regions in the input image cause minimal change to the adaptive contrast. Similarly, from Equation 3, if the variance at a pixel is close to 0, the adaptive parameter α(i,j) is close to 0.
  • In that case, Equation 5 provides that the adaptive contrast aΔz(i,j) at that pixel is close to the actual contrast information. Therefore, low contrast regions in the input image will result in minimal loss of adaptive contrast.
  • The adaptive approach described above strikes a balance: it preserves large contrasts without saturating the image while at the same time enhancing small contrasts so as to make them perceptible.
  • FIG. 2 shows a block diagram that implements the process of adaptive contrast computation described above.
  • The scene image I_input is converted by the logarithmic operator 101 into a log image using an intermediate variable z.
  • A local mean operator 211 calculates the local mean intensity LM in the log domain within the local neighborhood of the pixel (i,j). As described above, a preferred local neighborhood is 3×3, but other sizes may be used.
  • The local mean intensity LM is then subtracted from the log image z at a first subtractor 213 to provide a local difference value LD.
  • The adaptive parameter α(i,j) is calculated as discussed above.
  • A multiplier 215 then multiplies the local difference value by the adaptive parameter, and the product is added to the local mean intensity by adder 217 to create an adaptive local mean ALM.
  • A second subtractor 219 subtracts the log image z from the adaptive local mean to create an adaptive local difference ALD, also referred to as “contrast gain.”
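The FIG. 2 signal flow can be followed for a single pixel and its 3×3 neighborhood. This is a sketch: the variance estimate and the sign convention for ALD follow the block-diagram description above literally, and the function name is mine:

```python
import math

C = 0.0005  # small positive constant from Eq. 3

def adaptive_contrast_gain(window):
    """FIG. 2 "contrast gain" (ALD) for the center pixel of a 3x3 window:
      z   = log intensity                   (logarithmic operator 101)
      LM  = local mean of z over the window (local mean operator 211)
      LD  = z - LM                          (first subtractor 213)
      ALM = LM + alpha * LD                 (multiplier 215 + adder 217)
      ALD = ALM - z                         (second subtractor 219)
    """
    zs = [math.log(p) for row in window for p in row]
    z = math.log(window[1][1])   # center pixel in the log domain
    lm = sum(zs) / len(zs)       # local mean LM
    ld = z - lm                  # local difference LD
    v = ld ** 2                  # variance estimate, per the Eq. 3 discussion
    alpha = max(v - C, 0.0) / v if v > 0 else 0.0
    alm = lm + alpha * ld        # adaptive local mean ALM
    return alm - z               # adaptive local difference ALD

# High-contrast center pixel: alpha ~ 1, so ALD ~ 0 (no extra boost).
print(adaptive_contrast_gain([[10, 10, 10], [10, 200, 10], [10, 10, 10]]))
```

For a nearly flat window the variance falls below C, α rectifies to 0, and the gain reduces to the raw local difference, matching the behavior described in the text.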
  • Embodiments of the present invention also preferably comprise a dynamic range adjustment module.
  • The dynamic range of an image can be defined as follows:
  • The local dynamic range for each pixel in the image may be considered as the ratio of the intensity of each pixel to the minimum intensity of the image. That is, the local dynamic range D_i(i,j) at each pixel may be calculated as follows:
  • The dynamic range adjustment module of embodiments of the present invention essentially performs a scaling operation on the log image.
  • The scale factor γ_D is, however, adjusted such that it forms a convex sum (see Equation 10 below) with the scale factor γ_C used for the adaptive contrast enhancement discussed above.
  • The convex sum ensures that the overall effect on the compensated image is balanced as a compromise between the dynamic range component and the contrast component.
  • The value of the constant factor in Equation 10 decides the overall magnitude of the compensation.
  • In a preferred embodiment, the constant factor is 1.0.
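Equation 10's constraint transcribes directly: the two scale factors sum to a constant (1.0 in the preferred embodiment). How the two scaled components combine into the modified log image is inferred from the system description later in the text, so the combination rule below is an assumption, flagged in the comments:

```python
def combine(z, a_dz, gamma_d):
    """Convex combination of the dynamic-range and contrast components.

    Eq. 10 constrains gamma_C + gamma_D to a constant (1.0 here), so
    gamma_C = 1.0 - gamma_D.  Combining them as
    z' = gamma_D * z + gamma_C * a_dz is an assumed reading of how the
    dynamic range module produces the modified log image.
    """
    gamma_c = 1.0 - gamma_d  # convex sum, Eq. 10 with constant factor 1.0
    return gamma_d * z + gamma_c * a_dz

# Raising gamma_D weights the dynamic-range term over the contrast term.
print(combine(z=2.0, a_dz=0.5, gamma_d=0.8))  # 0.8*2.0 + 0.2*0.5 = 1.7
```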
  • Preferred embodiments of the present invention also comprise a virtual image generation module.
  • A virtual image generation module provides improved performance for the following reasons. In the adaptive contrast enhancement modules and dynamic range adjustment modules, it is assumed that the true image I_SCENE with its inherent high dynamic range is known. However, I_SCENE is unknown. Instead, what is available is an image I_input that has already been captured by an imaging device, such as a CCD camera. The captured image I_input will necessarily have a lower dynamic range than the true image I_SCENE. The virtual image generation module helps to address this problem by creating a virtual image of high dynamic range.
  • A parameter is used that defines the “desired” dynamic range for the virtual camera image.
  • This parameter may be a user-defined parameter.
  • The virtual camera parameter B represents the bits/pixel for the virtual camera. The higher the bits/pixel, the greater the dynamic range of the virtual camera image. Once the virtual camera parameter B is defined, the maximum intensity that the virtual camera can register can be computed as 2^B.
  • The dynamic range D_i(i,j) for the pixels within the actual camera image can be calculated based upon the minimum intensity z_min within the image.
  • Alternatively, the dynamic range D_i(i,j) can be calculated based upon the maximum possible intensity of the actual camera image. That is, if the actual camera image is acquired with a camera having an intensity resolution of X bits/pixel, the maximum possible intensity for a pixel is 2^X.
  • The dynamic range for any given pixel in the input image may then be calculated as follows:
  • The minimal intensity for the virtual camera image will occur at the same pixel that has the minimal intensity in the input image.
  • The ratio of the dynamic range for the virtual image D_v(i,j) to the dynamic range for the input image D_i(i,j) is 2^(B−X), where B and X are as defined above.
  • A transformation model from the dynamic range for the input image to the dynamic range of the virtual image is defined as follows:
  • The dynamic range of the virtual image D_v(i,j) may also be defined in the same manner as the dynamic range for the input image D_i(i,j) shown by Equation 11. That is,
  • I virtual ⁇ ( i , j ) 2 ( B - XRD ) ⁇ I input ⁇ ( i , j ) RD ⁇ ⁇ or , alternatively , Eq . ⁇ ( 14 )
  • I virtual ⁇ ( i , j ) 2 ( B - X ⁇ ( B - X ) ⁇ X B ) ⁇ I input ⁇ ( i , j ) X B ⁇ ( B - X ) Eq . ⁇ ( 15 )
  • The computation of the intensities of the pixels of the virtual image makes use of the single virtual camera parameter B, which, as indicated above, may be user defined, and the intensity resolution X of the input image.
  • Although I_virtual(i,j) cannot be visualized as a standard image (because of its higher bits/pixel), it provides an image with a higher dynamic range compared to I_input (the actual camera image) described in the previous sections, since, as described above, the dynamic range of I_input is limited to the dynamic range of the actual camera that captured the image. In preferred embodiments of the present invention, all the computations described in the previous modules use this virtual image.
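The virtual-camera mapping can be sketched as a single per-pixel function. The exponent R_D = X·(B − X)/B below is a best-effort reading of the (partially garbled) Equations 14-15 in this text, so treat it as a reconstruction rather than the patent's verified formula; note that the form of Eq. 14 guarantees the maximum input intensity 2^X maps to the virtual maximum 2^B for any R_D:

```python
def virtual_pixel(i_input, x_bits, b_bits):
    """Expand a pixel's dynamic range to simulate a virtual camera of
    B bits/pixel from an input camera of X bits/pixel, per Eq. 14:
        I_virtual = 2**(B - X*R_D) * I_input**R_D
    with R_D = X*(B - X)/B taken as a reconstruction of Eq. 15.
    """
    r_d = x_bits * (b_bits - x_bits) / b_bits
    return 2 ** (b_bits - x_bits * r_d) * i_input ** r_d

# The maximum 8-bit input intensity 2**8 maps to the 12-bit virtual
# maximum 2**12, regardless of the exact R_D:
print(virtual_pixel(256.0, x_bits=8, b_bits=12))  # ~4096.0
```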
  • A block diagram of an image enhancement system 300 according to the present invention is shown in FIG. 3.
  • The image enhancement system 300 comprises embodiments of an adaptive contrast enhancer module 210, a dynamic range module 220 and a virtual image generation module 230, as discussed above.
  • The system 300 receives an input image I_input(i,j) that is operated on by the virtual image generation module 230 to produce a virtual image I_virtual(i,j).
  • The virtual image I_virtual(i,j) is then converted to a log image z(i,j) by the logarithmic operator 101.
  • The adaptive contrast enhancer module 210 is used to produce the adaptive contrast aΔz(i,j).
  • The dynamic range module 220 receives both the log image z(i,j) and the adaptive contrast aΔz(i,j) to produce a modified log image.
  • The exponential operator 103 then converts the modified log image to a modified image.
  • The multiplier 107 then multiplies the modified image by a scale factor K to produce a compensated image I_output(i,j).
  • The virtual image generator module 230 preferably uses the virtual camera parameter B in the calculations for the virtual image.
  • The virtual camera parameter may be provided as a user-specified input 290.
  • The virtual camera parameter B may also be used to calculate the scale factors γ_C and γ_D used in the calculations in the dynamic range module 220.
  • The virtual camera parameter B may also be used in the calculation of the scale factor K used in producing the compensated image I_output(i,j).
  • The value of the dynamic range module scale factor γ_D may be calculated as follows:
  • Equation 16 suggests that the scaling factor for the dynamic range will monotonically decrease with an increase in the bits/pixel value of the virtual image.
  • The scaling factor γ_C for the output of the adaptive contrast enhancer 210 can be readily obtained using Equation 10.
  • The γ_D calculation 291 in FIG. 3 may be obtained from Equation 16 and the virtual camera parameter B.
  • The γ_C calculation 293 may be obtained from γ_D and Equation 10.
  • K = (2^Y − 1) · e^(B/A) / (I_virtual^max)^(γ_D)
  • where A represents the average image intensity for the input image
  • and Y represents the dynamic range (in bits/pixel) of a display device used to display the compensated image.
  • The K factor helps in scaling the compensated image I_output to be within the visible range of the display device, so as to visualize the compensation/enhancement as well as use the image for further processing, if necessary. Values below and above the display range are clipped.
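The final scale-and-clip step can be sketched as follows. Since the exact formula for K is garbled in this text, K is simply passed in as a precomputed value here; only the mapping into the display range [0, 2^Y − 1] and the clipping are shown:

```python
def to_display(pixels, k, y_bits=8):
    """Scale compensated intensities by K and clip to the display range
    [0, 2**Y - 1], where Y is the display's bits/pixel.  In the patent,
    K is derived from B, the average intensity A, and gamma_D; here it
    is treated as already computed.
    """
    top = float(2 ** y_bits - 1)
    return [min(max(k * p, 0.0), top) for p in pixels]

print(to_display([0.5, 1.2, 3.0], k=100.0))  # [50.0, 120.0, 255.0]
```

For an 8-bit display, anything scaling above 255 saturates and anything below 0 is floored, exactly as the clipping sentence above describes.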
  • Image processing systems can thereby achieve a balance between improving the dynamic range of the image and enhancing the contrast of low contrast regions while retaining the contrast of high contrast regions.
  • The parameters are global and hence need to be set only at the beginning of the computations.
  • A user need only specify the virtual camera parameter B; all other scale factors may be calculated from that one parameter.
  • Alternatively, the parameters may be computed locally for each pixel. This is done by calculating the adaptive contrast scale factor λ_C based on the dynamic range at each pixel using Equation 7. That is, rather than using a single adaptive contrast scale factor λ_C for the entire image, the adaptive contrast scale factor is calculated for each pixel in the image using Equation 7.
  • In this pixel-wise calculation, the average intensity A over the pixel's local neighborhood is used instead of the average intensity of the whole image.
  • The average intensity A is calculated using a local neighborhood of 3×3 pixels.
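The 3×3 local average used here can be sketched as follows; the border handling (clamping the window at the image edges) is an assumption, since the patent text does not specify it:

```c
#include <assert.h>

/* Illustrative sketch: average intensity over the 3x3 neighborhood
 * centered at (i, j), clamping the window at the image borders.
 * The image is stored row-major in a flat array. */
double local_mean_3x3(const double *img, int width, int height, int i, int j) {
    double sum = 0.0;
    int count = 0;
    for (int di = -1; di <= 1; di++) {
        for (int dj = -1; dj <= 1; dj++) {
            int r = i + di, c = j + dj;
            if (r >= 0 && r < height && c >= 0 && c < width) {
                sum += img[r * width + c];
                count++;
            }
        }
    }
    return sum / count;
}
```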
  • Embodiments of the present invention may be implemented in software.
  • A program written in the C programming language may implement an embodiment of the present invention.
  • Such a program loads an image contained in a disk file and displays a compensated image using the “global” image enhancement technique described above.
  • The user provides the virtual camera parameter and the program calculates the other constants necessary for the generation of the virtual image and the compensated image.
  • Distribution and installation of this software is generally accomplished using media well known in the art, such as CD-ROMs.
  • Embodiments of the present invention may be provided by a computer-based system that executes software designed to implement an embodiment of the method of the present invention.
  • FIG. 9 depicts a typical computer system that may be used in an embodiment of the present invention.
  • Program code and parameter settings may be stored in a storage device 810, such as a disk drive, tape drive, solid state memory, or other storage devices known in the art.
  • The storage device 810 may also store files containing uncompensated images and may be used to store files containing compensated images. Images may also be acquired using an image acquisition device 850, such as a CCD camera.
  • A processor 820, for example a commercially available processor such as a Pentium® microprocessor from Intel Corporation, is coupled to the image acquisition device 850 to receive the acquired image.
  • The acquired image may be output in digital form by the acquisition device 850, or the processor 820 may convert an acquired analog image to a digital image.
  • The processor 820 executes the computer-based commands needed to perform the image compensation according to embodiments of the present invention.
  • A user input device 840, such as a keyboard, is used to provide user-specified data and commands to the processor 820.
  • A display device 830, such as a computer monitor, provides a display of the compensated image, or may provide a display of multiple images, such as both the original acquired image and the compensated image.
  • An embodiment of the present invention has been applied to several example images captured using a regular CCD camera under various illumination conditions. The results of applying this embodiment are shown in FIGS. 4, 5, and 6.
  • FIGS. 5 and 6 show the effect of using different B values for two very different illumination conditions.
  • In FIG. 5, the image is of vehicles inside a tunnel.
  • The overall contrast has improved considerably.
  • As B is increased, the contrast is amplified further, so, depending on the application, an appropriate B can be chosen.
  • A similar improvement can also be observed for the images of FIG. 6.
  • The mist due to rain covers the back of the car and makes it unclear.
  • Enhancing the images according to the present invention improves them considerably.
  • Another useful aspect of the present invention is that it may be applied to patches of a scene just as easily as to the whole image, without the need to change any system parameters. This feature is especially desirable when computational speed is important.
  • By facilitating the processing of small image patches without any change in the overall method, interesting applications in computer vision become possible, such as robust tracking of objects under variable illumination. This capability is demonstrated through a vision-based object-tracking example, as shown in FIG. 7.
  • If a video track is established for a vehicle, such as the small pickup truck shown in FIG. 7, and the pickup truck goes under a bridge (as shown in FIG. 7), then the video track could be lost due to the drastic change in illumination.
  • Embodiments of the present invention address this problem by applying the provided image enhancement selectively within the predicted video track. This enables the features to be tracked in a stable fashion, since both the dynamic range and contrast information are improved for the video track. This can be observed by comparing the area around the pickup truck in the input image to the compensated image shown in FIG. 7.
  • Results obtained using an embodiment of the present invention are compared with other methods for image enhancement.
  • The other methods are histogram equalization, as described in Jain (discussed above), which is an example of the most common algorithm in the first class of algorithms outlined above, and the Narendra-Fitch algorithm, as described in the reference authored by P. M. Narendra and R. C. Fitch (discussed above), which is an example of the second class of algorithms outlined above.
  • These two approaches have been selected for comparison because they are the most commonly used methods in their respective classes of algorithms and are computationally the most efficient.
  • The first comparison was performed for a tunnel scene with vehicles in front of a host vehicle equipped with an 8-bit/pixel CCD camera.
  • This scene is an extreme situation in which the whole image is dark and even details of the lanes near the vehicle are not clearly visible (see FIG. 8, first row, first column).
  • The results of the three approaches are shown in the remaining three columns of the first row.
  • The histogram-equalized image looks quite washed out because of an imbalance between the dynamic range in the image and the contrast enhancement. This is because histogram equalization attenuates pixels with low contrast. For these pixels, neighboring pixels map to the same bin of the histogram, resulting in a loss of small contrasts.
  • The Narendra-Fitch algorithm performs better than histogram equalization in terms of contrast and overall image quality.
  • The image produced by an embodiment of the present invention is superior to both of these methods in terms of contrast and dynamic range; a qualitative comparison shows that it is better than both of the other prior art approaches.
  • The second comparison is for a road scene on a rainy day with vehicles in front of a host vehicle equipped with an 8-bit/pixel CCD camera.
  • This scene is also complicated because of the foggy conditions and the mist, which make the vehicles very fuzzy in appearance.
  • The histogram equalization method performs an extreme compensation for dynamic range at the expense of contrast information. This can be seen in FIG. 8 (second row, second column).
  • The results for the Narendra-Fitch algorithm and the embodiment of the present invention are better. However, the results obtained using the present invention are still superior to the Narendra-Fitch algorithm in terms of the balance between dynamic range and contrast information.

Abstract

A method for enhancing the quality of a digital image by using a single user-defined parameter. A virtual image is created based on the single user-defined parameter and the original digital image. An adaptive contrast enhancement algorithm operates on a logarithmically compressed version of the virtual image to produce adaptive contrast values for each pixel in the virtual image. A dynamic range adjustment algorithm is used to generate logarithmic enhanced pixels based on the adaptive contrast values and the pixels of the logarithmically compressed version of the virtual image. The logarithmic enhanced pixels are exponentially expanded and scaled to produce a compensated digital image.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application is a divisional application of U.S. application Ser. No. 10/356,155, filed on Jan. 30, 2003, now U.S. Pat. No. 7,164,808, which is related to and claims the benefit of U.S. Provisional Application 60/406,462, filed on Aug. 27, 2002, which is incorporated by reference in its entirety.
BACKGROUND
1. Field
The present invention relates to illumination compensation in digital images. More particularly, the present invention relates to balancing the dynamic range of a scene within a digital image with contrast enhancement of the scene.
2. Description of Related Art
Image enhancement refers to the science of improving the quality of an image based on some absolute measure. While the problems with image quality are numerous, such as focus problems, depth-of-field issues, motion blur, the dynamic range of the scene, and other sources of noise, the main focus in the digital image processing community has been to improve or enhance fine structure contrasts. This is primarily referred to as contrast enhancement. Contrast enhancement is useful when image contrasts are imperceptible or barely perceptible. Low contrast can be caused by scenes that are hazy or have other poor illumination conditions. This problem is further exacerbated by the fact that cameras with low dynamic range capture the scene with a reduction in dynamic range from the true range present in the scene, and this comes with the penalty of further loss of contrast. However, there is experimental evidence suggesting that human perception (subjective quality) is best when the image's contrast and dynamic range are high. This is the primary motivation for developing a new method for image enhancement.
Contrast enhancement techniques for images are broadly classified into two classes. The first class modifies the intensity histogram of images for contrast enhancement. A special case is the histogram equalization method. See, for example, Anil K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, 1989, pp. 241-244. Histogram equalization applied to an entire image has the disadvantage of attenuating or even removing contrast information of small magnitudes in the scarcely populated histogram regions. This is because neighboring pixels in such regions are mapped to the same output histogram value.
Another method using histogram modification is adaptive histogram equalization. Adaptive histogram equalization applies histogram modification based on local statistics of an image rather than on a global scale. See, for example, R. A. Hummel, “Image Enhancement By Histogram Transformation,” Computer Vision Graphics and Image Processing, vol. 6, 1977, pp. 184-195; R. B. Paranjape, W. M. Morrow and R. M. Rangayyan, “Adaptive-Neighborhood Histogram Equalization For Image Enhancement,” Computer Vision Graphics and Image Processing, vol. 54, no. 3, 1992, pp. 259-267; and D. Mukherjee and B. Chatterji, “Adaptive Neighborhood Extended Contrast Enhancement And Its Modifications,” Graphical Models and Image Processing, vol. 57, no. 3, 1995, pp. 254-265. Contrast enhancement methods based on adaptive histogram equalization are generally computationally slow. Since such methods operate on purely local statistics, they sometimes create artifacts that make it hard to distinguish real objects from clutter. There are other variations of adaptive histogram techniques that address the trade-off between computational speed and accuracy of enhancement. See, for example, S. M. Pizer et al., “Adaptive Histogram Equalization And Its Variations,” Computer Vision Graphics and Image Processing, vol. 39, 1987, pp. 355-368. J. Alex Stark, in “Adaptive Image Contrast Enhancement Using Generalizations Of Histograms,” IEEE Transactions on Image Processing, vol. 9, no. 5, May 2000, describes a generalized histogram representation with the goal of reducing the number of parameters to adjust while still obtaining a wide variety of contrast enhancement results.
A second class of contrast enhancement techniques operates directly on the image by applying the principle of separating high and low frequency content in the image. In these cases, the image histogram is not adjusted. The frequency contents are instead manipulated and filtered in such a way as to enhance the image contrast. Two examples of this approach are the homomorphic filtering method and the unsharp masking approach. The homomorphic filtering method is described in more detail by A. V. Oppenheim, R. W. Schafer and T. G. Stockham Jr., in “Nonlinear Filtering Of Multiplied And Convolved Signals,” Proc. of IEEE, vol. 56, no. 8, 1968, pp. 1264-1291.
More recent variations of techniques based on the separation of high and low frequency content operate directly on the image based on purely local statistics. See, for example, P. M. Narendra and R. C. Fitch, “Real-Time Adaptive Contrast Enhancement,” IEEE Trans. On Pattern Analysis and Machine Intelligence, vol. 3, no. 6, 1981, pp. 655-661. The local statistics of the image are characterized by the mean and variance in intensity. The mean represents the low frequency content while the variance represents the high frequency content in the image. An adaptive scheme based on these two parameters is used to manipulate the image content so as to enhance contrast in the image. These approaches have also been extended using a multi-scale algorithm as described in A. Toet, “Adaptive Multi-Scale Contrast Enhancement Through Non-Linear Pyramid Recombination,” Pattern Recognition Letters, vol. 11, 1990, pp. 735-742, and K. Schutte, “Multi-Scale Adaptive Gain Control Of IR Images,” Proc. of SPIE, vol. 3661, 1997, pp. 906-914. A similar approach is the ON-OFF filter that was designed as a set of two parallel modules that measures local statistics using the differences of a Gaussian filter. See, for example, S. Grossberg and D. Todorovic, “Neural Dynamics Of 1-D And 2-D Brightness Perception: A Unified Model Of Classical And Recent Phenomena,” Perception and Psychophysics, vol. 43, 1988, pp. 241-277. In this approach, contrast enhancement is realized by combining the outputs of the two modules to improve robustness. The OFF module performs an image inversion on the input image before extracting contrast information.
Prior art methods for contrast enhancement are typically deficient either in the quality of image enhancement due to inherent problems with the method or are computationally slow and require image-specific adjustments of several system parameters. As discussed above, prior art methods based on modification of the intensity histogram of images have the disadvantage of attenuating or even removing contrast information of small magnitudes in the scarcely populated histogram regions. This result occurs because neighboring pixels in such regions are mapped to the same output histogram value. Methods based on adaptive histogram modification attempt to address this problem, but these methods are computationally slow. As also discussed above, other prior art methods operate directly on the image by applying the principle of separating high and low frequency content in the image. In these methods, the image histogram is typically not adjusted. The frequency contents are instead manipulated and filtered in such a way as to enhance the image contrast. These methods are also generally computationally slow and require image specific adjustments to several parameters for improved results. Further, while the contrast enhancement is improved in most cases, the dynamic range of the image is not.
Therefore, there exists a need in the art for enhancing the contrast of a digital image without attenuating or removing contrast information in portions of the image and without requiring significant computation times. There also exists a need in the art for performing such contrast enhancement without requiring image specific adjustments to several parameters to obtain satisfactory results. Finally, there exists a need in the art for controlling dynamic range while contrast enhancement is being performed.
SUMMARY
Embodiments of the present invention provide a method and apparatus for balancing the dynamic range and contrast content in an image such that the overall quality of the image is enhanced by adjusting only a single user-defined parameter. Embodiments of the present invention provide contrast enhancement without requiring image specific adjustments to several parameters for improved results. Embodiments of the present invention provide the desired illumination compensation in a computationally efficient manner.
An embodiment of the present invention provides a method for enhancing the quality of a digital image by using a single user-defined parameter. A virtual image is created based on the single user-defined parameter and the original digital image. An adaptive contrast enhancement algorithm operates on a logarithmically compressed version of the virtual image to produce adaptive contrast values for each pixel in the virtual image. A dynamic range adjustment algorithm is used to generate logarithmic enhanced pixels based on the adaptive contrast values and the pixels of the logarithmically compressed version of the virtual image. The logarithmic enhanced pixels are exponentially expanded and scaled to produce a compensated digital image.
Embodiments of the present invention provide image enhancement that can concurrently balance the dynamic range in a scene with the contrast in the scene. A virtual image of the scene is generated to facilitate dynamic range compensation. The virtual image is generated to simulate the dynamic range that would be captured from a scene as if the scene were viewed by a virtual camera of wider dynamic range. An adaptive contrast enhancement algorithm operates on the local image statistics measured at each pixel. Using the virtual image in combination with the adaptive contrast enhancement algorithm, embodiments of the present invention are able to compensate for illumination for a wide range of illumination conditions. The embodiments are computationally efficient and the quality of the output produced by the various embodiments may be controlled using a single user-defined parameter.
An embodiment of the present invention comprises a method for image enhancement having the steps of: receiving a digital image of a scene, the digital image comprising a plurality of digital image pixels; generating a virtual image of at least a portion of the digital image pixels, the virtual image having a plurality of virtual pixels, each virtual pixel having a pixel intensity; logarithmically converting the pixel intensities of at least a portion of the virtual pixels to produce a plurality of log virtual pixels; performing adaptive contrast enhancement on at least a portion of the log virtual pixels to produce a plurality of adaptive contrast values corresponding to said at least a portion of log virtual pixels; performing dynamic range adjustment using said at least a portion of the log virtual pixels and said plurality of adaptive contrast values to produce a plurality of log enhanced pixels; and exponentially converting at least a portion of the log enhanced pixels to produce a plurality of enhanced pixels, the plurality of enhanced pixels comprising an enhanced digital image.
Another embodiment of the present invention comprises an apparatus for producing an enhanced image from a digital image having a plurality of digital image pixels, the apparatus comprising: means for expanding a dynamic range of at least a portion of the digital image pixels to produce a virtual image comprising a plurality of virtual pixels; means for calculating a corresponding adaptive contrast value for a corresponding virtual pixel of the plurality of virtual pixels, said means for calculating receiving said plurality of virtual pixels and producing a plurality of adaptive contrast values; means for adjusting an intensity of a corresponding virtual pixel of said plurality of virtual pixels, said means for adjusting receiving said plurality of adaptive contrast values and said plurality of virtual pixels and producing a plurality of enhanced pixels; and, means for scaling said plurality of enhanced pixels to produce said enhanced image.
Still another embodiment of the present invention comprises a method for enhancing the contrast of a digital image wherein said digital image comprises an array of digital pixels, each digital pixel having a dynamic range, said method having the steps of: specifying a virtual camera parameter; generating an array of virtual pixels from said array of digital pixels, each virtual pixel having an expanded dynamic range based on said virtual camera parameter; logarithmically converting said array of virtual pixels to an array of log virtual pixels; generating an array of adaptive contrast values corresponding to said array of log virtual pixels; generating an array of log enhanced pixels from said log virtual pixels and from said array of adaptive contrast values; exponentially converting said array of log enhanced pixels to an array of enhanced pixels, each enhanced pixel having an intensity; and scaling the intensity of each enhanced pixel to be within a specified range for an image output device.
Still another embodiment of the present invention provides a method of creating a virtual image of a digital image, the virtual image having a higher dynamic range than the digital image, the digital image and the virtual image comprising a plurality of pixels, and the method comprising the steps of: specifying a virtual camera parameter; determining a pixel intensity for each pixel in the digital image; and calculating a pixel intensity for each pixel in the virtual image based on the virtual camera parameter and the pixel intensity for each pixel in the digital image.
Embodiments of the present invention are not limited to hardware only, software only, or firmware only implementations. Those skilled in the art will understand that embodiments of the present invention may be implemented using a variety of software, hardware, and/or firmware technologies known in the art.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a block diagram for modeling the transformation of a viewed scene to an image as captured by an imaging device.
FIG. 2 shows a block diagram of an adaptive contrast enhancement process according to an embodiment of the present invention.
FIG. 3 shows a block diagram of an image enhancement system according to an embodiment of the present invention.
FIG. 4 depicts the enhancement in overall image quality and contrast provided by an embodiment of the present invention by showing three original images and their contrast and three compensated images and their contrast.
FIG. 5 depicts an original image of vehicles in a tunnel and the effect of specifying different virtual camera parameters with an embodiment of the present invention on the resulting compensated images.
FIG. 6 depicts an original image of vehicles in a rain scene and the effect of specifying different virtual camera parameters with an embodiment of the present invention on the resulting compensated images.
FIG. 7 depicts the effect of compensating only a portion of an image with an embodiment of the present invention.
FIG. 8 shows the results obtained for using different compensation techniques known in the art and the results obtained using an embodiment of the present invention on two original images.
FIG. 9 depicts a hardware block diagram for a computer system according to an embodiment of the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
A typical device used to view and capture a scene is a Charge Coupled Device (CCD) camera array. During the image capturing process, the variations in the scene's illumination are transformed into a set of luminance values on the CCD camera array. When the scene illumination is not uniform (such as in a sunlit landscape), the dynamic range (i.e., the ratio of maximum luminance in the scene to the minimum luminance) can be on the order of 2000. Typically, a CCD camera has a dynamic range between 16 and 32. Thus, a CCD camera may not capture the true range of illumination variations found in the sunlit scene, for example. The images captured by the CCD camera are instead compressed in their dynamic range. In particular, this loss of information due to dynamic range compression leads to loss of contrast information. In other words, contrast is attenuated, wherein barely noticeable contrast becomes unnoticeable. This phenomenon can be observed using a simple mathematical model as described below.
Let the scene image be represented by ISCENE and the corresponding image by the CCD camera be represented by ICCD. Then the dynamic range transformation from ISCENE to ICCD can be modeled using the point transfer function as:
I_CCD = K · I_SCENE^γ  (Eq. 1)
where γ is the dynamic range constant and K is a constant scaling factor. The parameter γ controls the degree of dynamic range compression (if γ<1.0) or expansion (if γ>1.0). For most regular CCD cameras with reduced dynamic range, γ is less than 1.0. Using Equation 1, for small changes in the scene illumination δI_SCENE, the corresponding change in the CCD image δI_CCD can be written as:
δI_CCD / I_CCD = γ · δI_SCENE / I_SCENE  (Eq. 2)
where the two ratios on the left and right side of Equation 2 represent contrast information for the CCD and the scene images, respectively. Equation 2 suggests that the contrast in the CCD image is scaled by the γ parameter but is independent of intensity of illumination. Thus, for γ<1, as is observed in most CCD cameras, the contrast information registered by the CCD camera for any scene image is attenuated and furthermore is scaled down uniformly irrespective of the ambient illumination conditions (i.e., in bright or dark areas of the image).
One common approach to model the image registration process described above is based on the block diagram as shown in FIG. 1. The scene image ISCENE is first converted by a logarithmic operator 101 into a log image using an intermediate variable z. A multiplier 105 multiplies the log image z by γ. An exponential operator 103 computes the exponent of the γ times z product. A multiplier 107 then multiplies the resultant by the constant K to complete the transformation of ISCENE into ICCD.
Embodiments of the present invention essentially involve the modification of the portion of the block diagram shown in FIG. 1 before the exponential operator 103 to provide for simultaneous adjustment of the dynamic range and the contrast in an image. Embodiments of the present invention comprise three main modules: an adaptive contrast enhancing module, a dynamic range module, and a virtual image generation module. Preferably, the adaptive contrast enhancing module and the dynamic range module work in tandem to simultaneously enhance the overall image. The virtual image generation module is used since the true image ISCENE is unknown, and only the acquired image ICCD, with a limited dynamic range, is available for processing. The virtual image generation module helps to address this problem by creating a virtual image of high dynamic range.
Embodiments of the present invention are described below. First, the details of a preferred embodiment of the adaptive contrast enhancement module are presented. Second, a description of a preferred embodiment of the dynamic range module is presented. Third, a preferred embodiment of the virtual image generation module is presented. Fourth, embodiments of a system comprising the three modules are discussed, where the parameters of the system may be adjusted at both a pixel level as well as a global level by defining a single user parameter that enables the automatic setting of all other system parameters. Fifth, software and hardware implementations of the present invention are discussed. Finally, results of image enhancement according to the present invention will be presented.
Adaptive Contrast Enhancement
The adaptive contrast enhancement module operates on the local image statistics measured at each pixel (i,j) of the log image z(i,j) as follows. The log image representation has a useful property in that the local difference δz(i,j) is directly equal to the contrast in the input image I_SCENE at pixel (i,j). The local contrast at any given pixel is measured as the difference between z(i,j) and z_mean(i,j). The mean intensity z_mean(i,j) represents the mean intensity in the log domain within the local neighborhood of the pixel. In a preferred embodiment of the present invention, the local neighborhood is defined using a 3 pixel by 3 pixel window. Windows of other sizes may be used, but a 3×3 window is preferred because it is computationally efficient and provides the sharpest contrast. The advantages of the locality property also generally decrease with larger windows.
Adaptive contrast enhancement is described by P. M. Narendra and R. C. Fitch in “Real-Time Adaptive Contrast Enhancement,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-3, no. 6, November 1981, pp. 655-661. It is also described by M. A. Cohen and S. Grossberg in “Neural Dynamics Of Brightness Perception: Features, Boundaries, Diffusion And Resonance,” Perception and Psychophysics, vol. 41, 1987, pp. 117-158. Typically, in these previous efforts and others known in the art, the local contrast is scaled by a constant factor and then combined with the scaled image from a dynamic range module. However, such a procedure does not discriminate between small and large contrast regions. As a result, the choice of the scaling factor has to balance preventing large contrasts from saturating the image against preventing small contrast regions from becoming imperceptible. Thus, according to these prior art procedures, image-specific tuning of the scaling factor is generally required to avoid saturating the image while still allowing small contrast regions to be seen.
According to embodiments of the present invention, instead of allowing a single scale factor to control the contrast content, an adaptive scale parameter π(i,j) for each pixel is generated. The value of this parameter is defined as:
π(i,j) = [v(i,j) − C]+ / v(i,j)  (Eq. 3)
where v(i,j) is the variance of intensity at pixel (i,j) and C is a constant. The function [x]+=max(x,0) represents a rectification of input x.
In Equation 3, the variance v(i,j) is computed as (I(i,j)−M(i,j))2 where I(i,j) is the intensity at pixel (i,j) and M(i,j) is the mean intensity of pixels within the local neighborhood centered at pixel (i,j). In Equation 3, the constant C must be greater than zero and should be small (C<0.001). In a preferred embodiment of the present invention, a value of 0.0005 is used for C. Typically, a small constant C ensures that the rectification in the numerator of Equation 3 will be greater than zero and thus makes the adaptive scale parameter π(i,j) effective.
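Equation 3 with the preferred C = 0.0005 can be sketched as follows (the guard for variances at or below C, where the rectified numerator is zero, is made explicit to avoid division by zero):

```c
#include <assert.h>

/* Adaptive scale parameter of Eq. 3: pi = [v - C]+ / v, where v is the
 * local intensity variance and C is a small positive constant
 * (0.0005 in the preferred embodiment). When v <= C the rectified
 * numerator is zero, so pi is zero. */
#define C_CONST 0.0005

double adaptive_scale(double variance) {
    if (variance <= C_CONST) return 0.0;
    return (variance - C_CONST) / variance;
}
```

As the text explains, a high-variance (high contrast) pixel yields π near 1.0, while a near-zero variance yields π near 0.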
The adaptive scale parameter π(i,j) operates as follows. When the observed intensity z(i,j) at a given pixel is larger than the mean zmean(i,j), the observed intensity at that pixel corresponds to a large contrast region, and in a large contrast region the variance v(i,j) at that pixel is large. Using the adaptive scale parameter, a new adaptive local mean azmean(i,j) is defined as:
az MEAN(i,j)=(1−π(i,j))z MEAN(i,j)+π(i,j)z(i,j)  (Eq. 4)
where the adaptive mean is derived as a weighted sum between the actual mean zmean(i,j) and the input signal z(i,j). Finally, the adaptive contrast aδz(i,j) at a pixel is computed as follows:
aδz(i,j)=z(i,j)−az MEAN(i,j)  (Eq. 5)
Equation 4 above is similar to Equation (1) in Narenda and Fitch, discussed above. However, the algorithm disclosed by Narenda and Fitch directly operates on the input image, while embodiments of the present invention generally operate on the log value of the image. Further, the weighting in embodiments of the present invention is adaptive while that described in Narenda and Fitch depends on a user defined constant. Narenda and Fitch disclose that this constant needs to be set manually on a per image basis, especially for high dynamic range images, so as to balance out large excursions in intensity. While embodiments of the present invention may rely on the setting of the constant C for the calculation of the adaptive scaling parameter (see Equation 3 above), this constant does not have to be changed on a per image basis. Essentially, the setting of the constant C is more relaxed, since it is preferably small and is relatively insensitive to variations within that bound, irrespective of the type of image. Further, Narenda and Fitch do not disclose the balance between contrast enhancement and dynamic range content in an enhanced scene provided by embodiments of the present invention as is described below.
Using the weighted scheme described above, the adaptive contrast is adjusted as follows. From Equation 3, when the variance in intensity is high at a pixel, then the adaptive parameter π(i,j) is close to 1.0. From Equation 4, when the adaptive parameter π(i,j) is close to 1, the adaptive local mean azmean(i,j) almost fully corresponds to the observed intensity z(i,j) at that pixel. From Equation 5, when the adaptive local mean is nearly equal to the observed intensity, the adaptive contrast aδz(i,j) at that pixel is close to 0. Therefore, high contrast regions in the input image cause minimal change to the adaptive contrast. Similarly, from Equation 3, if the variance at a pixel is close to 0, the adaptive parameter π(i,j) is close to 0. From equation 4, the adaptive parameter π(i,j) being close to 0 results in the adaptive local mean azmean(i,j) almost fully corresponding to the actual mean intensity zmean(i,j) at that pixel. Since the local contrast is defined as the difference between the intensity z(i,j) and the actual mean intensity zmean(i,j), when the adaptive local mean is nearly equal to the actual mean intensity, Equation 5 provides that the adaptive contrast aδz(i,j) at that pixel is close to the actual contrast information. Therefore, low contrast regions in the input image will result in minimal loss of adaptive contrast. The adaptive approach described above provides the ability to strike a balance to preserve both large contrasts without saturating the image while at the same time enhance the small contrasts so as to make them perceptible.
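The per-pixel behavior of Equations 4 and 5 can be sketched as follows (illustrative names, operating on log-domain values; this is not the patent's own listing):

```c
/* Sketch of Equations 4 and 5: adaptive local mean and adaptive
   contrast at one pixel. z is the observed log intensity z(i,j),
   z_mean the actual local mean zMEAN(i,j), and pi the adaptive scale
   parameter from Equation 3. */
double adaptive_contrast(double z, double z_mean, double pi)
{
    double az_mean = (1.0 - pi) * z_mean + pi * z;  /* Eq. 4 */
    return z - az_mean;                             /* Eq. 5 */
}
```

When pi is 1 (high variance) the result collapses to 0, and when pi is 0 (low variance) it reduces to the ordinary local contrast z − zMEAN, as the preceding paragraph describes.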
FIG. 2 shows a block diagram that implements the process of adaptive contrast computation described above. In FIG. 2, the scene image Iinput is converted by the logarithmic operator 101 into a log image using an intermediate variable z. A local mean operator 211 calculates the local mean intensity LM in the log domain within the local neighborhood of the pixel (i,j). As described above, a preferred local neighborhood is 3×3, but other sizes may be used. The local mean intensity LM is then subtracted from the log image z at a first subtractor 213 to provide a local difference value LD. The adaptive parameter π(i,j) is calculated as discussed above. A multiplier 215 then multiplies the local difference value by the adaptive parameter, and the resulting product is added to the local mean intensity by adder 217 to create an adaptive local mean ALM, consistent with Equation 4. A second subtractor 219 subtracts the adaptive local mean from the log image z to create an adaptive local difference ALD, also referred to as “contrast gain,” consistent with Equation 5.
Dynamic Range Adjustment
Embodiments of the present invention also preferably comprise a dynamic range adjustment module. The dynamic range of an image (scene, camera image, etc.) can be defined as follows:
D = zmax/zmin  (Eq. 6)
where zmax and zmin correspond to maximum and minimum intensity found in the image. The local dynamic range for each pixel in the image may be considered as the ratio of the intensity of each pixel to the minimum intensity of the image. That is, the local dynamic range Di(i,j) at each pixel may be calculated as follows:
Di(i,j) = I(i,j)/zmin  (Eq. 7)
where I(i,j) represents the intensity of each pixel.
The dynamic range adjustment module of embodiments of the present invention essentially performs a scaling operation on the log image. The scaling operation may be represented as follows:
z adj(i,j)=z(i,j)*γD  (Eq. 8)
where γD is the dynamic range scale factor. This operation ensures that zmax and zmin are separated further apart so that D is high.
The scale factor γD is, however, adjusted such that it forms a convex sum (see Equation 10 below) with the scale factor γC used for the adaptive contrast enhancement discussed above. The convex sum ensures that the overall effect on the compensated image is balanced as a compromise between the dynamic range component and the contrast component. The value for the constant factor in Equation 10 decides the overall magnitude of the compensation. In a preferred embodiment of the present invention, the constant factor is 1.0. The overall effect of the adaptive contrast enhancement module and the dynamic range adjustment module is that a modified log image zmod is obtained that can be represented as:
z mod(i,j)=aδz(i,j)*γc +z(i,j)*γD  (Eq. 9)
where:
γCD=const.  (Eq. 10)
In preferred embodiments of the present invention, the values for the two scale factors, γC and γD, will be chosen based on a user defined input as is described below.
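The combination of Equations 9 and 10 (with the preferred constant of 1.0) can be sketched per pixel as follows; the names are illustrative and not from the program listing below:

```c
/* Sketch of Equations 9 and 10: modified log image value z_mod at one
   pixel. adz is the adaptive contrast from Equation 5, z the log
   intensity; gamma_c is derived from gamma_d via the convex sum with
   const = 1.0 (the preferred value). */
double modified_log(double adz, double z, double gamma_d)
{
    double gamma_c = 1.0 - gamma_d;      /* Eq. 10 */
    return adz * gamma_c + z * gamma_d;  /* Eq. 9 */
}
```

The convex sum means that raising the weight on the dynamic range component necessarily lowers the weight on the contrast component, and vice versa.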
Virtual Image Generation
Finally, preferred embodiments of the present invention also comprise a virtual image generation module. A virtual image generation module provides improved performance for the following reasons. In the adaptive contrast enhancement modules and dynamic range adjustment modules, it is assumed that the true image ISCENE with its inherent high dynamic range is known. However, ISCENE is unknown. Instead, what is available is an image Iinput that has already been captured by an imaging device, such as a CCD camera. The captured image Iinput will necessarily have a lower dynamic range than the true image ISCENE. The virtual image generation module helps to address this problem by creating a virtual image of high dynamic range.
In order to create a virtual image, a parameter is used that defines the “desired” dynamic range for the virtual camera image. This parameter may be a user-defined parameter. The virtual camera parameter B represents the bits/pixel for the virtual camera. The higher the bits/pixel, the greater the dynamic range of the virtual camera image. Once the virtual camera parameter B is defined, the maximum intensity that the virtual camera can register can be computed as 2^B.
As shown above in Equation 7, the dynamic range Di(i,j) for the pixels within the actual camera image can be calculated based upon the minimum intensity zmin within the image. Alternatively, as described immediately above, the dynamic range Di(i,j) can be calculated based upon the maximum possible intensity of the actual camera image. That is, if the actual camera image is acquired with a camera having an intensity resolution of X bits/pixel, the maximum possible intensity for a pixel is 2^X. The dynamic range for any given pixel in the input image may then be calculated as follows:
Di(i,j) = 2^X/Iinput(i,j)  (Eq. 11)
The minimal intensity for the virtual camera image will occur at the same pixel that has the minimal intensity in the input image. At that pixel, the ratio of the dynamic range for the virtual image Dv(i,j) to the dynamic range for the input image Di(i,j) is 2(B−X), where B and X are as defined above. Using this ratio, a transformation model from the dynamic range for the input image to the dynamic range of the virtual image is defined as follows:
Dv(i,j) = Di(i,j)^RD, if B &gt; X
Dv(i,j) = Di(i,j), if B = X,
where R = X/B and D = B − X  (Eq. 12)
Standard imaging devices, such as CCD cameras, typically provide a bits/pixel value of 8, so X in Equation 12 is typically 8.
The dynamic range of the virtual image Dv(i,j) may also be defined in the same manner as the dynamic range for the input image Di(i,j) shown by Equation 11. That is,
Dv(i,j) = 2^B/Ivirtual(i,j)  (Eq. 13)
The pixel-wise intensities for the virtual image Ivirtual(i,j) can then be found from Equations 11, 12, and 13 as follows:
Ivirtual(i,j) = 2^(B − XRD) · Iinput(i,j)^RD  (Eq. 14)
or, alternatively,
Ivirtual(i,j) = 2^(B − X·(X/B)·(B−X)) · Iinput(i,j)^((X/B)·(B−X))  (Eq. 15)
As defined, the computation of the intensities of the pixels of the virtual image makes use of the single virtual camera parameter B, which, as indicated above, may be user defined, and the intensity resolution X of the input image. While Ivirtual(i,j) cannot be visualized as a standard image (because of higher bits/pixel), it provides an image with a higher dynamic range compared to Iinput (the actual camera image) described in the previous sections, since, as described above, the dynamic range of Iinput is limited to the dynamic range of the actual camera that captured the image. In preferred embodiments of the present invention, all the computations described in the previous modules use this virtual image.
Image Enhancement System
A block diagram of an image enhancement system 300 according to the present invention is shown in FIG. 3. The image enhancement system 300 comprises embodiments of an adaptive contrast enhancer module 210, a dynamic range module 220 and a virtual image generation module 230, as discussed above. The system 300 receives an input image Iinput(i,j) that is operated on by the virtual image generation module 230 to produce a virtual image Ivirtual(i,j). The virtual image Ivirtual(i,j) is then converted to a log image z(i,j) by the logarithmic operator 101. The adaptive contrast enhancer module 210 is used to produce the adaptive contrast aδz(i,j). The dynamic range module 220 receives both the log image z(i,j) and the adaptive contrast aδz(i,j) to produce a modified log image. The exponential operator 103 then converts the modified log image to a modified image. The multiplier 107 then multiplies the modified image by a scale factor K to produce a compensated image Ioutput(i,j).
As discussed above, the virtual image generator module 230 preferably uses the virtual camera parameter B in the calculations for the virtual image. The virtual camera parameter may be provided as a user-specified input 290. The virtual camera parameter B may also be used to calculate the scale factors, γC and γD, used in the calculations in the dynamic range module 220. Finally, the virtual camera parameter B may also be used in the calculation of the scale factor K used in producing the compensated image Ioutput(i,j).
The value of the dynamic range module scale factor γD may be calculated as follows:
γD = 1/(B − X), if B &gt; X
γD = 1, if B = X  (Eq. 16)
where B is the virtual camera parameter and X is the intensity resolution of the device used to capture the image, as discussed above. Equation 16 suggests that the scaling factor for the dynamic range will monotonically decrease with an increase in the bits/pixel value of the virtual image. The scaling factor γC for the output of the adaptive contrast enhancer 210 can be readily obtained using Equation 10. When the virtual image has the same quality as the input image, then B=X and γD is set to 1.0. This implies that γC for the input is zero. This is reasonable because the virtual image has the same bits/pixel as the input image. Hence, the γD calculation 291 in FIG. 3 may be obtained from Equation 16 and the virtual camera parameter B. The γC calculation 293 may be obtained from γD and Equation 10.
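The derivation of both scale factors from the single parameter B, per Equations 16 and 10, can be sketched as follows (illustrative names; the convex-sum constant is taken as 1.0, the preferred value):

```c
/* Sketch of Equations 16 and 10: both scale factors from the virtual
   camera parameter B and the capture device resolution X (bits/pixel).
   The convex-sum constant is assumed to be 1.0. */
void scale_factors(double B, double X, double *gamma_c, double *gamma_d)
{
    *gamma_d = (B > X) ? 1.0 / (B - X) : 1.0;  /* Eq. 16 */
    *gamma_c = 1.0 - *gamma_d;                 /* Eq. 10 */
}
```

With B = X the contrast weight vanishes, and as B grows the weight shifts from the dynamic range term to the contrast term.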
The value for K is also obtained using the B parameter as follows:
K = ((2^Y − 1) · B)/(A · (Ivirtualmax)^γD), where A = Iinputavg and Ivirtualmax = 2^B  (Eq. 17)
where A represents the average image intensity for the input image and Y represents the dynamic range (in bits/pixel) of a display device used to display the compensated image. Thus, the K factor helps in scaling the compensated image Ioutput to be within the visible range of the display device so as to visualize the compensation/enhancement as well as use the image for further processing, if necessary. Values below and above the display range are clipped. The final output Ioutput can then be expressed in terms of the three system parameters as follows:
Ioutput = K·e^(aδz·γC + z·γD)  (Eq. 18)
Using the algorithm described above, image processing systems according to the present invention can achieve a balance between improving the dynamic range of the image while still enhancing the contrast for low contrast regions and retaining the contrast for high contrast regions. The parameters are global and hence need to be set only in the beginning of the computations. As described above, a user need only specify the virtual camera parameter B and all other scale factors may be calculated from that one parameter.
In an alternative embodiment, the parameters may be computed locally for each pixel. Computations of the parameters for each pixel may be performed by calculating the adaptive contrast scale factor γC based on the dynamic range at each pixel using Equation 7. That is, rather than using a single adaptive contrast scale factor γC for the entire image, the adaptive contrast scale factor is calculated for each pixel in the image. Using Equation 7, the pixel-wise adaptive contrast scale factor is calculated as follows:
γC(i,j) = I(i,j)/zmin  (Eq. 19)
Using Equation 10, the appropriate value for γD at each pixel can be computed as follows:
γD(i,j)=const−γC(i,j)  (Eq. 20)
For the computation of the constant K in this alternative embodiment, the average intensity A in the pixel's local neighborhood is used instead of the average intensity of the whole image. Preferably, the average intensity A is calculated using a local neighborhood of 3×3 pixels. The advantage of this alternative embodiment is that the parameters are more “optimal,” but it requires more computations than the global approach.
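The pixel-wise parameters of this alternative embodiment (Equations 19 and 20, with the constant taken as 1.0) can be sketched as follows. This is illustrative only; as written, Equation 19 presumes intensities scaled so that γC(i,j) does not exceed the constant:

```c
/* Sketch of Equations 19 and 20: pixel-wise scale factors in the
   alternative embodiment. z_min is the minimum intensity in the
   image; the convex-sum constant is assumed to be 1.0 as in the
   global case. */
void local_scale_factors(double intensity, double z_min,
                         double *gamma_c, double *gamma_d)
{
    *gamma_c = intensity / z_min;  /* Eq. 19 */
    *gamma_d = 1.0 - *gamma_c;     /* Eq. 20, const = 1.0 */
}
```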
Software Implementation
Embodiments of the present invention may be implemented with software. Presented in the COMPUTER PROGRAM LISTING section of this specification is a program written in the C programming language implementing an embodiment of the present invention. This program loads an image contained in a disk file and displays a compensated image using the “global” image enhancement technique described above. The user provides the virtual camera parameter and the program calculates the other constants necessary for the generation of the virtual image and the compensated image. Distribution and installation of this software (and other software that implements embodiments of the present invention) is generally accomplished using media well-known in the art, such as CDROMs.
Hardware Implementation
Embodiments of the present invention may be provided by a computer-based system that executes software designed to implement an embodiment of the method of the present invention. FIG. 9 depicts a typical computer system that may be used in an embodiment of the present invention. Program code and parameter settings may be stored in a storage device 810, such as a disk drive, tape drive, solid state memory, or other storage devices known in the art. The storage device 810 may also store files containing uncompensated images and may be used to store files containing compensated images. Images may also be acquired using an image acquisition device 850, such as a CCD camera. A processor 820, for example, a commercially available processor such as a Pentium® microprocessor from Intel Corporation, is coupled to the image acquisition device 850 to receive the acquired image. The acquired image may be output in a digital form by the acquisition device 850 or the processor 820 may convert an acquired analog image to a digital image. The processor 820 executes the computer-based commands needed to perform the image compensation according to embodiments of the present invention. A user input device 840, such as a keyboard, is used to provide user-specified data and commands to the processor 820. A display device 830, such as a computer monitor, provides a display of the compensated image, or may provide a display of multiple images, such as both the original acquired image and the compensated image.
Results
An embodiment of the present invention has been applied to several example images captured using a regular CCD camera under various illumination conditions. The results of applying this embodiment are shown in FIGS. 4, 5, and 6. In FIG. 4, there are three rows of different input images. Each row has four columns. The first pair of columns corresponds to the input image and its corresponding contrast image using the Sobel edge operator known in the art. The next pair of columns corresponds to the compensated image provided by an embodiment of the present invention and the corresponding contrast image using the Sobel edge operator. All these examples use B=10 (and X=8). From FIG. 4, it can be seen that the contrast information in the scene has dramatically improved. Furthermore, the overall dynamic range has also improved.
FIGS. 5 and 6 show the effect of using different B values for two very different illumination conditions. In FIG. 5, the image is of vehicles inside a tunnel. For both B values, the overall contrast has improved considerably, and increasing the B value amplifies the contrast further. So, depending on the application, an appropriate B can be used. A similar improvement can also be observed for the images of FIG. 6. In this case, the mist due to rain covers the back of the car and makes it unclear. Enhancing the images according to the present invention improves them considerably.
Another useful aspect of the present invention is that it may be applied for patches of the scene just as easily as the whole image without the need to change any system parameters. This feature is especially desirable if computational speed is very important. By facilitating the processing of small image patches without any change in the overall method used, it is possible for interesting applications in computer vision to be realized, such as robust tracking of the objects under variable illumination. This capability is demonstrated through a vision-based object-tracking example, as shown in FIG. 7. Typically, if a video track is established for a vehicle, such as the small pickup truck shown in FIG. 7, and the pickup truck goes under the bridge (as shown in FIG. 7), then the video track could be lost due to this drastic change in illumination. Embodiments of the present invention address this problem by applying the provided image enhancement selectively within the predicted video track. This enables the features to be tracked in a stable fashion since both the dynamic range and contrast information are improved for the video track. This can be observed by comparing the area around the pickup truck in the input image to the compensated image shown in FIG. 7.
Finally, the results obtained using an embodiment of the present invention are compared with other methods for image enhancement. The other methods are histogram equalization as described in Jain, as discussed above, which is an example of the most common algorithm of the first class of algorithms outlined above, and the Narendra-Fitch algorithm as described in the reference authored by P. M. Narenda and R. C. Fitch, discussed above, which is an example of the second class of algorithms outlined above. These two approaches have been selected for comparison because they are the most commonly used methods in their respective classes of algorithms and are computationally the most efficient. The first comparison was performed for a tunnel scene with vehicles in front of a host vehicle equipped with an 8-bit/pixel CCD camera. This scene is an extreme situation in which the whole image is so dark that even details of the lanes near the vehicle are not clearly visible (see FIG. 8, first row, first column). The results of the three approaches are shown in the remaining three columns of the first row. The histogram-equalized image looks quite washed out because of an imbalance between the dynamic range in the image and the contrast enhancement. This is because histogram equalization attenuates pixels with low contrast. For these pixels, neighboring pixels map to the same bin of the histogram and thus result in a loss of small contrasts. The Narendra-Fitch algorithm performs better than histogram equalization in terms of contrast and overall image quality. The image produced by an embodiment of the present invention is superior to both these methods in terms of contrast and dynamic range. A qualitative comparison shows that the image produced by the embodiment of the present invention is better than both of the prior art approaches.
The second comparison is for a road scene on a rainy day with vehicles in front of a host vehicle equipped with an 8-bit/pixel CCD camera. This scene is also complicated because of the foggy conditions and the mist that make the vehicles very fuzzy in appearance. Similar to the first example, the histogram equalization method performs an extreme compensation for dynamic range at the expense of contrast information. This can be seen in FIG. 8 (second row and second column). The results for the Narendra-Fitch algorithm and the embodiment of the present invention are better. However, the results obtained using the present invention are still superior to the Narendra-Fitch algorithm in terms of the balance between the dynamic range and contrast information.
From the foregoing description, it will be apparent that the present invention has a number of advantages, some of which have been described above, and others of which are inherent in the embodiments of the invention described above. Also, it will be understood that modifications can be made to the method described above without departing from the teachings of the subject matter described herein. As such, the invention is not to be limited to the described embodiments except as required by the appended claims.
Computer Program Listing
Shown below is a computer program listing of a program in the C programming language for implementing an embodiment of the invention. This computer listing is presented to aid in the understanding of the invention. It is understood that the present invention is not limited to the computer program listing presented below. Other embodiments of the present invention may be provided by modifying this code. Still other embodiments of the present invention may be provided by software written in other programming languages. Still other embodiments of the present invention may be provided by computer programs, subroutines, functions, objects, or other software implementations having different structures, variables, memory usage, etc. than that shown below. Finally, as discussed above, embodiments of the present invention may be provided by hardware systems, firmware systems, or a mix of hardware, software, and/or firmware.
/* C program for illumination compensation */
/* In this version, all parameters are computed off-line */
/* based on the user-defined bits/pixel parameter */
/* Written by Narayan Srinivasa */
/* Copyright 2002 HRL Laboratories LLC */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#define NR 240 //image height
#define NC 320 //image width
#define scale 3 // window for extracting statistics
#define OFFSET 5 //all operation offset from the borders
#define H_CONST 0.45 //constant multiplier
#define BASEH 8  //base horizontal threshold value
#define V_CONST 0 //constant multiplier
#define BASEV 6 //base vertical threshold value
#define EDGE_H 40 //horizontal edge filter threshold
#define EDGE_V 40 //vertical edge filter threshold
#define DIFF_H 2 // gradient search distance
#define FOCAL_POINT 60 // set focal point for parameter (distance)
estimation
#define HORIZON 60 // set horizon for edge detection, parameter
(distance) estimation...
// store offsets for statistics computation
int s_x[9] = {−1,0,1,−1,0,1,−1,0,1};
int s_y[9] = {−1,−1,−1,0,0,0,1,1,1};
/* Display Image */
void display(char *arv,int *im,int cn,int rn)
{ FILE *fp;
 int j,k;
fp = fopen(arv,"w");
fprintf(fp,"P2\n");
fprintf(fp,"%d %d\n",cn,rn);
fprintf(fp,"%d\n",255);
 for(j = 0;j < rn;j++) {
 for(k = 0;k < cn;k++) {
  fprintf(fp,"%d ",im[j*cn+k]);}
 fprintf(fp,"\n");}
fclose(fp);
}
// compute the horizontal and vertical edge map
void edge_extract(int *im2, int *res2,int *res3,int *res4,int *vect,float *ma,int *ind)
{
/*
input:
 im2 : grayscale image
output:
 res2 contains length filtered horizontal edge
 res3 contains length filtered vertical edge
 res4 contains length filtered combined edge
   */
 int i,j,k,p;
 int n,wh,wv;
 int v_count[NC]={0};
wh=0;
wv=0;
p=0;
*ma = −1000.0;
for (j= HORIZON;j<NR−OFFSET−3;j++){
 wh=0;
 for (k=OFFSET;k<NC−OFFSET;k++)
 {
 i=j*NC + k;
/* Horizontal edge detection. Edge detection occurs when the intensity gradient
between two vertically aligned pixels exceeds some threshold (EDGE_H).
*/
if (abs(im2[i]−im2[i+DIFF_H*NC]) > EDGE_H )
 {res2[i]=255;
 if(abs(im2[i]−im2[i+DIFF_H*NC]) > EDGE_H+15) {
 if(*ma < fabs((float)(im2[i]−im2[i+DIFF_H*NC]))) {
 *ma = fabs((float)(im2[i]−im2[i+DIFF_H*NC]));
 *ind = i;}
 res4[i] = 255; }
 wh++;
 }
 else if (wh == 0)
  {res4[i]=0;
  res2[i]=0;
  }
  else if (wh < BASEH+ H_CONST*vect[i])
   {res4[i] =0;
   res2[i] =0;
   for (n=0;n<wh;n++)
   res2[i−n−1]=0;
   wh=0;
   }
      else {
       wh=0; res4[i] = 0;
     }
if(k == NC−OFFSET−1)
 {
 if (wh < BASEH+ H_CONST*vect[i]) {
  res2[i]=0;
  for (n=0;n<wh;n++)
   res2[i−n−1]=0;
   wh = 0;}
   }
/* Vertical edge detection. Edge detection occurs when the intensity gradient
between two horizontally aligned pixels exceeds some threshold (EDGE_V).
*/
if ( abs(im2[i]−im2[i+1]) > EDGE_V )
 {res3[i]=255;
 if(abs(im2[i]−im2[i+1]) > EDGE_V+10) {
  if(*ma < fabs((float)(im2[i]−im2[i+1]))) {
   *ma = fabs((float)(im2[i]−im2[i+1]));
   *ind = i;}
  res4[i]=255; }
 v_count[k]=v_count[k] + 1;
 }
 else if (v_count[k] == 0)
 res3[i]=0;
 else if (v_count[k] < BASEV+ V_CONST*vect[i])
  { res3[i]=0;
  for (n=0;n<v_count[k];n++)
  res3[i−(n+1)*NC]=0;
  }
 else {
  v_count[k]=0;}
 if(j == NR−OFFSET−4)
 { if (v_count[k] < BASEV+ V_CONST*vect[i])
  {res3[i]=0;
   for (n=0;n<v_count[k];n++) res3[i−(n+1)*NC]=0;
  }
 }
} /*for(k ..*/
} /*for(j ==.*/
}
/* Main routine */
int main(void)
{
 int i,j,k,kindex[9];
 int i1,j1,k1,ind,cnt;
 float imax,imin,Dr,pa,vn,Dd,ldmax;
 float gamma_c,gamma_d,ksup,ymin,ymax;
 float almean,aldz,val1,val2,ma,max;
 int nc1,nr1,M,num,N;
 int *im,*im1;
 int *res,*res1,*ms,*vect;
 unsigned char imr[NR*NC];
 unsigned char ch[20];
 clock_t tbeg,tend,tbeg1,tend1;
 double valt,valt1;
 float mean,std,val,cs,value,*z;
 float fac,bits,pmax,xmax,zmax,C,xval;
 float mistd,mastd,mmin,mmax,lambda;
 FILE *fp,*fp1;
/* Load image file in binary format */
//fp = fopen("L0409I696W4_0958.pgm","r");
 fp = fopen("L0409I696W2_1437.pgm","r");
 fscanf(fp,"%s",ch);
 fscanf(fp,"%d%d",&nc1,&nr1);
 fscanf(fp,"%d",&num);
 fread(imr,sizeof(unsigned char),nc1*nr1,fp);
 fclose(fp);
N = nr1*nc1;
im = (int *) calloc(N,sizeof(int));
im1 = (int *) calloc(N,sizeof(int));
z = (float *) calloc(N,sizeof(float));
// edge map arrays
res = (int *) calloc(N,sizeof(int));
res1 = (int *) calloc(N,sizeof(int));
ms = (int *) calloc(N,sizeof(int));
//length filter vector
vect = (int *) calloc(N,sizeof(int));
/* compute the vect values for edge extraction */
for (j=0; j<NR;j++)
for (i=0; i<NC;i++)
 vect[i+ j*NC]= ((j−FOCAL_POINT)*(j−FOCAL_POINT) + (i−NC/2)*(i−NC/2))/500;
printf("Input the total number of bits/pixel in camera: ");
scanf("%f",&bits);
printf("\n");
/* compute the various parameters for the illumination compensation */
xmax = pow(2.0,bits);
lambda = bits−8;
if(bits == 8) lambda = 1.0;
gamma_d = 1.0/lambda;
C = 4.0−0.25*(lambda−3);
if(C < 0) C = 1.0;
gamma_c = C − gamma_d;
vn = 12.0;
ksup = 255.0/pow(xmax,8.0*gamma_d/bits);
tbeg = clock( );
// compute the log image "z" based on the user selected
// bits/pixel choice
xval = log(xmax);
 for(i = 0;i < N;i++)
 { im[i] = (int) imr[i];
 Dr = 255.0/im[i];
 Dd = pow(Dr,lambda);
 z[i] = xval−log(Dd);
}
/* Get edge map and the pixel (ind) with the highest contrast and the contrast value (ma) */
edge_extract(im,res,res1,ms,vect,&ma,&ind);
display("eh0",res,nc1,nr1);
display("ev0",res1,nc1,nr1);
display("et0",ms,nc1,nr1);
// compute local statistics and estimate compensated illumination
value = scale*scale;
 for(k = 0;k < N;k++)
 {
 i = (int) (k/nc1);
 j = k − i*nc1;
// for all pixels within the image - scale offsets from each side
// to avoid the border effect
 if((i >= scale) && (i < nr1−scale) && (j >= scale) && (j < nc1−scale))
  {
// compute local mean
 mean = 0.0;
 for(i1 = 0;i1 < (int) value;i1++)
  { kindex[i1] = (i + s_x[i1])*nc1 + (j + s_y[i1]);
  mean += z[kindex[i1]];
  }
 mean /= value;
// compute local standard deviation
 std = 0.0;
 for(i1 = 0;i1 < (int)value;i1++)
 { val = z[kindex[i1]] − mean;
  std += val*val;
 }
 std /= value;
// apply the gamma_c and gamma_d based correction to z
 if(std − vn < 0.0)
 val2 = (z[k]−mean)*gamma_c+gamma_d*z[k];
 else {pa = (std − vn)/std;
 almean = mean + pa*(z[k]−mean);
 aldz = z[k]− almean;
 val2 = aldz*gamma_c + gamma_d*z[k];}
// compute compensated image intensity by reverting
// back to normal from the log domain
 im1[k] = (int) (ksup*exp(val2));
// clip any excess intensity outside the display range
 if(im1[k] > 255) im1[k] =255;
 if(im1[k] < 0) im1[k] = 0;
  } //if(i >...
 } // for(k = ..
tend = clock( );
valt = (double) (tend−tbeg)/CLOCKS_PER_SEC;
printf("Time = %lf secs\n",valt);
display("B1=12",im1,nc1,nr1);
edge_extract(im1,res,res1,ms,vect,&ma,&ind);
display("eh1",res,nc1,nr1);
display("ev1",res1,nc1,nr1);
display("et1",ms,nc1,nr1);
free((int *)im);
free((int *)im1);
free((char *)res);
free((char *)z);
free((char *)res1);
free((char *)ms);
free((char *)vect);
}

Claims (10)

1. A method of virtual image generation, the method comprising:
identifying a plurality of pixel intensities corresponding to pixels of a digital image;
identifying a selected intensity resolution in bits per pixel;
calculating another plurality of pixel intensities based on the selected intensity resolution and the plurality of pixel intensities; and
generating a virtual image of the digital image by using the another plurality of pixel intensities to configure another plurality of pixels that comprise the virtual image.
2. The method of claim 1, further comprising:
specifying a virtual camera parameter that defines the selected intensity resolution; and
calculating the another plurality of pixel intensities based on the virtual camera parameter.
3. A computer-readable medium having computer-executable instructions for performing the method according to claim 1.
4. A computer system using computer-executed instructions for performing the method of claim 1, the system comprising: a processor for executing the computer executed instructions; a digital image acquisition device for acquiring the digital image; a display device coupled to said processor; a user input device coupled to said processor, said user input device transferring user commands to said processor;
and a storage device coupled to said processor.
5. The method of claim 1, further comprising:
identifying a user-defined intensity resolution; and
calculating the another plurality of pixel intensities based on the user-defined intensity resolution.
6. The method of claim 1, further comprising:
identifying a plurality of different intensity resolutions each in bits per pixel; and
calculating the another plurality of pixel intensities based on the plurality of different intensity resolutions.
7. The method of claim 1, further comprising:
identifying an intensity resolution in bits per pixel associated with the digital image;
identifying a different intensity resolution in bits per pixel that defines a dynamic range for the virtual image; and
calculating the another plurality of pixel intensities based on the intensity resolution as well as the different intensity resolution.
8. The method of claim 1, further comprising:
specifying a numerical value of a first parameter that defines the intensity resolution of the digital image;
specifying a numerical value of a second parameter that defines the selected intensity resolution, the numerical value of the second parameter being greater than the numerical value of the first parameter; and
calculating the another plurality of pixel intensities based on the intensity resolution of the digital image, the selected intensity resolution and the plurality of pixel intensities, the numerical values of the first and second parameters being implemented as input values to calculate each pixel intensity from among the another plurality of pixel intensities.
9. A method of virtual image generation, the method comprising:
specifying a virtual camera parameter having a value of B bits/pixel;
identifying a plurality of pixel intensities associated with a digital image, the plurality of pixel intensities being represented by Iinput(i,j) and having an intensity resolution of X bits/pixel; and
calculating another plurality of pixel intensities based on the virtual camera parameter and the plurality of pixel intensities, the another plurality of pixel intensities being represented by Ivirtual(i,j) and being calculated using an equation substantially as follows:
Ivirtual(i,j) = 2^(B - X(B-X)/(XB)) · Iinput(i,j)^(XB/(B-X)).
10. A method of virtual image generation, the method comprising:
identifying a plurality of pixel intensities corresponding to pixels of a digital image;
specifying a numerical value of a first parameter that defines an intensity resolution of the digital image in bits per pixel;
specifying a numerical value of a second parameter that defines a selected intensity resolution in bits per pixel, the numerical value of the second parameter being greater than the numerical value of the first parameter;
calculating another plurality of pixel intensities based on the intensity resolution of the digital image, the selected intensity resolution and the plurality of pixel intensities, the numerical values of the first and second parameters being implemented as input values to calculate each pixel intensity from among the another plurality of pixel intensities; and
generating a virtual image of the digital image by using the another plurality of pixel intensities to configure pixels of the virtual image.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/541,711 US7787709B2 (en) 2002-08-27 2006-09-29 Method and apparatus for illumination compensation of digital images

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US40646202P 2002-08-27 2002-08-27
US10/356,155 US7164808B2 (en) 2002-08-27 2003-01-30 Method and apparatus for illumination compensation of digital images
US11/541,711 US7787709B2 (en) 2002-08-27 2006-09-29 Method and apparatus for illumination compensation of digital images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/356,155 Division US7164808B2 (en) 2002-08-27 2003-01-30 Method and apparatus for illumination compensation of digital images

Publications (2)

Publication Number Publication Date
US20070025633A1 US20070025633A1 (en) 2007-02-01
US7787709B2 true US7787709B2 (en) 2010-08-31

Family

ID=31981134

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/356,155 Expired - Fee Related US7164808B2 (en) 2002-08-27 2003-01-30 Method and apparatus for illumination compensation of digital images
US11/541,711 Expired - Fee Related US7787709B2 (en) 2002-08-27 2006-09-29 Method and apparatus for illumination compensation of digital images

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/356,155 Expired - Fee Related US7164808B2 (en) 2002-08-27 2003-01-30 Method and apparatus for illumination compensation of digital images

Country Status (1)

Country Link
US (2) US7164808B2 (en)


Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7106352B2 (en) * 2003-03-03 2006-09-12 Sun Microsystems, Inc. Automatic gain control, brightness compression, and super-intensity samples
US7103227B2 (en) * 2003-03-19 2006-09-05 Mitsubishi Electric Research Laboratories, Inc. Enhancing low quality images of naturally illuminated scenes
JP4271978B2 (en) * 2003-04-18 2009-06-03 株式会社日立製作所 Video display device
US7266229B2 (en) * 2003-07-24 2007-09-04 Carestream Health, Inc. Method for rendering digital radiographic images for display based on independent control of fundamental image quality parameters
US7424169B2 (en) * 2003-08-15 2008-09-09 Xerox Corporation Active compensation of streaks using spatial filtering and feedback control
US7302082B2 (en) * 2004-02-13 2007-11-27 Chunghwa Telecom Co., Ltd. Method for detecting motion pixels in image
DE102004042792B3 (en) * 2004-09-03 2006-06-08 Siemens Ag Method for improving the presentation of CT images
US8050512B2 (en) * 2004-11-16 2011-11-01 Sharp Laboratories Of America, Inc. High dynamic range images from low dynamic range images
DE102004061507B4 (en) * 2004-12-21 2007-04-12 Siemens Ag Method for correcting inhomogeneities in an image and imaging device therefor
KR100640063B1 (en) * 2005-02-18 2006-10-31 삼성전자주식회사 Method for enhancing image considering to exterior illuminance and apparatus thereof
US7492962B2 (en) * 2005-08-25 2009-02-17 Delphi Technologies, Inc. System or method for enhancing an image
DE102006011066A1 (en) * 2006-03-08 2007-09-13 Eads Deutschland Gmbh Method and device for dynamic compression in pictures or signal series
US7835587B2 (en) * 2006-12-18 2010-11-16 Intel Corporation Method and apparatus for local standard deviation based histogram equalization for adaptive contrast enhancement
US8355595B2 (en) * 2007-05-15 2013-01-15 Xerox Corporation Contrast enhancement methods and apparatuses
US20090160945A1 (en) * 2007-12-21 2009-06-25 Dell Products L.P. Systems and Methods for Enhancing Image Quality of a Web Camera Image
US8331150B2 (en) * 2008-01-03 2012-12-11 Aplus Flash Technology, Inc. Integrated SRAM and FLOTOX EEPROM memory device
US8479015B2 (en) * 2008-10-17 2013-07-02 Oracle International Corporation Virtual image management
US8423328B2 (en) * 2009-09-30 2013-04-16 International Business Machines Corporation Method of distributing a random variable using statistically correct spatial interpolation continuously with spatially inhomogeneous statistical correlation versus distance, standard deviation, and mean
US9172960B1 (en) * 2010-09-23 2015-10-27 Qualcomm Technologies, Inc. Quantization based on statistics and threshold of luminance and chrominance
CN102456222B (en) * 2010-10-29 2013-12-04 深圳迈瑞生物医疗电子股份有限公司 Method and device for organized equalization in image
US9445011B2 (en) * 2012-10-22 2016-09-13 GM Global Technology Operations LLC Dynamic rearview mirror adaptive dimming overlay through scene brightness estimation
FR3015090B1 (en) * 2013-12-18 2016-01-15 Thales Sa METHOD OF PROCESSING IMAGES, PARTICULARLY FROM NIGHT VISUALIZATION SYSTEMS AND SYSTEM THEREFOR
JP6506422B2 (en) * 2016-02-05 2019-04-24 株式会社日立製作所 Medical image diagnosis support apparatus and magnetic resonance imaging apparatus
CN111045054B (en) * 2019-04-19 2021-09-14 中航安贞(浙江)信息科技有限公司 Navigation data based serial number identification platform
GB2588674B (en) * 2019-11-01 2022-02-02 Apical Ltd Image processing
MY197448A (en) * 2019-11-29 2023-06-19 Mimos Berhad A method for detecting a moving vehicle
CN111445394B (en) * 2019-12-10 2023-06-20 西南技术物理研究所 Visible light image self-adaptive enhancement method for air-to-ground observation
CN113658067B (en) * 2021-08-11 2022-08-12 沭阳天勤工具有限公司 Water body image enhancement method and system in air tightness detection based on artificial intelligence

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4924522A (en) * 1987-08-26 1990-05-08 Ncr Corporation Method and apparatus for displaying a high resolution image on a low resolution CRT
US5224177A (en) * 1991-10-31 1993-06-29 The University Of Chicago High quality film image correction and duplication method and system
US6078686A (en) 1996-09-30 2000-06-20 Samsung Electronics Co., Ltd. Image quality enhancement circuit and method therefor
US20030039408A1 (en) * 2001-07-02 2003-02-27 Smith Joshua Edward Method and apparatus for selective encoding of a textured, three dimensional model to reduce model size
US20030072496A1 (en) 2001-06-25 2003-04-17 Science And Technology Corporation Method of improving a digital image as a function of its dynamic range
US20030117654A1 (en) * 2001-12-21 2003-06-26 Wredenhagen G. Finn System and method for dynamically enhanced colour space
US6677959B1 (en) * 1999-04-13 2004-01-13 Athentech Technologies Inc. Virtual true color light amplification
US6782137B1 (en) 1999-11-24 2004-08-24 General Electric Company Digital image display improvement system and method
US6788340B1 (en) * 1999-03-15 2004-09-07 Texas Instruments Incorporated Digital imaging control with selective intensity resolution enhancement
US6888552B2 (en) * 2001-06-08 2005-05-03 University Of Southern California High dynamic range image editing
US7149262B1 (en) * 2000-07-06 2006-12-12 The Trustees Of Columbia University In The City Of New York Method and apparatus for enhancing data resolution


Non-Patent Citations (16)

* Cited by examiner, † Cited by third party
Title
Cohen, M., et al., "Neural dynamics of brightness perception: Features, boundaries, diffusion, and resonance," Perception and Psychophysics, vol. 36, No. 5, pp. 428-456 (1984).
Fundamentals of Digital Image Processing, Prentice Hall, 1969, pp. 241-244.
G. Deng, et al. "The Study of Logarithmic Image Processing Model and Its Application to Image Enhancement," IEEE Transactions on Image Processing, vol. 4, No. 4, Apr. 1995, pp. 506-512.
Grossberg, S., et al., "A Neural Network Architecture for Preattentive Vision," IEEE Transactions on Biomedical Engineering, vol. 36, No. 1, pp. 65-84 (Jan. 1989).
Grossberg, S., et al., "Neural Dynamics of 1-D and 2-D Brightness Perception: A Unified Model of Classical and Recent Phenomena," Perception and Psychophysics, vol. 43, 1988, pp. 241-277.
Hummel, R.A., "Image Enhancement by Histogram Transformation," Computer Vision Graphics and Image Processing, vol. 6., 1977, pp. 184-195.
Mohiy M. Hadhoud, "Image Contrast Enhancement Using Homomorphic Processing and Adaptive Filters," 16th National Radio Science Conference, NRSC '99, Ain Shams University, Feb. 23-25, 1999, Cairo, Egypt, pp. C5-1 to C5-7.
Mukherjee, D., et al., "Adaptive Neighborhood Extended Contrast Enhancement and its Modifications," Graphical Models and Image Processing, vol. 57, No. 3, 1995, pp. 254-265.
Narendra, P.M., et al., "Real-Time Adaptive Contrast Enhancement," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 3, No. 6, pp. 655-661.
Oppenheim, A.V., "Non-Linear Filtering of Multiplied and Convolved Signals," Proc. of IEEE, vol. 56, No. 8, 1968, pp. 1264-1291.
Paranjape, R.B., et al., "Adaptive Neighborhood Histogram Equalization for Image Enhancement," Computer Vision Graphics and Image Processing, pp. 259-267.
Paul E. Debevec and Jitendra Malik. Recovering High Dynamic Range Radiance Maps from Photographs. In SIGGRAPH 97, Aug. 1997. *
Pizer, S.M., et al., "Adaptive Histogram Equalization and its Variations," Computer Vision Graphics and Image Processing, vol. 39, 1987, pp. 355-368.
Schutte, K., "Multi-Scale Adaptive Gain Control of IR Images," Proc. of SPIE, vol. 3661, 1997, pp. 906-914.
Stark, J.A., "Adaptive Image Contrast Enhancement Using Generalizations of Histograms," IEEE Transactions on Image Processing, vol. 9, No. 5, May 2000.
Toet, A., "Adaptive Multiscale Contrast Enhancement Through Non-Linear Pyramid Recombination," Pattern Recognition Letters, vol. 11, 1990, pp. 906-914.

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090290815A1 (en) * 2005-01-26 2009-11-26 Koninklijke Philips Electronics, N.V. Sparkle processing
US8068691B2 (en) * 2005-01-26 2011-11-29 Koninklijke Philips Electronics N.V. Sparkle processing
CN105046674A (en) * 2015-07-14 2015-11-11 中国科学院电子学研究所 Nonuniformity correction method of multi-pixel parallel scanning infrared CCD images

Also Published As

Publication number Publication date
US20040042676A1 (en) 2004-03-04
US7164808B2 (en) 2007-01-16
US20070025633A1 (en) 2007-02-01

Similar Documents

Publication Publication Date Title
US7787709B2 (en) Method and apparatus for illumination compensation of digital images
Fattal et al. Gradient domain high dynamic range compression
US20170365046A1 (en) Algorithm and device for image processing
CN107527332B (en) Low-illumination image color retention enhancement method based on improved Retinex
US7590303B2 (en) Image enhancement method using local illumination correction
EP1209621B1 (en) A method for enhancing a digital image based upon pixel color
KR100769220B1 (en) Brightness level converting apparatus, brightness level converting method, solid-state image pickup apparatus, and recording medium
US6462768B1 (en) Image enhancement
US7065257B2 (en) Image processing method and apparatus
EP1341124B1 (en) Method for sharpening a digital image with signal to noise estimation
EP1111907A2 (en) A method for enhancing a digital image with noise-dependant control of texture
EP1111906A2 (en) A method for enhancing the edge contrast of a digital image independently from the texture
CN112734650A (en) Virtual multi-exposure fusion based uneven illumination image enhancement method
US20220101503A1 (en) Method and apparatus for combining low-dynamic range images to a single image
CN116579953A (en) Self-supervision water surface image enhancement method and related equipment
CN114429426B (en) Low-illumination image quality improvement method based on Retinex model
Wang et al. Nighttime image dehazing using color cast removal and dual path multi-scale fusion strategy
CN114998173A (en) High dynamic range imaging method for space environment based on local area brightness adjustment
Wang et al. Video enhancement using adaptive spatio-temporal connective filter and piecewise mapping
Pardhi et al. Contrast Enhancement Using Adaptive Threshold Based Dynamic Range Adjustment In Luv Colour Space
Lee et al. Ghost and noise removal in exposure fusion for high dynamic range imaging
Zhu et al. Underwater image color correction and adaptive contrast algorithm improvement based on fusion algorithm
EP4209990A2 (en) Blended gray image enhancement
Li et al. Nighttime Haze Image Restoration using Rolling Guidance Filter
Manik Kumbhar et al. IMAGE DEHAZING: A STUDY

Legal Events

Date Code Title Description
AS Assignment

Owner name: HRL LABORATORIES, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SRINIVASA, NARAYAN;REEL/FRAME:018382/0725

Effective date: 20030115

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220831