US20070146506A1 - Single-image vignetting correction - Google Patents
- Publication number
- US20070146506A1 (application US 11/384,063)
- Authority
- US
- United States
- Prior art keywords
- vignetting
- image
- segment
- pixel
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/90
- G06T1/00—General purpose image data processing
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise, the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- G06T7/11—Region-based segmentation
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
- G06T2207/10024—Color image
- G06T2207/20012—Locally adaptive
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
Definitions
- Vignetting refers to the phenomenon of brightness attenuation away from an image's center, and is an artifact that is prevalent in photography. Although perhaps not objectionable to the average viewer at low levels, it can significantly impair computer vision algorithms that rely on precise intensity data to analyze a scene. Applications in which vignetting distortions can be particularly damaging include photometric methods such as shape from shading, appearance-based techniques such as object recognition, and image mosaicing.
- Some vignetting effects arise from the optical properties of camera lenses, the most prominent of which is off-axis illumination fall-off, also known as the cos⁴ law. These contributions to vignetting result from foreshortening of the lens when viewed at increasing angles from the optical axis. Other sources of vignetting are geometric in nature. For example, light arriving at oblique angles to the optical axis may be partially obstructed by the field stop or lens rim.
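The cos⁴ fall-off can be sketched numerically. In this idealized model (not the patent's full formulation), a pixel at radial distance r from the image center, with focal length f in the same units, is attenuated by cos⁴ of its off-axis angle:

```python
import math

def cos4_falloff(r: float, f: float) -> float:
    """Idealized off-axis illumination fall-off (cos^4 law) for a pixel
    at radial distance r from the image center, with focal length f in
    the same units (e.g. pixels)."""
    cos_theta = f / math.sqrt(r * r + f * f)  # cosine of the off-axis angle
    return cos_theta ** 4

# Brightness decreases monotonically away from the optical axis:
samples = [cos4_falloff(r, 600.0) for r in (0.0, 200.0, 400.0)]
```

The chosen focal length of 600 pixels is illustrative only; real lenses also deviate from this ideal law, which is why the models discussed later include additional geometric terms.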
- The most straightforward approach involves capturing an image completely spanned by a uniform scene region, such that brightness variations can be attributed solely to vignetting.
- Ratios of intensity with respect to the pixel on the optical axis then describe the vignetting function.
- Suitable imaging conditions for this approach can be challenging to produce due to uneven illumination and camera tilt, and the vignetting measurements are valid only for images captured by the camera under the same camera settings.
- A calibration image can be recorded only if the camera is at hand; consequently, this approach cannot be used to correct images captured by unknown cameras, such as images downloaded from the web.
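This flat-field calibration approach can be sketched minimally, assuming a hypothetical image represented as a dict of pixel intensities (this is the prior-art calibration method, not the patent's single-image technique):

```python
def vignetting_from_flat_field(image, center):
    """Given an image of a perfectly uniform scene, the ratio of each
    pixel's intensity to the intensity at the optical axis directly
    gives the vignetting function at that pixel."""
    i0 = image[center]  # intensity at the optical axis
    return {pos: val / i0 for pos, val in image.items()}

# Hypothetical 1-D "image" whose only variation is vignetting:
flat_shot = {0: 100.0, 1: 95.0, 2: 82.0, 3: 64.0}
v = vignetting_from_flat_field(flat_shot, center=0)
# v maps each position to its attenuation ratio
```

Dividing any later image (taken with the same camera settings) by these ratios removes the vignetting, which is exactly why the method fails for images from unknown cameras.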
- A vignetting function can alternatively be computed from image sequences with overlapping views of an arbitrary static scene.
- Point correspondences are first determined in the overlapping image regions. Since a given scene point has a different position in each image, its brightness may be differently attenuated by vignetting. From the aggregate attenuation information from all correspondences, the vignetting function can be accurately recovered without assumptions about the scene.
- The present invention is directed toward a system and process that corrects for vignetting in an image using just that image.
- The technique extracts vignetting information from both textured and untextured regions.
- Advantage is taken of physical vignetting characteristics to diminish the influence of textures and other sources of intensity variation.
- Vignetting information from disparate image regions is also employed to ensure consistency across the regions. As a result, large image regions appropriate for vignetting function estimation are identified.
- The present system and process iteratively re-segments the image with respect to progressively refined estimates of the vignetting function. Additionally, spatial variations in segmentation scale are used in a manner that enhances collection of reliable vignetting data.
- The present vignetting correction system and process involves first segmenting an input image using a spatially varying segmentation scale that produces reliable segments, i.e., segments exhibiting vignetting that is consistent with prescribed physical vignetting characteristics and that conforms to vignetting observed in other segments. A vignetting function is then estimated for the input image that defines a corrected intensity for each pixel using the reliable segments. This last-computed vignetting function estimate is applied to each pixel of the input image to produce a current refined image. The segmenting and vignetting function estimating are repeated using the current refined image in lieu of the input image, and the resulting estimate is applied to the input image to produce a new current refined image. This continues until it is determined that the vignetting function estimate has converged. At that point the last-produced current refined image is designated as the final vignetting corrected image.
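The iteration just described can be sketched as a generic loop. The three callables below stand in for the segmentation, estimation, and correction stages, and the demo uses a toy 1-D "image" with an assumed linear fall-off, not the patent's actual vignetting model:

```python
def correct_vignetting(image, segment, estimate, undo, tol=1e-6, max_iter=10):
    """Sketch of the iterative loop: `segment` re-segments the current
    refined image, `estimate` fits vignetting parameters to the ORIGINAL
    input within those segments, and `undo` applies the estimate to the
    original input. Iteration stops once the estimate converges."""
    refined, prev = image, None
    for _ in range(max_iter):
        params = estimate(image, segment(refined))
        refined = undo(image, params)   # always applied to the input image
        if prev is not None and abs(params - prev) < tol:
            break
        prev = params
    return refined

# Toy demo: a uniform 1-D scene attenuated by an assumed linear fall-off.
truth = 0.4
observed = [10.0 * (1.0 - truth * i / 10) for i in range(11)]

seg = lambda img: [list(range(len(img)))]            # one reliable segment
est = lambda img, segs: (img[0] - img[-1]) / img[0]  # slope of the fall-off (segs unused here)
undo = lambda img, a: [v / (1.0 - a * i / 10) for i, v in enumerate(img)]

corrected = correct_vignetting(observed, seg, est, undo)
# corrected is (numerically) uniform again
```

One design choice worth noting: segmentation runs on the refined image, while the estimate is always applied to the original input, matching the text's description of the iteration.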
- The segmenting of the input image or the current refined image is accomplished in one embodiment of the present system and process by first segmenting the image at a prescribed initial segmentation scale. Then, for each segment, a reliability factor is computed that represents the degree to which the segment under consideration exhibits consistency with physical vignetting characteristics and conforms to vignetting observed in other segments. In addition, it is determined whether the reliability factor of the segment under consideration exceeds a prescribed reliability threshold indicating that the segment is acceptable for vignetting estimation. Whenever the reliability factor does not exceed the reliability threshold, the segment under consideration is recursively segmented at increasingly finer segmentation scales until each of the resulting smaller segments has a reliability factor that exceeds the reliability threshold or becomes smaller than a prescribed minimum segment size. Those segments determined to have a reliability factor which exceeds the reliability threshold and which are at least as large as the minimum segment size are designated as reliable segments.
- One way of recursively segmenting a segment under consideration involves dividing the segment using a finer segmentation scale than that last employed on the segment, to produce a plurality of smaller segments. For each smaller segment produced, its reliability factor is computed and it is determined whether the factor exceeds the reliability threshold. Whenever the reliability factor of the smaller segment under consideration does not exceed the reliability threshold, it is determined whether the size of the smaller segment is less than a prescribed minimum segment size. If the smaller segment is not below the minimum segment size, the foregoing is repeated for that segment.
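A sketch of this recursive refinement, with caller-supplied `split` and `reliability` stand-ins for the patent's segmentation and reliability modules (the 225-pixel minimum is the value the text reports for tested embodiments):

```python
MIN_SEGMENT_SIZE = 225  # pixels; minimum-size threshold from tested embodiments

def reliable_segments(pixels, split, reliability, threshold):
    """Recursively re-segment until segments are reliable or too small.

    `split(pixels)` divides a segment at a finer scale; `reliability(pixels)`
    scores consistency with physical vignetting characteristics. Both are
    hypothetical stand-ins. Returns only segments whose score exceeds
    `threshold` and whose size is at least MIN_SEGMENT_SIZE."""
    if len(pixels) < MIN_SEGMENT_SIZE:
        return []                       # negligible segments would bias the optimization
    if reliability(pixels) > threshold:
        return [pixels]
    out = []
    for child in split(pixels):         # finer segmentation scale
        out.extend(reliable_segments(child, split, reliability, threshold))
    return out

# Toy demo: segments become "reliable" once they are 500 pixels or fewer.
halve = lambda px: [px[:len(px) // 2], px[len(px) // 2:]]
score = lambda px: 1.0 if len(px) <= 500 else 0.0
segs = reliable_segments(list(range(1000)), halve, score, threshold=0.5)
# segs holds two reliable segments of 500 pixels each
```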
- FIG. 1 is a diagram depicting a general purpose computing device constituting an exemplary system for implementing the present invention.
- FIG. 2 is a flow chart diagramming a generalized process to correct for vignetting in an image using just the image itself according to the present invention.
- FIG. 3 is a diagram illustrating the geometry associated with the Kang-Weiss vignetting model tilt factor.
- FIG. 4 is a diagram depicting the computer program modules making up one embodiment of a vignetting correction system according to the present invention.
- FIGS. 5A-B are a continuing flow chart diagramming a process to correct for vignetting in an image which represents one way of implementing the vignetting correction system of FIG. 4.
- FIG. 1 illustrates an example of a suitable computing system environment 100 .
- The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
- The invention is operational with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
- Program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- Program modules may be located in both local and remote computer storage media including memory storage devices.
- An exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110.
- Components of computer 110 may include, but are not limited to, a processing unit 120 , a system memory 130 , and a system bus 121 that couples various system components including the system memory to the processing unit 120 .
- The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- Such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, also known as Mezzanine bus.
- Computer 110 typically includes a variety of computer readable media.
- Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media.
- Computer readable media may comprise computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110 .
- Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- Communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
- The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132.
- A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131.
- RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120 .
- FIG. 1 illustrates operating system 134 , application programs 135 , other program modules 136 , and program data 137 .
- The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
- FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152 , and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media.
- removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
- The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and the magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
- hard disk drive 141 is illustrated as storing operating system 144 , application programs 145 , other program modules 146 , and program data 147 . Note that these components can either be the same as or different from operating system 134 , application programs 135 , other program modules 136 , and program data 137 . Operating system 144 , application programs 145 , other program modules 146 , and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
- A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161, commonly referred to as a mouse, trackball or touch pad.
- Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
- These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus 121 , but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
- A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190.
- computers may also include other peripheral output devices such as speakers 197 and printer 196 , which may be connected through an output peripheral interface 195 .
- A camera 192 (such as a digital/electronic still or video camera, or film/photographic scanner) capable of capturing a sequence of images 193 can also be included as an input device to the personal computer 110. Further, while just one camera is depicted, multiple cameras could be included as input devices to the personal computer 110.
- The images 193 from the one or more cameras are input into the computer 110 via an appropriate camera interface 194.
- This interface 194 is connected to the system bus 121 , thereby allowing the images to be routed to and stored in the RAM 132 , or one of the other data storage devices associated with the computer 110 .
- Image data can be input into the computer 110 from any of the aforementioned computer-readable media as well, without requiring the use of the camera 192.
- The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180.
- The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in FIG. 1.
- The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
- Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
- When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170.
- When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet.
- The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism.
- Program modules depicted relative to the computer 110 may be stored in the remote memory storage device.
- FIG. 1 illustrates remote application programs 185 as residing on memory device 181 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- The present computer-based system and process to correct for vignetting in an image using just the image itself is generally accomplished via the following process actions, as shown in the high-level flow diagram of FIG. 2.
- An input image is segmented using a spatially varying segmentation scale that produces reliable segments (process action 200).
- A segment is considered reliable if it exhibits vignetting that is consistent with prescribed physical vignetting characteristics and if it conforms to vignetting observed in other segments.
- A vignetting function is then estimated for the input image that defines a corrected intensity for each pixel using the reliable segments (process action 202).
- The last-computed vignetting function estimate is applied to each pixel of the input image to produce a current refined image (process action 204).
- The segmenting and vignetting function estimating actions are repeated using the current refined image in lieu of the input image, and the resulting estimate is applied to the input image to produce a new current refined image (process action 206). This continues until it is determined that the vignetting function estimate has converged (process action 208). At that point the last-produced current refined image is designated as the final vignetting corrected image (process action 210).
- Vignetting correction typically uses a parametric vignetting model to simplify estimation and minimize the influence of image noise.
- Existing parametric models include empirical models such as polynomial functions and hyperbolic cosine functions.
- Existing models based on physical considerations include those which account for off-axis illumination and light path obstruction, and the Kang and Weiss model which additionally incorporates scene-based tilt effects.
- Tilt describes intensity variations within a scene region that are caused by differences in distance from the camera, i.e., closer points appear brighter due to the inverse square law of illumination.
- The intensity attenuation effects caused by tilt must be accounted for in single-image vignetting estimation.
- An important property of physical models is that their highly structured and constrained form facilitates estimation in cases where data is sparse and/or noisy.
- In the present system and process, an extension of the Kang-Weiss model, originally designed for a single planar surface of constant albedo, is used for multiple surfaces of possibly different color.
- In addition, the linear model of geometric vignetting is generalized to a polynomial form.
- The model φ in Eq. (1) can be decomposed into the global vignetting function ϑ of the camera and the local tilt effects T in the scene. Note that ϑ is rotationally symmetric; thus, it can be specified as a 1D function of the radial distance r_i of pixel i from the image center.
- T_i = cos τ_{s_i} · (1 + (tan τ_{s_i} / f)(u_i sin χ_{s_i} − v_i cos χ_{s_i}))³, (3) where s_i indexes the segment containing pixel i.
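Eq. (3) can be evaluated directly per pixel. Here (u, v) are assumed to be pixel coordinates relative to the image center, f the focal length, and tau, chi the tilt angles of the pixel's segment:

```python
import math

def tilt_factor(u, v, f, tau, chi):
    """Tilt factor T_i of Eq. (3) for a pixel at (u, v) relative to the
    image center, given segment tilt angles tau and chi and focal length f."""
    return math.cos(tau) * (
        1.0 + (math.tan(tau) / f) * (u * math.sin(chi) - v * math.cos(chi))
    ) ** 3

# With no tilt (tau = 0) the factor reduces to exactly 1, i.e. no attenuation:
no_tilt = tilt_factor(50.0, -30.0, 600.0, 0.0, 0.7)   # -> 1.0
```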
- This generalized representation provides a closer fit to the geometric vignetting effects observed in practice.
- Representing only the geometric component by a polynomial allows the overall model to explicitly account for local tilt effects and global off-axis illumination.
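A sketch of this decomposition: the global function combines the off-axis fall-off with the polynomial geometric factor, while tilt remains a separate local term. The 1/(1 + (r/f)²)² form of the off-axis term (equivalent to cos⁴ of the off-axis angle) and the coefficient naming are assumptions consistent with the surrounding description:

```python
def geometric_factor(r, alphas):
    """Polynomial generalization of the geometric component:
    G(r) = 1 - alpha_1*r - alpha_2*r^2 - ... - alpha_p*r^p."""
    return 1.0 - sum(a * r ** (k + 1) for k, a in enumerate(alphas))

def global_vignetting(r, f, alphas):
    """Global vignetting function at radial distance r: off-axis
    illumination fall-off times the geometric factor. Tilt is excluded
    here because it is a local scene effect, not a camera property."""
    off_axis = 1.0 / (1.0 + (r / f) ** 2) ** 2   # cos^4 of the off-axis angle
    return off_axis * geometric_factor(r, alphas)
```

Because the geometric polynomial is separate, fitting it never has to absorb tilt-induced shading, which is the point made in the text.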
- For color images, z represents an RGB vector.
- For simplicity of presentation, z is expressed herein as a single color channel, and overall energies are averaged from separate color components.
- The parameters to be estimated are the focal length f in the off-axis component, the coefficients α_1, …, α_p of the geometric factor, the tilt angles τ_s and χ_s, the scene radiance of the center pixel I_0, and the radiance ratio λ_s of each segment.
- Minimization of this energy function can intuitively be viewed as simultaneously solving for the local segment parameters λ_s, τ_s and χ_s that give a smooth alignment of vignetting attenuations between segments, while optimizing the underlying global vignetting parameters f, α_1, …, α_p.
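The structure of that minimization can be illustrated with a deliberately simplified version: one global parameter (a single geometric coefficient alpha), the per-segment radiance solved in closed form as a mean, and a brute-force search standing in for the actual optimization over the full parameter set:

```python
def fit_alpha(segments, candidates):
    """Pick the global coefficient alpha that makes the vignetting-corrected
    intensities most uniform within every segment simultaneously.

    segments: list of [(r, z), ...] observations, one list per segment,
    with radii r in [0, 1). Simplified stand-in for the full energy over
    f, alpha_1..alpha_p, the tilt angles, and the radiance ratios."""
    def energy(alpha):
        e = 0.0
        for pts in segments:
            corrected = [z / (1.0 - alpha * r) for r, z in pts]
            lam = sum(corrected) / len(corrected)   # closed-form per-segment radiance
            e += sum((c - lam) ** 2 for c in corrected)
        return e
    return min(candidates, key=energy)

# Two segments of different radiance, both attenuated with alpha = 0.5:
radii = [0.0, 0.2, 0.4, 0.6, 0.8]
seg_a = [(r, 8.0 * (1.0 - 0.5 * r)) for r in radii]
seg_b = [(r, 12.0 * (1.0 - 0.5 * r)) for r in radii]
best = fit_alpha([seg_a, seg_b], [i / 10 for i in range(10)])
```

The key property this preserves is that segments of different radiance still constrain the same global parameter, since each segment's radiance is factored out locally.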
- The vignetting-corrected image is then given by z_i / ϑ_{r_i}. Note that the local tilt factor is retained in the corrected image so as not to produce an unnatural-looking result.
- One embodiment of the present computer-based system to correct for vignetting in an image is based on the program modules depicted in FIG. 4 .
- The input image is first segmented at a coarse scale using a segmentation module 400, and for each segment a reliability measure of the segment data for vignetting estimation is computed via the reliability module 402.
- The more consistent a segment's data is with physical vignetting characteristics and with the vignetting observed in other segments, the higher the reliability factor assigned by module 402.
- Low reliability factors may indicate segments with multiple distinct surfaces, so these segments are recursively segmented by the segmentation module 400 at incrementally finer segmentation scales until the reliability factors of the smaller segments exceed a threshold or the segments become negligible in size.
- In this way, the segmentation scale varies spatially in a manner that facilitates collection of vignetting data.
- Segments with high reliability factors are used by the vignetting estimation module 404 to estimate the vignetting function parameters. Since the preceding segmentations may be corrupted by the presence of vignetting, the subsequent iteration of the procedure re-computes segmentation boundaries from a refined image corrected by the vignetting correction module 406 using the current vignetting estimate. Better segmentation results lead to improved vignetting estimates, and these iterations are repeated until the estimates converge. At convergence, the last computed vignetting function is applied by the vignetting correction module 406 to the input image to produce a final vignetting corrected image.
- In process action 500, the image that is to be corrected is input.
- The input image (or a current refined image if one exists) is then segmented at a segmentation scale prescribed for the current segmentation level (process action 502), and a previously unselected one of the resulting image segments associated with the current segmentation level is selected (process action 504).
- Initially, the current segmentation level is the first level. The significance of the segmentation level will become apparent shortly.
- The segmentation scale becomes finer with each successive segmentation level.
- A reliability factor is computed for the selected segment in process action 508. As indicated previously, this reliability factor represents the degree to which the segment exhibits consistency with physical vignetting characteristics and conforms to vignetting observed in other segments. It is next determined if the reliability factor exceeds the prescribed reliability threshold (process action 510). As described above, this threshold is indicative of whether a segment is acceptable for vignetting estimation purposes.
- If the reliability factor does not exceed the threshold, the segmentation level is incremented by one (process action 512) and the selected segment is segmented using the finer segmentation scale assigned to the incremented level (process action 513).
- Process actions 502 through 518 are then performed as appropriate. In this way the selected segment is divided using a finer segmentation scale assigned to the new segmentation level, resulting in two or more smaller segments.
- If it is determined in process action 506 that the selected segment is smaller than the prescribed minimum segment size, it is next determined whether there are any previously unselected segments remaining in the current segmentation level (process action 514). If so, process actions 504 through 518 are repeated as appropriate to consider the other segments in the current level. However, if there are no previously unselected segments remaining in the current segmentation level, it is determined whether there is a segmentation level preceding the current level (process action 516). If there is, the segmentation level is decremented by one (process action 518), and process actions 502 through 518 are repeated as appropriate starting with process action 514, as shown in FIG. 5A.
- If, however, it is determined in process action 516 that there are no segmentation levels preceding the current level, then the process continues with the computation of a current vignetting function estimate for the image using just those segments determined to have a reliability factor which exceeds the reliability threshold and which are at least as large as the minimum segment size (process action 520). As described previously, the vignetting function defines a corrected intensity for each pixel of the input image. The current vignetting function estimate is then applied to each pixel of the input image to produce a current refined image (process action 522). Next, it is determined if more than one vignetting function estimate has been computed (process action 524).
- If so, in process action 526 it is determined whether the current vignetting function estimate has converged. As described earlier, the vignetting function estimate has converged if it has not changed more than a prescribed amount in the last iteration. If it is determined that the vignetting function estimate has converged, the current refined image is designated as the final vignetting corrected image (process action 528) and the process ends. However, if in process action 524 it is determined that more than one vignetting function estimate has not been computed, or it is determined in process action 526 that the current vignetting function estimate has not converged, then process actions 502 through 528 are repeated as appropriate for each successive iteration of the correction process until the vignetting function estimate converges.
- In this way, segmentation scales are spatially varied over the image, and the adverse effects of vignetting on segmentation are progressively reduced as the vignetting function estimate is refined.
- Sets of pixels with the same scene radiance provide more valuable information if they span a broader range of vignetting attenuations.
- Larger segments are therefore preferable. While relatively large segments can be obtained with a coarse segmentation scale, many of these segments may be unreliable for vignetting estimation since they may contain multiple surfaces or include areas with non-uniform illumination.
- When a segment is found to be unreliable, the present system and process recursively segments it into smaller segments that potentially consist of better data for vignetting estimation. This recursive segmentation proceeds until segments have a high reliability weight or become of negligible size according to a threshold (such as 225 pixels as used in tested embodiments). Segments of very small size generally contain insignificant changes in vignetting attenuation, and the inclusion of such segments would bias the optimization process.
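- The recursive policy just described can be sketched as follows. The helper names segment_fn and reliability_fn and the 0.5 reliability threshold are illustrative assumptions; only the 225-pixel minimum size comes from the tested embodiments mentioned above.

```python
# Hypothetical sketch of the recursive segmentation policy. A segment is
# kept when its reliability exceeds a threshold, dropped when it shrinks
# below the minimum size, and re-segmented at a finer scale otherwise.

MIN_SEGMENT_SIZE = 225       # pixels, per the tested embodiments
RELIABILITY_THRESHOLD = 0.5  # illustrative value, not from the patent

def collect_reliable(segment, scale, segment_fn, reliability_fn):
    """Return the reliable sub-segments of `segment` (a list of pixels)."""
    if len(segment) < MIN_SEGMENT_SIZE:
        return []  # too small: would bias the optimization process
    if reliability_fn(segment) > RELIABILITY_THRESHOLD:
        return [segment]
    reliable = []
    for sub in segment_fn(segment, scale + 1):  # finer segmentation scale
        reliable.extend(collect_reliable(sub, scale + 1,
                                         segment_fn, reliability_fn))
    return reliable
```

A toy segment_fn that halves a segment and a reliability_fn based only on size are enough to exercise the recursion.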
- In one embodiment, the segmentation scale is controlled by a parameter on variation within each feature class, where a feature may simply be pixel intensity or color.
- A finer partitioning of a low-weight segment can then be obtained by segmenting the segment with a decreased parameter value.
- Alternatively, the degree of segmentation can be set according to a given number of feature classes in an image. There exist various ways to set the number of classes, including user specification, data clustering, and minimum description length criteria. For recursive segmentation, since each segment belongs to a certain class, a finer partitioning of the segment can be obtained by segmenting it with the number of feature classes specified as two.
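- As a sketch of the two-class re-partitioning idea, the following splits a segment's scalar feature values into two clusters with a tiny 1-D k-means. The patent does not mandate a particular clustering algorithm, so this is an illustrative stand-in.

```python
# Illustrative two-class split of a segment's scalar feature values
# (e.g., intensities) via a minimal 1-D k-means; stands in for
# re-segmenting with the number of feature classes specified as two.

def two_class_split(values, iters=20):
    """Partition values into two clusters; returns (low_class, high_class)."""
    c0, c1 = min(values), max(values)      # initialize centers at extremes
    low, high = [], []
    for _ in range(iters):
        low = [v for v in values if abs(v - c0) <= abs(v - c1)]
        high = [v for v in values if abs(v - c0) > abs(v - c1)]
        if low:
            c0 = sum(low) / len(low)       # update cluster centers
        if high:
            c1 = sum(high) / len(high)
    return low, high
```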
- Thus, the segmentation scale varies over an image in a manner designed to maximize the quality of vignetting data.
- In tested embodiments, graph cut segmentation was employed with per-pixel feature vectors composed of six color/texture attributes.
- The color components are the RGB values, and the local texture descriptors are the polarity, anisotropy, and normalized texture contrast.
- Two pixels of the same scene radiance may exhibit significantly different image intensities due to variations in vignetting attenuation.
- A consequence of this vignetting is that, during segmentation, a homogeneous scene area may be divided into separate image segments. Vignetting may also result in heterogeneous image areas being segmented together due to lower contrasts at greater radial distances. For better stability in vignetting estimation, the effects of vignetting on segmentation should be minimized.
- To this end, the estimated vignetting function is accounted for in segmentations during the subsequent iteration.
- Specifically, the vignetting corrected image computed with the currently estimated parameters is used in place of the original input image when determining segmentation boundaries.
- The corrected image is used only for segmentation purposes, and the colors in the original image are still used for vignetting estimation.
- A segment is considered to be reliable if it exhibits consistency with physical vignetting characteristics and conforms to vignetting observed elsewhere in the image.
- Whereas texture generally varies at high spatial frequencies, vignetting is a low frequency phenomenon with a wavelength on the order of the image width. This difference in frequency characteristics allows vignetting effects to be discerned in many textured segments.
- At the end of each iteration, an estimate of the vignetting function is determined and used as the reference vignetting function in the following iteration.
- As the iterations progress, the computed weights will more closely reflect the quality of segment data. In cases where the texture or shading in a segment coincidentally approximates the characteristics of vignetting, it will be assigned a low weight if it is inconsistent with the vignetting observed in other parts of the image.
- A stepwise method is used for parameter initialization prior to estimating the vignetting function.
- First, initial values of the relative scene radiances λs are determined for each segment without consideration of vignetting and tilt parameters. For pixels i and j at the same radius r but from different segments, their vignetting attenuation should be equal, so their image values z i and z j should differ only in scene radiance.
- A set of pixels at a given radius and within the same segment may be represented by a single pixel with the average color of the set.
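- This averaging can be sketched as follows; the function name and the integer radius binning are assumptions made for illustration, not the patent's API.

```python
import math
from collections import defaultdict

# Hypothetical helper collapsing same-radius pixels of a segment into one
# averaged sample, reducing the number of terms the estimation must handle
# while preserving the radial attenuation information.

def radial_samples(pixels, center):
    """pixels: [((u, v), value), ...]; returns {radius_bin: mean value}."""
    bins = defaultdict(list)
    for (u, v), z in pixels:
        r = round(math.hypot(u - center[0], v - center[1]))
        bins[r].append(z)
    return {r: sum(zs) / len(zs) for r, zs in bins.items()}
```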
- Next, the local tilt parameters τs , χs are estimated by optimizing the energy function in Eq. 5 with the other parameters fixed to their initialization values. After this initialization stage, all the parameters are jointly optimized in Eq. 5 to finally estimate the vignetting function.
- The optimizations of Eq. 5 and Eq. 8 are computed using the Levenberg-Marquardt technique.
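- Levenberg-Marquardt alternates damped Gauss-Newton steps, shrinking the damping when a step lowers the cost and growing it otherwise. The scalar sketch below applies this update to fitting only the linear geometric term G(r) = 1 − α r, not the full joint optimization of Eq. 5; it is illustrative only.

```python
# Minimal scalar Levenberg-Marquardt iteration, fitting G(r) = 1 - alpha*r
# to sample pairs (r, G). The damping factor lam interpolates between a
# Gauss-Newton step (lam -> 0) and a small gradient step (lam large).

def lm_fit_alpha(rs, gs, alpha0=0.0, lam=1e-3, iters=50):
    """Fit G(r) = 1 - alpha*r to (rs, gs) pairs; returns the alpha estimate."""
    a = alpha0
    cost = lambda a_: sum((g - (1 - a_ * r)) ** 2 for r, g in zip(rs, gs))
    for _ in range(iters):
        # residual e_i = g_i - (1 - a*r_i), with Jacobian J_i = de_i/da = r_i
        JtJ = sum(r * r for r in rs)
        Jte = sum(r * (g - (1 - a * r)) for r, g in zip(rs, gs))
        step = -Jte / (JtJ * (1.0 + lam))   # damped Gauss-Newton step
        if cost(a + step) < cost(a):
            a, lam = a + step, lam * 0.5    # accept: relax damping
        else:
            lam *= 10.0                     # reject: increase damping
    return a
```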
Description
- Vignetting refers to the phenomenon of brightness attenuation away from an image's center, and is an artifact that is prevalent in photography. Although perhaps not objectionable to the average viewer at low levels, it can significantly impair computer vision algorithms that rely on precise intensity data to analyze a scene. Applications in which vignetting distortions can be particularly damaging include photometric methods such as shape from shading, appearance-based techniques such as object recognition, and image mosaicing.
- Several mechanisms may be responsible for vignetting effects. Some arise from the optical properties of camera lenses, the most prominent of which is off-axis illumination fall-off, or the cos⁴ law. These contributions to vignetting result from foreshortening of the lens when viewed at increasing angles from the optical axis. Other sources of vignetting are geometric in nature. For example, light arriving at oblique angles to the optical axis may be partially obstructed by the field stop or lens rim.
- To determine the vignetting effects in an image, the most straightforward approach involves capturing an image completely spanned by a uniform scene region, such that brightness variations can solely be attributed to vignetting. In such a calibration image, ratios of intensity with respect to the pixel on the optical axis describe the vignetting function. Suitable imaging conditions for this approach, however, can be challenging to produce due to uneven illumination and camera tilt, and the vignetting measurements are valid only for images captured by the camera under the same camera settings. Moreover, a calibration image can be recorded only if the camera is at hand; consequently, this approach cannot be used to correct images captured by unknown cameras, such as images downloaded from the web.
- A vignetting function can alternatively be computed from image sequences with overlapping views of an arbitrary static scene. In this approach, point correspondences are first determined in the overlapping image regions. Since a given scene point has a different position in each image, its brightness may be differently attenuated by vignetting. From the aggregate attenuation information from all correspondences, the vignetting function can be accurately recovered without assumptions on the scene.
- These previous approaches require either a collection of overlapping images or an image of a calibration scene. However, often in practice only a single image of an arbitrary scene is available. The previous techniques gain information for vignetting correction from pixels with equal scene radiance but differing attenuations of brightness. For a single arbitrary input image, this information becomes challenging to obtain, since it is difficult to identify pixels having the same scene radiance while differing appreciably in vignetting attenuation.
- The present invention is directed toward a system and process to correct for vignetting in an image using just that image. To maximize the use of available information in the image, the technique extracts vignetting information from both textured and untextured regions. In extracting vignetting information from a given region, advantage is taken of physical vignetting characteristics to diminish the influence of textures and other sources of intensity variation. Vignetting information from disparate image regions is also employed to ensure consistency across the regions. As a result, large image regions appropriate for vignetting function estimation are identified. To counter the adverse effects of vignetting on segmentation, the present system and process iteratively re-segments the image with respect to progressively refined estimates of the vignetting function. Additionally, spatial variations in segmentation scale are used in a manner that enhances collection of reliable vignetting data.
- In general, the present vignetting correction system and process involves first segmenting an input image using a spatially varying segmentation scale that produces reliable segments exhibiting vignetting that is consistent with prescribed physical vignetting characteristics and that conforms to vignetting observed in other segments. A vignetting function is then estimated for the input image that defines a corrected intensity for each pixel using the reliable segments. This last-computed vignetting function estimate is applied to each pixel of the input image to produce a current refined image. The segmenting and vignetting function estimating is repeated using the current refined image in lieu of the input image and the resulting estimate is applied to the input image to produce a new current refined image. This continues until it is determined that the vignetting function estimate has converged. At that point the last produced current refined image is designated as the final vignetting corrected image.
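- The iterative loop just described can be sketched as follows, with segment, estimate, and apply_fn standing in for the segmentation, estimation, and correction steps; treating the estimate as a single scalar for the convergence test is a simplification made for illustration.

```python
# Sketch of the outer correction loop. Note the asymmetry described in the
# text: segmentation runs on the current refined image, while estimation
# and correction always use the original input image.

def correct_vignetting(image, segment, estimate, apply_fn,
                       tol=1e-4, max_iters=20):
    prev_est, refined = None, image
    for _ in range(max_iters):
        segments = segment(refined)       # segment the current refined image
        est = estimate(image, segments)   # estimate from the ORIGINAL colors
        refined = apply_fn(image, est)    # correct the original input image
        if prev_est is not None and abs(est - prev_est) < tol:
            break                         # vignetting estimate has converged
        prev_est = est
    return refined                        # final vignetting corrected image
```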
- The segmenting of the input image or the current refined image is accomplished in one embodiment of the present system and process by first segmenting the image at a prescribed initial segmentation scale. Then, for each segment, a reliability factor is computed that represents the degree to which the segment under consideration exhibits consistency with physical vignetting characteristics and conforms to vignetting observed in other segments. In addition, it is determined if the reliability factor of the segment under consideration exceeds a prescribed reliability threshold that indicates the segment is acceptable for vignetting estimation. Whenever the reliability factor does not exceed the reliability threshold, the segment under consideration is recursively segmented at increasingly finer segmentation scales until each of the resulting smaller segments has a reliability factor that exceeds the reliability threshold or becomes less than a prescribed minimum segment size. Those segments determined to have a reliability factor which exceeds the reliability threshold and which are at least as large as the minimum segment size are designated to be reliable segments.
- One way of recursively segmenting a segment under consideration involves dividing the segment using a finer segmentation scale than that last employed on the segment to produce a plurality of smaller segments. For each smaller segment produced, its reliability factor is computed and it is determined if the factor exceeds the reliability threshold. Whenever the reliability factor of the smaller segment under consideration does not exceed the reliability threshold, it is determined if the size of the smaller segment is less than a prescribed minimum segment size. If the segment under consideration exceeds the minimum segment size, then the foregoing is repeated for that segment.
- It is noted that while the foregoing limitations in existing vignetting correction schemes described in the Background section can be resolved by a particular implementation of a system and process according to the present invention, this system and process is in no way limited to implementations that just solve any or all of the noted disadvantages. Rather, the present system and process has a much wider application as will become evident from the descriptions to follow.
- It should also be noted that this Summary is provided to introduce a selection of concepts, in a simplified form, that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. In addition to the just described benefits, other advantages of the present invention will become apparent from the detailed description which follows hereinafter when taken in conjunction with the drawing figures which accompany it.
- The specific features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
- FIG. 1 is a diagram depicting a general purpose computing device constituting an exemplary system for implementing the present invention.
- FIG. 2 is a flow chart diagramming a generalized process to correct for vignetting in an image using just the image itself according to the present invention.
- FIG. 3 is a diagram illustrating the geometry associated with the Kang-Weiss vignetting model tilt factor.
- FIG. 4 is a diagram depicting the computer program modules making up one embodiment of a vignetting correction system according to the present invention.
- FIGS. 5A-B are a continuing flow chart diagramming a process to correct for vignetting in an image which represents one way of implementing the vignetting correction system of FIG. 4 .
- In the following description of embodiments of the present invention, reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
- 1.0 The Computing Environment
- Before providing a description of embodiments of the present invention, a brief, general description of a suitable computing environment in which portions of the invention may be implemented will be described.
FIG. 1 illustrates an example of a suitable computing system environment 100. The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100. - The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
- With reference to FIG. 1 , an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110. Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus. -
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed bycomputer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed bycomputer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of the any of the above should also be included within the scope of computer readable media. - The
system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137. - The
computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,FIG. 1 illustrates ahard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, amagnetic disk drive 151 that reads from or writes to a removable, nonvolatilemagnetic disk 152, and anoptical disk drive 155 that reads from or writes to a removable, nonvolatileoptical disk 156 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. Thehard disk drive 141 is typically connected to thesystem bus 121 through a non-removable memory interface such asinterface 140, andmagnetic disk drive 151 andoptical disk drive 155 are typically connected to thesystem bus 121 by a removable memory interface, such asinterface 150. - The drives and their associated computer storage media discussed above and illustrated in
FIG. 1 , provide storage of computer readable instructions, data structures, program modules and other data for thecomputer 110. InFIG. 1 , for example,hard disk drive 141 is illustrated as storingoperating system 144,application programs 145,other program modules 146, andprogram data 147. Note that these components can either be the same as or different fromoperating system 134, application programs 135,other program modules 136, andprogram data 137.Operating system 144,application programs 145,other program modules 146, andprogram data 147 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into thecomputer 110 through input devices such as akeyboard 162 andpointing device 161, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to theprocessing unit 120 through auser input interface 160 that is coupled to thesystem bus 121, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). Amonitor 191 or other type of display device is also connected to thesystem bus 121 via an interface, such as avideo interface 190. In addition to the monitor, computers may also include other peripheral output devices such asspeakers 197 andprinter 196, which may be connected through an outputperipheral interface 195. A camera 192 (such as a digital/electronic still or video camera, or film/photographic scanner) capable of capturing a sequence ofimages 193 can also be included as an input device to thepersonal computer 110. Further, while just one camera is depicted, multiple cameras could be included as input devices to thepersonal computer 110. Theimages 193 from the one or more cameras are input into thecomputer 110 via anappropriate camera interface 194. 
This interface 194 is connected to the system bus 121, thereby allowing the images to be routed to and stored in the RAM 132, or one of the other data storage devices associated with the computer 110. However, it is noted that image data can be input into the computer 110 from any of the aforementioned computer-readable media as well, without requiring the use of the camera 192. - The
computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in FIG. 1 . The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. - When used in a LAN networking environment, the
computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs 185 as residing on memory device 181. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used. - The exemplary operating environment having now been discussed, the remaining parts of this description section will be devoted to a description of the program modules embodying the invention.
- 2.0 Single-Image Vignetting Correction System and Process
- The present computer-based system and process to correct for vignetting in an image using just the image itself is generally accomplished via the following process actions, as shown in the high-level flow diagram of
FIG. 2 . First, an input image is segmented using a spatially varying segmentation scale that produces reliable segments (process action 200 ). In the context of the present vignetting correction system, a segment is considered reliable if it exhibits vignetting that is consistent with prescribed physical vignetting characteristics and if it conforms to vignetting observed in other segments. A vignetting function is then estimated for the input image that defines a corrected intensity for each pixel using the reliable segments (process action 202 ). The last-computed vignetting function estimate is applied to each pixel of the input image to produce a current refined image (process action 204 ). The segmenting and vignetting function estimating actions are repeated using the current refined image in lieu of the input image and the resulting estimate is applied to the input image to produce a new current refined image (process action 206 ). This continues until it is determined that the vignetting function estimate has converged (process action 208 ). At that point the last produced current refined image is designated as the final vignetting corrected image (process action 210 ). - A description of each of the foregoing process actions, as well as the basis of the system and process, will be provided in the sections that follow.
- 2.1 Vignetting Model
- Most methods for vignetting correction use a parametric vignetting model to simplify estimation and minimize the influence of image noise. Typically used are empirical models such as polynomial functions and hyperbolic cosine functions. Existing models based on physical considerations include those which account for off-axis illumination and light path obstruction, and the Kang and Weiss model which additionally incorporates scene-based tilt effects. Tilt describes intensity variations within a scene region that are caused by differences in distance from the camera, i.e., closer points appear brighter due to the inverse square law of illumination. Although not intrinsic to the imaging system, the intensity attenuation effects caused by tilt must be accounted for in single-image vignetting estimation. Besides having physically meaningful parameters, an important property of physical models is that their highly structured and constrained form facilitates estimation in cases where data is sparse and/or noisy. In this work, an extension of the Kang-Weiss model, originally designed for a single planar surface of constant albedo, is used to handle multiple surfaces of possibly different color. Additionally, the linear model of geometric vignetting is generalized to a polynomial form.
- 2.2 Kang-Weiss Model
- Consider an image with zero skew, an aspect ratio of 1, and principal point at the image center with image coordinates (u, v) = (0, 0). In the Kang-Weiss vignetting model, brightness ratios are described in terms of an off-axis illumination factor A, a geometric factor G, and a tilt factor T. For a pixel i at (ui, vi) with distance ri from the image center, the vignetting function φ is expressed as
φi = Ai Gi Ti = θri Ti for i = 1 . . . N, (1)
where
N is the number of pixels in the image, f is the effective focal length of the camera, and α1 represents a coefficient in the geometric vignetting factor. The tilt parameters χ and τ respectively describe the rotation angle of a planar scene surface around an axis parallel to the optical axis, and the rotation angle around the x-axis of this rotated plane, as illustrated in FIG. 3 . - The model φ in Eq. (1) can be decomposed into the global vignetting function θ of the camera and the local tilt effects T in the scene. Note that θ is rotationally symmetric; thus, it can be specified as a 1D function of the radial distance ri from the image center.
- 2.3 Extended Vignetting Model
- In an arbitrary input image, numerous segments with different local tilt factors may exist. To account for multiple surfaces in an image, an extension of the Kang-Weiss model is employed in which different image segments can have different tilt angles. The tilt factor of Eq. (2) is modified to
where si indexes the segment containing pixel i. - The linear geometric factor is also extended to a more general polynomial form:
Gi = (1 − α1 ri − . . . − αp ri^p), (4)
where p represents a polynomial order that can be arbitrarily set according to a desired precision. This generalized representation provides a closer fit to the geometric vignetting effects observed in practice. In contrast to using a polynomial as the overall vignetting model, representing only the geometric component by a polynomial allows the overall model to explicitly account for local tilt effects and global off-axis illumination.
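- Evaluating the polynomial geometric factor of Eq. (4) is straightforward; the sketch below takes the coefficients α1 . . . αp as a list, so the polynomial order p is set implicitly by the list's length.

```python
# Sketch of the generalized polynomial geometric factor of Eq. (4):
# G(r) = 1 - a1*r - ... - ap*r^p, with the order p given by the number
# of coefficients supplied.

def geometric_factor(r, alphas):
    """alphas = [a1, ..., ap]; returns G(r)."""
    return 1.0 - sum(a * r ** (k + 1) for k, a in enumerate(alphas))
```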
- 2.4 Vignetting Energy Function - Let the scene radiance Is of a segment s be expressed by its ratio λs to the scene radiance I0 of the center pixel, i.e., Is = λs I0. Given an image with M segments of different scene radiance, the vignetting solution can be formulated as the minimization of the following energy function:
where i indexes the Ns pixels in segment s, zi is the pixel value in the vignetted image, and wi is a weight assigned to pixel i. In color images, z represents an RGB vector. For ease of explanation, z is expressed herein as a single color channel, and overall energies are averaged from separate color components. - In this energy function, the parameters to be estimated are the focal length f in the off-axis component, the α coefficients of the geometric factor, the tilt angles τs and χs, the scene radiance of the center pixel I0, and the radiance ratio λs of each segment. In processing multiple image segments, minimization of this energy function can intuitively be viewed as simultaneously solving for local segment parameters λs, τs and χs that give a smooth alignment of vignetting attenuations between segments, while optimizing the underlying global vignetting parameters f, α1, . . . , αp. With the estimated parameters, the vignetting corrected image is then given by zi/θri . Note that the local tilt factor is retained in the corrected image so as not to produce an unnatural-looking result.
- 2.5 Vignetting Correction
- One embodiment of the present computer-based system to correct for vignetting in an image is based on the program modules depicted in
FIG. 4 . In each iteration, the input image is first segmented at a coarse scale using a segmentation module 400 , and for each segment a reliability measure of the segment data for vignetting estimation is computed via the reliability module 402 . For segments that exhibit greater consistency with physical vignetting characteristics and with other segments, a higher reliability factor is assigned by module 402 . Low reliability factors may indicate segments with multiple distinct surfaces, so these segments are recursively segmented by the segmentation module 400 at incrementally finer segmentation scales until the reliability factors of the smaller segments exceed a threshold or the segments become negligible in size. With this segmentation approach, the segmentation scale varies spatially in a manner that facilitates collection of vignetting data. - After spatially adaptive segmentation, segments with high reliability factors are used by the vignetting
estimation module 404 to estimate the vignetting function parameters. Since the preceding segmentations may be corrupted by the presence of vignetting, the subsequent iteration of the procedure re-computes segmentation boundaries from a refined image corrected by the vignetting correction module 406 using the current vignetting estimate. Better segmentation results lead to improved vignetting estimates, and these iterations are repeated until the estimates converge. At convergence, the last computed vignetting function is applied by the vignetting correction module 406 to the input image to produce a final vignetting corrected image. - One way of implementing the foregoing system is outlined in the process flow diagram of FIGS. 5A-B. First, in
process action 500, the image that is to be corrected is input. The input image (or a current refined image if in existence) is then segmented at a segmentation scale prescribed for the current segmentation level (process action 502), and a previously unselected one of the resulting image segments associated with the current segmentation level is selected (process action 504). Initially, the current segmentation level is the first level. The significance of the segmentation level will become apparent shortly. In addition, as described previously, the segmentation scale becomes finer with each successive segmentation level. It is next determined whether the size of the selected segment (e.g., as measured by the number of pixels in the segment) is smaller than a prescribed minimum segment size (process action 506). If not, then a reliability factor is computed for the selected segment in process action 508. As indicated previously, this reliability factor represents the degree to which the segment exhibits consistency with physical vignetting characteristics and conforms to vignetting observed in other segments. It is next determined if the reliability factor exceeds the prescribed reliability threshold (process action 510). As described above, this threshold is indicative of whether a segment is acceptable for vignetting estimation purposes. If it is determined that the reliability factor does not exceed the reliability threshold, then the segmentation level is incremented by one (process action 512) and the selected segment is segmented using the incremented segmentation scale (process action 513). Process actions 502 through 518 are then performed as appropriate. In this way the selected segment is divided using a finer segmentation scale assigned to the new segmentation level, resulting in two or more smaller segments. - However, if it is determined in
process action 506 that the size of the selected segment is smaller than the minimum segment size, or it is determined in process action 510 that the reliability factor does exceed the reliability threshold, then it is determined whether there are any previously unselected segments remaining in the current segmentation level (process action 514). If so, then process actions 504 through 518 are repeated as appropriate to consider other segments in the current level. However, if there are no previously unselected segments remaining in the current segmentation level, then it is determined if there is a segmentation level preceding the current level (process action 516). If there is, the segmentation level is decremented by one (process action 518), and process actions 502 through 518 are repeated as appropriate starting with process action 514 as shown in FIG. 5A. - If, however, it is determined in
process action 516 that there are no segmentation levels preceding the current level, then the process continues with the computation of a current vignetting function estimate for the image using just those segments determined to have a reliability factor which exceeds the reliability threshold and which are at least as large as the minimum segment size (process action 520). As described previously, the vignetting function defines a corrected intensity for each pixel of the input image. The current vignetting function estimate is then applied to each pixel of the input image to produce a current refined image (process action 522). Next, it is determined if more than one vignetting function estimate has been computed (process action 524). If so, then it is determined if the current vignetting function estimate has converged (process action 526). As described earlier, the vignetting function estimate has converged if it has not changed more than a prescribed amount in the last iteration. If it is determined that the vignetting function estimate has converged, the current refined image is designated as the final vignetting corrected image (process action 528) and the process ends. However, if in process action 524 it is determined that only one vignetting function estimate has been computed so far, or it is determined in process action 526 that the current vignetting function estimate has not converged, then process actions 502 through 528 are repeated as appropriate for each successive iteration of the correction process until the vignetting function estimate converges. - The major components of the vignetting correction system and process will now be described in more detail in the following sections.
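The overall loop of FIGS. 5A-B can be sketched as follows. The helper callables (segment, estimate_theta, apply_theta) are hypothetical stand-ins for modules 400 through 406, and the parameter-difference convergence test abbreviates the function-difference measure described later in Section 2.5.1.2.

```python
import numpy as np

def correct_vignetting(image, segment, estimate_theta, apply_theta,
                       tol=1e-3, max_iter=10):
    """Sketch of the iterative flow of FIGS. 5A-B with hypothetical helper
    callables: re-segment the current refined image, estimate vignetting
    parameters from the original image, correct, and stop once consecutive
    parameter estimates agree to within tol."""
    refined, prev = image, None
    for _ in range(max_iter):
        segments = segment(refined)              # boundaries from refined image
        theta = estimate_theta(image, segments)  # colors from original image
        refined = apply_theta(image, theta)      # current refined image
        if prev is not None and np.max(np.abs(np.asarray(theta) - np.asarray(prev))) < tol:
            break                                # estimate has converged
        prev = theta
    return refined
```

With a fixed-point estimator (one whose output stops changing between iterations), the loop terminates after the second pass, which mirrors the convergence test of process action 526.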
- 2.5.1 Vignetting-Based Image Segmentation
- To obtain information for vignetting estimation, pixels having the same scene radiance need to be identified in the input image. The present system and process addresses this problem with unique adaptations to existing segmentation methods. To facilitate the location of reliable vignetting data, segmentation scales are spatially varied over the image, and the adverse effects of vignetting on segmentation are progressively reduced as the vignetting function estimate is refined.
- 2.5.1.1 Spatial Variations in Scale
- Sets of pixels with the same scene radiance provide more valuable information if they span a broader range of vignetting attenuations. In the context of segmentation, larger segments are therefore preferable. While relatively large segments can be obtained with a coarse segmentation scale, many of these segments may be unreliable for vignetting estimation since they may contain multiple surfaces or include areas with non-uniform illumination. In an effort to gain useful data from an unreliable segment, the present system and process recursively segments it into smaller segments that potentially contain better data for vignetting estimation. This recursive segmentation proceeds until segments have a high reliability weight or become of negligible size according to a threshold (such as 225 pixels as used in tested embodiments). Segments of very small size generally contain insignificant changes in vignetting attenuation, and the inclusion of such segments would bias the optimization process.
- In the recursive segmentation procedure, incrementally finer scales of segmentation are used. For methods such as mean shift and segment competition, segmentation scale is controlled by a parameter on variation within each feature class, where a feature may simply be pixel intensity or color. With such approaches, a finer partitioning of a low-weight segment can be obtained by segmenting the segment with a decreased parameter value. In other techniques such as graph cuts and Blobworld, the degree of segmentation is set according to a given number of feature classes in an image. There exist various ways to set the number of classes, including user specification, data clustering, and minimum description length criteria. For recursive segmentation, since each segment belongs to a certain class, a finer partitioning of the segment can be obtained by segmenting it with the number of feature classes specified as two.
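The recursive refinement described above might be organized as follows. The interface, the reliability threshold of 0.5, and the max_level recursion guard are assumptions of this sketch; the patent's only stated threshold values are the minimum segment size (such as 225 pixels) and an otherwise unspecified reliability threshold.

```python
def collect_reliable_segments(segment_fn, reliability_fn, region, level=0,
                              min_size=225, rel_thresh=0.5, max_level=8):
    """Sketch of the spatially adaptive recursion (hypothetical interface):
    a segment is kept if it is reliable, dropped if it is negligibly small,
    and otherwise re-segmented at the next finer scale."""
    reliable = []
    for seg in segment_fn(region, level):   # finer partitioning at higher levels
        if len(seg) < min_size:
            continue                        # negligible size: would bias the fit
        if reliability_fn(seg) > rel_thresh:
            reliable.append(seg)            # consistent with vignetting: keep
        elif level < max_level:             # max_level is only a recursion guard
            reliable += collect_reliable_segments(
                segment_fn, reliability_fn, seg, level + 1,
                min_size, rel_thresh, max_level)
    return reliable
```

For a mean-shift-style segmenter, segment_fn would shrink its feature-variation parameter as level increases; for graph cuts or Blobworld, it would re-segment with the number of feature classes fixed at two, as described above.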
- With this general adaptation, segmentation scale varies over an image in a manner designed to maximize the quality of vignetting data. In tested embodiments, graph cut segmentation was employed with per-pixel feature vectors composed of six color/texture attributes. The color components are the RGB values, and the local texture descriptors are the polarity, anisotropy and normalized texture contrast.
- 2.5.1.2 Accounting for Vignetting
- Two pixels of the same scene radiance may exhibit significantly different image intensities due to variations in vignetting attenuation. In segmentation, a consequence of this vignetting is that a homogeneous scene area may be divided into separate image segments. Vignetting may also result in heterogeneous image areas being segmented together due to lower contrasts at greater radial distances. For better stability in vignetting estimation, the effects of vignetting on segmentation should be minimized.
- To address vignetting effects in segmentation, after each iteration, the estimated vignetting function is accounted for in segmentations during the subsequent iteration. Specifically, the vignetting corrected image computed with the currently estimated parameters is used in place of the original input image in determining segmentation boundaries. The corrected image is used only for segmentation purposes, and the colors in the original image are still used for vignetting estimation.
- As the segmentations improve from reduced vignetting effects, the estimated vignetting function also is progressively refined. This process is repeated until the difference between vignetting functions in consecutive iterations falls below a prescribed threshold, where the difference is measured as
(1/k) Σr |θ(t)(r) − θ(t−1)(r)|,
where θ(t) represents the global vignetting function at iteration t, and radial distances r are sampled at k uniform intervals. In tested embodiments, k was set to 100.
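Assuming the vignetting functions can be evaluated at arbitrary normalized radii, this convergence measure can be computed directly:

```python
import numpy as np

def vignetting_change(theta_t, theta_prev, k=100):
    """Average absolute difference between consecutive vignetting function
    estimates, sampled at k uniform radial intervals (k = 100 matches the
    tested embodiments); each theta maps normalized radius to attenuation."""
    r = np.linspace(0.0, 1.0, k)
    return float(np.mean(np.abs(theta_t(r) - theta_prev(r))))
```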
- 2.5.2 Segment Weighting
- To guide the vignetting-based segmentation process and promote robust vignetting estimation, the reliability of data in each image segment is evaluated and used as a segment weight. A segment is considered to be reliable if it exhibits consistency with physical vignetting characteristics and conforms to vignetting observed elsewhere in the image.
- Initially, no vignetting estimates are known, so reliability is measured in the first iteration according to how closely the segment data can be represented by the physically-based vignetting model. For a given segment, an estimate θ′ of the vignetting function is computed similarly to the technique to be described in Section 2.5.3, and the weight for segment s is computed as a decreasing function of the error in fitting θ′ to the segment data. Each pixel is assigned the weight of its segment. - The presence of texture in a segment does not preclude it from having a high weight. In contrast to textures, which typically exhibit high-frequency variations, vignetting is a low-frequency phenomenon with a wavelength on the order of the image width. This difference in frequency characteristics allows vignetting effects to be discerned in many textured segments.
- At the end of each iteration, an estimate of the vignetting function is determined and used as θ′ in the following iteration. As the vignetting parameters are progressively refined, computed weights will more closely reflect the quality of segment data. In cases where the texture or shading in a segment coincidentally approximates the characteristics of vignetting, it will be assigned a low weight if it is inconsistent with the vignetting observed in other parts of the image.
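A possible form of this weight is sketched below. The patent's specific weighting formula is not shown in this text, so the exponential of the mean relative fitting error used here is an assumption, chosen only to illustrate a decreasing function of the fit error of θ′.

```python
import numpy as np

def segment_weight(z, r, lam_s, I0, theta):
    """Sketch of a segment reliability weight (assumed form): compare the
    segment's pixel values z at radii r against the prediction of the
    current vignetting estimate theta, and decay with the mean relative
    fitting error so that a perfect fit yields weight 1.0."""
    predicted = lam_s * I0 * theta(r)                 # model value per pixel
    err = np.mean(np.abs(z - predicted) / predicted)  # mean relative error
    return float(np.exp(-err))                        # decreasing in the error
```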
- 2.5.3 Vignetting Estimation
- For a collection of segments, the many unknown parameters create a complicated solution space. To simplify optimization, a stepwise method is used for parameter initialization prior to estimating the vignetting function. In the first step, initial values of relative scene radiances λs are determined for each segment without consideration of vignetting and tilt parameters. For pixels i and j at the same radius r but from different segments, their vignetting attenuation should be equal, so their image values zi and zj should differ only in scene radiance. Based on this property, relative scene radiance values are initialized by minimizing the function
Σ(i,j) (zi λsj − zj λsi)^2,
taken over pairs of pixels i, j at equal radius but in different segments. The λs values are solved in the least squares sense by singular value decomposition (SVD) on the system of homogeneous equations
zi λsj − zj λsi = 0,
where λsi and λsj are unknowns. To expedite minimization of this function, a set of pixels at a given radius and within the same segment may be represented by a single pixel with the average color of the set. - With the initial values of λs, the second step initializes the parameters f, I0, and α1, . . . , αp, where p is the polynomial order used in the geometric factor of Eq. 4. Ignoring local tilt factors, this is computed with the
energy function of Eq. 8, which has the form of Eq. 5 with the tilt factors omitted. This function is iteratively solved by incrementally increasing the polynomial order from k=1 to k=p, and using the previously computed polynomial coefficients α1, . . . , αk-1 as initializations. In tested embodiments, a polynomial order of p=4 was employed. - In the third step, the local tilt parameters τs, χs are estimated by optimizing the energy function in Eq. 5 with the other parameters fixed to their initialization values. After this initialization stage, all the parameters are jointly optimized in Eq. 5 to finally estimate the vignetting function. The optimizations of Eq. 5 and Eq. 8 are computed using the Levenberg-Marquardt technique.
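The first two initialization steps can be sketched as follows, assuming NumPy and SciPy. The pair-based SVD system and the tilt-free residual follow the description above, but the function names, the normalization of λ, the off-axis term 1/(1 + (r/f)^2)^2, and the use of scipy.optimize.least_squares as the Levenberg-Marquardt solver are choices of this sketch rather than details from the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def init_radiance_ratios(pairs, n_segments):
    """Step 1 (sketch): each entry of `pairs` is (zi, si, zj, sj) for two
    pixels at the same radius in different segments, giving the homogeneous
    equation zi * lambda_sj - zj * lambda_si = 0.  The lambda vector is the
    right singular vector for the smallest singular value."""
    M = np.zeros((len(pairs), n_segments))
    for row, (zi, si, zj, sj) in enumerate(pairs):
        M[row, sj] += zi
        M[row, si] -= zj
    lam = np.abs(np.linalg.svd(M)[2][-1])  # radiance ratios are positive
    return lam / lam.max()                 # fix the global scale ambiguity

def init_global_params(r, z, lam, p=4):
    """Step 2 (sketch): fit f, I0 and alpha_1..alpha_p with tilt ignored,
    using a Levenberg-Marquardt solver (scipy's 'lm' method)."""
    def theta(rr, f, alphas):
        off_axis = 1.0 / (1.0 + (rr / f) ** 2) ** 2
        geometric = 1.0 + sum(a * rr ** (k + 1) for k, a in enumerate(alphas))
        return off_axis * geometric
    def residuals(params):
        f, I0, alphas = params[0], params[1], params[2:]
        return z - lam * I0 * theta(r, f, alphas)
    x0 = np.concatenate(([1.0, z.max()], np.zeros(p)))
    sol = least_squares(residuals, x0, method='lm')
    return sol.x[0], sol.x[1], sol.x[2:]
```

In a full implementation, the values returned here would seed the joint optimization of Eq. 5, with the tilt parameters estimated in the third step before all parameters are refined together.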
- Although the subject matter has been described in language specific to structural features and/or methodological acts, it is understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (20)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/384,063 US7548661B2 (en) | 2005-12-23 | 2006-03-17 | Single-image vignetting correction |
PCT/US2006/048935 WO2007114847A2 (en) | 2005-12-23 | 2006-12-20 | Single-image vignetting correction |
KR1020087014748A KR101330361B1 (en) | 2005-12-23 | 2006-12-20 | Single-image vignetting correction |
CN2006800481325A CN101341733B (en) | 2005-12-23 | 2006-12-20 | Single-image vignetting correction |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US75380205P | 2005-12-23 | 2005-12-23 | |
US11/384,063 US7548661B2 (en) | 2005-12-23 | 2006-03-17 | Single-image vignetting correction |
Publications (2)
Publication Number | Publication Date |
---|---|
US20070146506A1 true US20070146506A1 (en) | 2007-06-28 |
US7548661B2 US7548661B2 (en) | 2009-06-16 |
Family
ID=38193139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/384,063 Expired - Fee Related US7548661B2 (en) | 2005-12-23 | 2006-03-17 | Single-image vignetting correction |
Country Status (4)
Country | Link |
---|---|
US (1) | US7548661B2 (en) |
KR (1) | KR101330361B1 (en) |
CN (1) | CN101341733B (en) |
WO (1) | WO2007114847A2 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8842190B2 (en) | 2008-08-29 | 2014-09-23 | Adobe Systems Incorporated | Method and apparatus for determining sensor format factors from image metadata |
US8340453B1 (en) | 2008-08-29 | 2012-12-25 | Adobe Systems Incorporated | Metadata-driven method and apparatus for constraining solution space in image processing techniques |
US8724007B2 (en) | 2008-08-29 | 2014-05-13 | Adobe Systems Incorporated | Metadata-driven method and apparatus for multi-image processing |
US8368773B1 (en) | 2008-08-29 | 2013-02-05 | Adobe Systems Incorporated | Metadata-driven method and apparatus for automatically aligning distorted images |
KR101589310B1 (en) * | 2009-07-08 | 2016-01-28 | 삼성전자주식회사 | Lens shading correction method and apparatus |
US8577140B2 (en) | 2011-11-29 | 2013-11-05 | Microsoft Corporation | Automatic estimation and correction of vignetting |
DE102018115991B4 (en) * | 2018-07-02 | 2023-12-07 | Basler Ag | DIGITAL CIRCUIT FOR CORRECTING A VIGNETTING EFFECT IN PIXEL VALUES OF AN ELECTRONIC CAMERA IMAGE |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5108179A (en) * | 1989-08-09 | 1992-04-28 | Myers Stephen A | System and method for determining changes in fluorescence of stained nucleic acid in electrophoretically separated bands |
US5436980A (en) * | 1988-05-10 | 1995-07-25 | E. I. Du Pont De Nemours And Company | Method for determining quality of dispersion of glass fibers in a thermoplastic resin preform layer and preform layer characterized thereby |
US5602896A (en) * | 1994-12-22 | 1997-02-11 | U.S. Philips Corporation | Composing an image from sub-images |
US6610984B2 (en) * | 2000-03-17 | 2003-08-26 | Infrared Components Corporation | Method and apparatus for correction of microbolometer output |
US6919892B1 (en) * | 2002-08-14 | 2005-07-19 | Avaworks, Incorporated | Photo realistic talking head creation system and method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4135210A1 (en) * | 1991-10-25 | 1993-04-29 | Broadcast Television Syst | METHOD AND CIRCUIT FOR CORRECTING SHADOWS |
US6670988B1 (en) * | 1999-04-16 | 2003-12-30 | Eastman Kodak Company | Method for compensating digital images for light falloff and an apparatus therefor |
EP1447977A1 (en) * | 2003-02-12 | 2004-08-18 | Dialog Semiconductor GmbH | Vignetting compensation |
KR20040088830A (en) * | 2003-04-12 | 2004-10-20 | 엘지전자 주식회사 | Method for removing vignetting effect of charge coupled device |
KR100558330B1 (en) * | 2003-10-08 | 2006-03-10 | 한국전자통신연구원 | Method for compensating vignetting effect of imaging system and imaging apparatus using the same |
-
2006
- 2006-03-17 US US11/384,063 patent/US7548661B2/en not_active Expired - Fee Related
- 2006-12-20 WO PCT/US2006/048935 patent/WO2007114847A2/en active Application Filing
- 2006-12-20 CN CN2006800481325A patent/CN101341733B/en active Active
- 2006-12-20 KR KR1020087014748A patent/KR101330361B1/en active IP Right Grant
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7920171B2 (en) * | 2007-05-18 | 2011-04-05 | Aptina Imaging Corporation | Methods and apparatuses for vignetting correction in image signals |
US20080284879A1 (en) * | 2007-05-18 | 2008-11-20 | Micron Technology, Inc. | Methods and apparatuses for vignetting correction in image signals |
US20090190006A1 (en) * | 2008-01-25 | 2009-07-30 | Huggett Anthony R | Methods, systems and apparatuses for pixel signal correction using elliptical hyperbolic cosines |
US20100110241A1 (en) * | 2008-11-04 | 2010-05-06 | Aptina Imaging Corporation | Multi illuminant shading correction using singular value decomposition |
US8089534B2 (en) | 2008-11-04 | 2012-01-03 | Aptina Imaging Corporation | Multi illuminant shading correction using singular value decomposition |
KR101666137B1 (en) | 2009-07-21 | 2016-10-13 | 디엑스오 랩스 | Method for estimating a defect in an image-capturing system, and associated systems |
WO2011010040A1 (en) * | 2009-07-21 | 2011-01-27 | Dxo Labs | Method for estimating a defect in an image-capturing system, and associated systems |
FR2948521A1 (en) * | 2009-07-21 | 2011-01-28 | Dxo Labs | METHOD OF ESTIMATING A DEFECT OF AN IMAGE CAPTURE SYSTEM AND ASSOCIATED SYSTEMS |
KR20120062722A (en) * | 2009-07-21 | 2012-06-14 | 디엑스오 랩스 | Method for estimating a defect in an image-capturing system, and associated systems |
CN102577355A (en) * | 2009-07-21 | 2012-07-11 | 德克索实验室 | Method for estimating a defect in an image-capturing system, and associated systems |
US8736683B2 (en) | 2009-07-21 | 2014-05-27 | Dxo Labs | Method for estimating a defect in an image-capturing system, and associated systems |
WO2012004764A1 (en) * | 2010-07-08 | 2012-01-12 | Yeda Research And Development Co. Ltd. | Geometric modelization of images and applications |
US8571343B2 (en) | 2011-03-01 | 2013-10-29 | Sharp Laboratories Of America, Inc. | Methods and systems for document-image correction |
US9157801B2 (en) | 2011-06-21 | 2015-10-13 | Alakai Defense Systems, Inc. | Laser detection system having an output beam directed through a telescope |
US8823841B2 (en) | 2012-06-20 | 2014-09-02 | Omnivision Technologies, Inc. | Method and apparatus for correcting for vignetting in an imaging system |
US20150030258A1 (en) * | 2013-07-26 | 2015-01-29 | Qualcomm Incorporated | System and method of corner noise reduction in an image |
WO2015020958A3 (en) * | 2013-08-07 | 2015-04-09 | Qualcomm Incorporated | Dynamic color shading correction |
US9270959B2 (en) | 2013-08-07 | 2016-02-23 | Qualcomm Incorporated | Dynamic color shading correction |
KR20160040596A (en) * | 2013-08-07 | 2016-04-14 | 퀄컴 인코포레이티드 | Dynamic color shading correction |
KR101688373B1 (en) | 2013-08-07 | 2016-12-20 | 퀄컴 인코포레이티드 | Dynamic color shading correction |
US20160189353A1 (en) * | 2014-12-23 | 2016-06-30 | Postech Academy - Industry Foundation | Method for vignetting correction of image and apparatus therefor |
US9740958B2 (en) * | 2014-12-23 | 2017-08-22 | Postech Academy-Industry Foundation | Method for vignetting correction of image and apparatus therefor |
Also Published As
Publication number | Publication date |
---|---|
WO2007114847A2 (en) | 2007-10-11 |
CN101341733A (en) | 2009-01-07 |
KR20080077987A (en) | 2008-08-26 |
CN101341733B (en) | 2011-04-20 |
KR101330361B1 (en) | 2013-11-15 |
WO2007114847A3 (en) | 2007-12-21 |
US7548661B2 (en) | 2009-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7548661B2 (en) | Single-image vignetting correction | |
Zheng et al. | Single-image vignetting correction | |
US11810272B2 (en) | Image dehazing and restoration | |
US9479754B2 (en) | Depth map generation | |
Artusi et al. | A survey of specularity removal methods | |
Finlayson et al. | Entropy minimization for shadow removal | |
US10013764B2 (en) | Local adaptive histogram equalization | |
US7986830B2 (en) | Radiometric calibration from a single image | |
US9087266B2 (en) | Illumination spectrum recovery | |
US9836855B2 (en) | Determining a depth map from images of a scene | |
US8351740B2 (en) | Correlatability analysis for sparse alignment | |
US11282216B2 (en) | Image noise reduction | |
US20170178297A1 (en) | Method and system for dehazing natural images using color-lines | |
US11227367B2 (en) | Image processing device, image processing method and storage medium | |
US9569699B2 (en) | System and method for synthesizing portrait sketch from a photo | |
Zheng et al. | Single-image vignetting correction | |
US9760997B2 (en) | Image noise reduction using lucas kanade inverse algorithm | |
US8577140B2 (en) | Automatic estimation and correction of vignetting | |
Guo et al. | Haze and thin cloud removal using elliptical boundary prior for remote sensing image | |
Drew et al. | The zeta-image, illuminant estimation, and specularity manipulation | |
Marukatat | Image enhancement using local intensity distribution equalization | |
AU2018202801A1 (en) | Method, apparatus and system for producing a foreground map | |
Barnard | Modeling scene illumination colour for computer vision and image reproduction: A survey of computational approaches | |
AU2015202072B2 (en) | Illumination spectrum recovery | |
CN114549374A (en) | De-noising an image rendered using Monte Carlo rendering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, STEPHEN;GUO, BAINING;KANG, SING BING;AND OTHERS;REEL/FRAME:017406/0928;SIGNING DATES FROM 20060306 TO 20060316 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034542/0001 Effective date: 20141014 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20210616 |