|Publication number||US20050213845 A1|
|Publication type||Application|
|Application number||US 10/708,771|
|Publication date||Sep 29, 2005|
|Filing date||Mar 24, 2004|
|Priority date||Mar 24, 2004|
|Also published as||US7623728|
|Inventors||Gopal Avinash, Rakesh Lal|
|Original assignee||General Electric Company|
The present disclosure relates generally to a method and product for processing digital images, and particularly to a method and product for processing digital images with detection and suppression of background noise.
A digital image is a collection of digital information in the form of pixels that may be processed to provide a visual image representative of an object-of-interest. The digital image may be obtained from digital cameras, digital video, digital scanners, or the like, or may be digitized from an analog film. An exemplary object-of-interest is a biological object, and an exemplary digital scanner is a magnetic resonance imaging (MRI) scanner. While reference may be made herein to MRI as an exemplary scanning method, it will be appreciated that other scanning methods may be employed, such as computed tomography (CT), ultrasound, or X-ray, for example. Digital images of objects that are smaller than the field of view of the scan contain background regions where the signal intensity should be zero. However, the intensities of these background pixels may vary due to noise and other artifacts, such as ghosting and gradient-warping correction, for example. Though noise may be present throughout an image, it is most prominent in the background regions, where the signal is expected to be zero. The presence of background intensity in an image unnecessarily distracts an observer viewing the final image. Accordingly, there is a need in the art for a digital image processing method and product that improves image quality by detecting and suppressing background noise.
Embodiments of the invention include a method for processing a digital image. A foreground region relating to an imaged object is estimated, a background region relating to other than the imaged object is estimated, and by using the image, the estimated foreground region and the estimated background region, a transition region disposed between the foreground region and the background region is calculated. The estimated foreground region, the estimated background region, and the calculated transition region, each include a separate set of pixels that may each be processed separately for suppressing pixel intensities in the estimated background region and improving image quality.
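The three-way partition described above can be sketched with simple global thresholds. The thresholds, array values, and function name below are illustrative assumptions, not taken from the disclosure; in the embodiments the band between the two estimates is resolved into the transition region by the GCHT method described further on.

```python
import numpy as np

def estimate_regions(image, t_high, t_low):
    """Partition an image into estimated foreground, estimated background,
    and a candidate band in between. The two thresholds are hypothetical
    tuning values, not values specified by the disclosure."""
    foreground = image >= t_high            # confidently part of the object
    background = image < t_low              # confidently empty background
    candidate = ~foreground & ~background   # pixels still to be resolved
    return foreground, background, candidate

# Toy 3x3 intensity image for illustration.
img = np.array([[0.0, 0.1, 0.5],
                [0.2, 0.8, 0.9],
                [0.0, 0.6, 1.0]])
fg, bg, mid = estimate_regions(img, t_high=0.6, t_low=0.15)
```

The three masks are disjoint and cover the image, so each set of pixels can be handed to its own processing chain, as the embodiments describe.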
Other embodiments of the invention include a computer program product for processing a digital image, the product including a storage medium, readable by a processing circuit, storing instructions for execution by the processing circuit for performing embodiments of the aforementioned method.
Referring to the exemplary drawings wherein like elements are numbered alike in the accompanying Figures:
Embodiments of the invention provide a method and product for processing digital images, and particularly a method and product for processing digital images with detection and suppression of background noise. In an exemplary embodiment, a biological object is scanned using an MRI imaging technique, and the resulting image is digitally processed to detect and suppress background noise, thereby improving the signal-to-noise ratio (SNR) of the resulting image. The digital image is separated into foreground, background, and transition regions, thereby enabling each region to be analyzed and processed separately. The processing techniques applied involve low-level image processing techniques, such as thresholding and binary image subtraction, for example. While embodiments described herein may employ MRI as an exemplary imaging technique, it will be appreciated that the disclosed invention may also employ other imaging techniques, such as CT, ultrasound, X-ray, and the like, for example.
Referring now to
Information from original image 110, background mask 130 and transition mask 140, is used by a background noise suppression method 300, discussed hereinafter in reference to
Background detection method 200 will now be discussed with reference to
From original image 110, a gradient magnitude image 208 is computed that provides a value for the gradient of the intensity of each pixel of original image 110. A gradient-constrained-hysteresis-threshold (GCHT) method 400 is applied to original image 110 using gradient magnitude image 208 to calculate an initial transition region 210. Initial transition region 210 is disposed between estimated foreground and background regions 204, 206, which may be seen by referring to the illustration depicted
In an embodiment, initial transition region 210 is calculated to be the region containing pixels that have a morphological connection (that is, a connection made possible via a morphological operation) to a pixel of estimated foreground region 204, that have an intensity greater than a low threshold t_low 404, and that have a gradient magnitude within a gradient tolerance value g_tol 406 of the gradient magnitude of the foreground pixel to which they are connected, which is depicted in algorithm form in
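A minimal sketch of such a gradient-constrained region growth follows. The 4-connected neighborhood, the breadth-first traversal, and the reading that each candidate's gradient magnitude is compared against the neighbor it grows from are illustrative assumptions; function and variable names are hypothetical.

```python
import numpy as np
from collections import deque

def gcht(image, grad, foreground, t_low, g_tol):
    """Gradient-constrained hysteresis threshold (sketch): grow a
    transition region outward from foreground seed pixels. A neighbor
    joins only if its intensity exceeds t_low AND its gradient magnitude
    is within g_tol of the pixel it grows from, so growth follows edges
    rather than crossing them."""
    h, w = image.shape
    transition = np.zeros_like(foreground)
    visited = foreground.copy()
    queue = deque(zip(*np.nonzero(foreground)))  # seed with foreground pixels
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                if image[ny, nx] > t_low and abs(grad[ny, nx] - grad[y, x]) <= g_tol:
                    visited[ny, nx] = True
                    transition[ny, nx] = True
                    queue.append((ny, nx))
    return transition

# Toy example: a bright right column seeds growth into the middle column,
# except where intensity falls below t_low.
img = np.array([[0.0, 0.3, 1.0],
                [0.0, 0.3, 1.0],
                [0.0, 0.05, 1.0]])
grad = np.zeros_like(img)   # flat gradient for simplicity
fg = img >= 0.9
trans = gcht(img, grad, fg, t_low=0.1, g_tol=0.5)
```

In this toy run the two 0.3-intensity pixels join the transition region while the 0.05-intensity pixel stays in the background.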
In GCHT 400, the values for low threshold t_low 404 and gradient tolerance value g_tol 406 may be user adjusted, thereby making GCHT method 400 tunable.
Estimated foreground region 204, estimated background region 206, and calculated initial transition region 210, each comprise a separate set of pixels that may each be processed separately for suppressing background noise and improving image quality. For example, and as suggested by GCHT method 400 depicted in
An advantage associated with GCHT method 400 is that in an image having strong edges, the growing region grows along the edge and not across the edge. In this way, moderate intensity background artifacts that are connected to an edge will not get incorporated into the transition region. As a result, the t_low threshold 404 may be set quite low to capture low intensity foreground regions into the transition region without incorporating artifacts with higher intensities that are in the background. Also, this approach will capture the low intensity foreground regions in images that suffer from intensity variation as long as the image gradient in the regions of intensity variation is less than the gradient tolerance value g_tol.
At the beginning of the iterative process, the gradient tolerance value g_tol 406 is set 510 to a low gradient tolerance value tol_low 408, after which GCHT method 400 is applied as discussed previously. Next, at 520, it is determined whether the foreground-plus-transition region (for example, estimated foreground region 204 plus initial transition region 210 for the first pass in the iterative process) has grown by more than a defined number of pixels size_t. In response to the determination at 520 being yes, all pixels that have been added to the foreground since the last labeling are labeled as incremental transition region "i", and the iteration counter is incremented by one (i=i+1). These actions are depicted at 530. In response to the determination at 520 being no, gradient tolerance value g_tol 406 is incremented by a gradient tolerance value step g_tol_step 410 until a high gradient tolerance value tol_high 412 condition is met or exceeded. These actions are depicted at 540 and 550, respectively, where 550 is labeled "repeat while: g_tol<tol_high". With each iteration of do-loop 505, incremental transition regions (i, i+1, i+2, etc.) are calculated, with each incremental transition region having an incrementally larger gradient tolerance value g_tol 406 between tol_low 408 and tol_high 412. The iterative do-loop 505 continues until tol_high 412 is met or exceeded.
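The do-loop logic can be sketched abstractly as follows. Here `grow` is a hypothetical stand-in for one GCHT pass that returns the foreground-plus-transition region at a given tolerance, and the integer tolerance values are toy numbers; only the control flow (record an increment when growth exceeds size_t, otherwise loosen g_tol, repeat while g_tol < tol_high) follows the text.

```python
def sweep_transitions(grow, tol_low, tol_high, g_tol_step, size_t):
    """Iterative GCHT sweep (control-flow sketch). `grow(g_tol)` returns
    the pixel ids of the foreground-plus-transition region at tolerance
    g_tol; each time the region grows by more than size_t pixels the new
    pixels are recorded as the next incremental transition region."""
    g_tol = tol_low
    labeled = set(grow(g_tol))     # first pass at the lowest tolerance
    increments = []
    while g_tol < tol_high:
        new = set(grow(g_tol)) - labeled
        if len(new) > size_t:      # grew enough: record increment i, i+1, ...
            increments.append(new)
            labeled |= new
        else:                      # otherwise loosen the tolerance
            g_tol += g_tol_step
    return increments

# Toy stand-in: at tolerance t the grown region is the pixel ids 0..t-1.
regions = sweep_transitions(lambda t: range(t), tol_low=1, tol_high=5,
                            g_tol_step=1, size_t=0)
```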
In an embodiment employing the iterative method of
Upon completion of do-loop 505, that is, when the condition "repeat while: g_tol<tol_high" 550 is no longer satisfied, all of the incremental transition regions (i, i+1, i+2, etc.) 530 are merged into a single transition region 560 using a focus parameter 570. In an embodiment, the focus parameter 570 is the percentage of the total number of transition regions that will be kept as transition mask 140. For example, where there are ten transition regions (i=1 through 10) and the focus parameter 570 is set to 80% (focus=0.8), then the first eight transition regions (i=1 through 8) will be merged into transition mask 140 and the last two transition regions will be discarded. Accordingly, in the iterative approach, the adjustment of single transition region 560 by focus parameter 570 results in transition mask 140. An advantage of using a focus parameter is that the internal parameters of method 100 may be fixed and the entire process may be made controllable or tunable by the single focus parameter 570. The higher the focus parameter 570, the more liberal method 100 is in accepting pixels into transition mask 140.
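A sketch of the merge step, assuming the incremental regions are kept in calculation order; the function name and set-of-pixel-ids representation are illustrative.

```python
def merge_with_focus(increments, focus):
    """Merge incremental transition regions into a single transition mask,
    keeping only the first `focus` fraction of the regions and discarding
    the rest (e.g. ten regions with focus=0.8 keeps the first eight)."""
    keep = int(len(increments) * focus)
    mask = set()
    for region in increments[:keep]:
        mask |= region
    return mask

regions = [{i} for i in range(10)]      # ten single-pixel toy regions
mask = merge_with_focus(regions, 0.8)   # keeps regions 0 through 7
```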
In an embodiment, the following parameters may be tunable:
However, some of these parameters may be set to a defined value. For example, low gradient tolerance value tol_low 408 and high gradient tolerance value tol_high 412 may be set to a specific percentage of the average gradient magnitude of the entire original image 110, and gradient tolerance value step g_tol_step 410 may be set according to a desired number of iterations, such as (tol_high-tol_low)/10 where ten iterations are desired for generating ten incremental transition regions 530, for example.
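For instance, the fixed-value scheme might be computed as below. The 10% and 50% figures and the toy gradient image are assumptions for illustration; only the (tol_high - tol_low)/10 step for ten iterations follows the text.

```python
import numpy as np

# Toy gradient-magnitude image standing in for gradient image 208.
grad = np.array([[0.0, 2.0],
                 [4.0, 6.0]])

mean_grad = grad.mean()                 # average gradient magnitude: 3.0
tol_low = 0.10 * mean_grad              # e.g. 10% of the mean (assumed figure)
tol_high = 0.50 * mean_grad             # e.g. 50% of the mean (assumed figure)
g_tol_step = (tol_high - tol_low) / 10  # ten iterations desired
```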
Once transition mask 140 is defined, an object region, also referred to as the object image, may be defined as being the union of the estimated foreground region 204 and the transition mask 140, which may then undergo morphological operations for improving the image quality. Here, the term object region refers to those pixels of the original image that are determined via method 100 to be representative of the biological object under observation. Such morphological operations may include erosion of the image and dilation back to the transition mask to remove small objects and bridges that may connect background noise to the main object, and dilation of the image and erosion back to the original size to fill small cracks in the edges of the image 212. The zero mask may then be combined with the object image, which may then undergo a connected-components morphological operation to fill small holes in the resulting image 214. Upon completion of the hole filling process 214, the zero mask is removed from the image resulting in the final object mask 216. The final object mask 216 is then separated into a final foreground region and a final transition region 218. In accordance with embodiments of the invention disclosed herein, the initial estimated foreground region 204 is labeled the final foreground mask 152, the difference between the object region and the final foreground mask is labeled the final transition mask 154, and the remainder of the image is labeled the final background mask 156, which collectively make up the filtered image 150, best seen by referring back to
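The cleanup sequence above (erode-then-dilate to remove small objects and bridges, dilate-then-erode to fill small cracks, then hole filling) corresponds to standard morphological opening, closing, and fill operations, and might be sketched as follows. The 3x3 structuring element and the use of scipy.ndimage are illustrative choices, not specified by the disclosure; note that these scipy operators treat pixels outside the array as background, so object pixels touching the image border may be trimmed.

```python
import numpy as np
from scipy import ndimage

def clean_object_mask(object_mask):
    """Morphological cleanup of the object region (estimated foreground
    plus transition mask): opening removes small objects and thin bridges,
    closing fills small cracks along the edges, and binary_fill_holes
    closes interior holes."""
    s = np.ones((3, 3), dtype=bool)                       # illustrative element
    mask = ndimage.binary_opening(object_mask, structure=s)
    mask = ndimage.binary_closing(mask, structure=s)
    return ndimage.binary_fill_holes(mask)
```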
Regarding the suppression of background noise, and referring now to
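While the suppression details are described with reference to the figures, a minimal sketch of per-region suppression is given below, assuming background pixels are simply attenuated toward zero while foreground and transition pixels are left untouched; that blending policy and all names are assumptions, since the disclosure processes each region with its own operations.

```python
import numpy as np

def suppress_background(image, fg_mask, tr_mask, attenuation=0.0):
    """Suppress pixel intensities outside the final foreground and
    transition masks. With the default attenuation of 0.0 the background
    region is zeroed; the object and its edges pass through unchanged."""
    out = image.copy()
    background = ~(fg_mask | tr_mask)   # everything outside the object
    out[background] *= attenuation
    return out

img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
fg = np.array([[True, False], [False, False]])
tr = np.array([[False, True], [False, False]])
out = suppress_background(img, fg, tr)  # zeros the two background pixels
```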
Embodiments of the invention may be provided in executable instruction form on a storage medium, such as memory 605 or in the form of a hard drive, a floppy disk or a CD-ROM 610 for example, that is readable by a processing circuit, such as processor 615 for example, the processing circuit being in signal communication via application software with a graphical user interface at a computer, such as computer 600 for example, whereby a user may execute the embedded instructions for practicing the disclosed invention. The instructions may be loaded into and/or executed by computer 600, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein when the computer program code is loaded into and executed by computer 600, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits. The technical effect of the executable instructions is to enhance the quality of a digital image, of a scanned biological object for example, by detecting and suppressing background noise in the image.
As disclosed, some embodiments of the invention may include some of the following advantages: use of low level image processing tools (no statistical models, no neural network based methods, for example) for improving image quality; use of method parameters that may be tunable or predefined; the ability to capture low intensity foreground regions into the transition mask without incorporating artifacts with higher intensities that are in the background; and, the ability to execute morphological and filtering operations on discrete regions of the image thereby providing greater control over the generation and quality of the final object image.
While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to a particular embodiment disclosed as the best or only mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims. Moreover, the use of the terms first, second, etc. do not denote any order or importance, but rather the terms first, second, etc. are used to distinguish one element from another. Furthermore, the use of the terms a, an, etc. do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
|Cited patent||Filing date||Publication date||Applicant||Title|
|US4951140 *||Feb 22, 1989||Aug 21, 1990||Kabushiki Kaisha Toshiba||Image encoding apparatus|
|US5157740 *||Feb 7, 1991||Oct 20, 1992||Unisys Corporation||Method for background suppression in an image data processing system|
|US5268967 *||Jun 29, 1992||Dec 7, 1993||Eastman Kodak Company||Method for automatic foreground and background detection in digital radiographic images|
|US5368033 *||Apr 20, 1993||Nov 29, 1994||North American Philips Corporation||Magnetic resonance angiography method and apparatus employing an integration projection|
|US5631975 *||Oct 19, 1994||May 20, 1997||Koninkl Philips Electronics Nv||Image segmentation device|
|US5694478 *||Dec 15, 1994||Dec 2, 1997||Minnesota Mining And Manufacturing Company||Method and apparatus for detecting and identifying microbial colonies|
|US5825910 *||Oct 2, 1996||Oct 20, 1998||Philips Electronics North America Corp.||Automatic segmentation and skinline detection in digital mammograms|
|US5915044 *||Apr 13, 1998||Jun 22, 1999||Intel Corporation||Encoding video images using foreground/background segmentation|
|US6061476 *||Nov 24, 1997||May 9, 2000||Cognex Corporation||Method and apparatus using image subtraction and dynamic thresholding|
|US6081626 *||May 9, 1995||Jun 27, 2000||International Business Machines Corporation||Method and system for background removal in electronically scanned images|
|US6088392 *||May 30, 1997||Jul 11, 2000||Lucent Technologies Inc.||Bit rate coder for differential quantization|
|US6173083 *||Apr 14, 1998||Jan 9, 2001||General Electric Company||Method and apparatus for analyzing image structures|
|US6240215 *||Sep 23, 1998||May 29, 2001||Xerox Corporation||Method and apparatus for digital image processing with selectable background suppression data acquisition modes|
|US6243070 *||Nov 13, 1998||Jun 5, 2001||Microsoft Corporation||Method and apparatus for detecting and reducing color artifacts in images|
|US6275304 *||Dec 22, 1998||Aug 14, 2001||Xerox Corporation||Automated enhancement of print quality based on feature size, shape, orientation, and color|
|US6337925 *||May 8, 2000||Jan 8, 2002||Adobe Systems Incorporated||Method for determining a border in a complex scene with applications to image masking|
|US6453069 *||Nov 17, 1997||Sep 17, 2002||Canon Kabushiki Kaisha||Method of extracting image from input image using reference image|
|US6507618 *||Apr 25, 2000||Jan 14, 2003||Hewlett-Packard Company||Compressed video signal including independently coded regions|
|US6580812 *||Dec 21, 1998||Jun 17, 2003||Xerox Corporation||Methods and systems for automatically adding motion lines representing motion to a still image|
|US6661918 *||Dec 3, 1999||Dec 9, 2003||Interval Research Corporation||Background estimation and segmentation based on range and color|
|US7391895 *||Jul 24, 2003||Jun 24, 2008||Carestream Health, Inc.||Method of segmenting a radiographic image into diagnostically relevant and diagnostically irrelevant regions|
|US20010055421 *||Mar 26, 2001||Dec 27, 2001||Martin Baatz||Method of iterative segmentation of a digital picture|
|US20020037103 *||Dec 26, 2000||Mar 28, 2002||Hong Qi He||Method of and apparatus for segmenting a pixellated image|
|US20030044045 *||Jun 4, 2001||Mar 6, 2003||University Of Washington||Video object tracking by estimating and subtracting background|
|US20030152285 *||Jan 25, 2003||Aug 14, 2003||Ingo Feldmann||Method of real-time recognition and compensation of deviations in the illumination in digital color images|
|US20050055658 *||Sep 9, 2003||Mar 10, 2005||International Business Machines Corporation||Method for adaptive segment refinement in optical proximity correction|
|Citing patent||Filing date||Publication date||Applicant||Title|
|US7734089 *||Aug 23, 2005||Jun 8, 2010||Trident Microsystems (Far East) Ltd.||Method for reducing mosquito noise|
|US7899271||Jul 16, 2007||Mar 1, 2011||Raytheon Company||System and method of moving target based calibration of non-uniformity compensation for optical imagers|
|US8331695 *||Feb 12, 2009||Dec 11, 2012||Xilinx, Inc.||Integrated circuit having a circuit for and method of updating parameters associated with a background estimation portion of a video frame|
|US8416986 *||Oct 29, 2009||Apr 9, 2013||Raytheon Company||Methods and systems for processing data using non-linear slope compensation|
|US8738678||Sep 14, 2011||May 27, 2014||Raytheon Company||Methods and systems for determining an enhanced rank order value of a data set|
|US8792711||Dec 2, 2009||Jul 29, 2014||Hewlett-Packard Development Company, L.P.||System and method of foreground-background segmentation of digitized images|
|US20110103692 *||Oct 29, 2009||May 5, 2011||Raytheon Company||Methods and systems for processing data using non-linear slope compensation|
|US20110150317 *||Jun 23, 2011||Electronics And Telecommunications Research Institute||System and method for automatically measuring antenna characteristics|
|US20120063656 *||Sep 13, 2010||Mar 15, 2012||University of Southern California||Efficient mapping of tissue properties from unregistered data with low signal-to-noise ratio|
|WO2009048660A2 *||Jul 15, 2008||Apr 16, 2009||Raytheon Co||System and method of moving target based calibration of non-uniformity compensation for optical imagers|
|WO2011068508A1 *||Dec 2, 2009||Jun 9, 2011||Hewlett-Packard Development Company, Lp||System and method of foreground-background segmentation of digitized images|
|WO2013070132A1 *||Nov 5, 2012||May 16, 2013||Flir Systems Ab||Image processing method for dynamic auto-adjustment of an ir image|
|U.S. classification||382/275, 382/173|
|International classification||G06K9/40, G06K9/34, G06T5/00|
|Cooperative classification||G06T2207/10088, G06T2207/30004, G06T2207/20012, G06T5/002, G06T2207/20144|
|European classification||G06T5/00D1, G06T5/00D|
|Mar 24, 2004||AS||Assignment|
Owner name: GENERAL ELECTRIC COMPANY, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AVINASH, GOPAL B.;LAL, RAKESH MOHAN;REEL/FRAME:014444/0451
Effective date: 20040317
|Oct 26, 2010||CC||Certificate of correction|
|Mar 14, 2013||FPAY||Fee payment|
Year of fee payment: 4