WO2004027710A1 - Method of color image processing to eliminate shadows and reflections. - Google Patents
- Publication number
- WO2004027710A1 WO2004027710A1 PCT/EP2003/050403 EP0350403W WO2004027710A1 WO 2004027710 A1 WO2004027710 A1 WO 2004027710A1 EP 0350403 W EP0350403 W EP 0350403W WO 2004027710 A1 WO2004027710 A1 WO 2004027710A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- component
- components
- color image
- functions
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 15
- 230000009466 transformation Effects 0.000 claims description 25
- 238000001914 filtration Methods 0.000 claims description 2
- 230000011218 segmentation Effects 0.000 abstract description 6
- 230000003071 parasitic effect Effects 0.000 description 3
- XLYOFNOQVPJJNP-UHFFFAOYSA-N water Substances O XLYOFNOQVPJJNP-UHFFFAOYSA-N 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 239000003086 colorant Substances 0.000 description 2
- 238000003709 image segmentation Methods 0.000 description 2
- 238000010521 absorption reaction Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
Classifications
-
- G06T5/94—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Definitions
- the present invention relates to a method for the processing of color images to eliminate shadows. It can be applied especially to contrast enhancement in remote surveillance, and to image processing before segmentation in mobile robotics.
- An object of the invention is a method of color image processing, comprising the following steps:
- the color image is converted into an intermediate image comprising components that depend solely on the H and S components of the original image in an HSV or HLS representation;
- the intermediate image comprises at least two components X and Y, determined by functions especially of the H component, these functions taking the same value when the H component is zero or equal to one;
- FIG. 3 is a graph showing the steps of implementation of the invention according to an advantageous mode using texture attributes
- Figure 5 shows an alternative implementation of the example shown in figure 3;
- Figure 6 is an exemplary histogram of a new image obtained by the implementation of the invention, to which a filtering of pixels and an adjustment of the dynamic range is applied;
- RGB red, green, blue space.
- the color of each pixel is represented therein by three components R (red), G (green), B (blue). These components may be represented on three axes 11 , 12, 13 in Cartesian coordinates.
- the pure colors red, green, blue, cyan, magenta and yellow form six corners of a cube 10 in this space. Black and white form the two remaining, opposite corners of this cube.
- the segment connecting these points is a main diagonal of this cube on which only the luminance L varies.
- HSV hue, saturation, value
- the color is represented therein by three components H, S, and V.
- H, S, and V components may be represented in a referential system of cylindrical coordinates.
- the H component is represented by an angle 21.
- the S and V components are represented on two axes 22 and 23.
- the H component is the hue. This component makes it possible for example to distinguish red and yellow.
- the S component is saturation, namely the purity of the color. This component makes it possible for example to distinguish red and pink.
- the V component is the value or intensity, namely the quantity of light. This component makes it possible, for example, to distinguish between dark gray and light gray, or again between dark red and light red. It is assumed hereinafter in the description that the components H, S, V are normalized between 0 and 1. The following explanations may be transposed in an obvious way if these components are not normalized between 0 and 1.
- the passage from the RGB representation to the HSV representation is done according to known analytical formulae.
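The "known analytical formulae" for the RGB-to-HSV passage are implemented, with all components normalized between 0 and 1 as assumed in the description, by Python's standard `colorsys` module; a minimal sketch (the pixel values are illustrative):

```python
import colorsys

# One normalized RGB pixel, components in [0, 1] (illustrative values).
r, g, b = 0.8, 0.2, 0.2

# colorsys applies the standard analytical RGB -> HSV formulae; the
# returned H, S and V are all normalized between 0 and 1.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)
```

For a full image, the same conversion would simply be applied pixel by pixel (or vectorized) over the R, G, B planes.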
- the primary colors and black form a six-faced pseudo-cone (hexcone) 20.
- the plane with a constant value of V corresponds to the projection of the RGB cube parallel to its main diagonal.
- the value or intensity V does not carry a great deal of useful information because the elements of the image to be segmented are distinguished far more by their spectral properties than by their reflection or their total absorption. Furthermore, the value or intensity V carries information that leads the segmentation algorithms into error, since shadowed and non-shadowed zones of a same object (a road for example) have very different intensities (with variation of intensity at the source). Similarly, a water film on the road gives rise to a strong reflection and, therefore, a very different intensity from that of a dry or simply wet portion on the same road (with variation of intensity by modification of the properties of reflection).
- the color image is converted into an intermediate image in which the V component is absent.
- This intermediate image includes components that depend solely on the H and S components of the original image in an HSV representation.
- components X and Y are used in the intermediate image. These components are determined by functions especially of the H component, and take the same value when the H component is zero or equal to one. Thus, the similarities induced by the H component, proper to the system of cylindrical coordinates, are eliminated.
- G_Y(H) = sin(2πH - φ), where φ is a constant.
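The text specifies a sine of 2πH for the Y component; a natural companion for the X component (an assumption here, not stated in the text) is the corresponding cosine. Both are periodic with period 1 in H, so H = 0 and H = 1, which denote the same hue, map to the same (X, Y) point:

```python
import math

PHI = 0.0  # the constant phi; its exact value is not fixed by the text

def g_x(h):
    # Assumed companion function for X: periodic in H with period 1.
    return math.cos(2 * math.pi * h - PHI)

def g_y(h):
    # G_Y(H) = sin(2*pi*H - phi), as given in the text.
    return math.sin(2 * math.pi * h - PHI)

# H = 0 and H = 1 (the same hue) now map to the same (X, Y) point,
# which removes the singularity of the cylindrical hue coordinate.
assert abs(g_x(0.0) - g_x(1.0)) < 1e-9
assert abs(g_y(0.0) - g_y(1.0)) < 1e-9
```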
- a third component can be added to this intermediate image: saturation S.
- the intermediate image thus retains the hue information H and the saturation information S, while the singularities have been eliminated.
- the components X and Y are determined by functions not only of the H component but also of the S component. In the same way as above, these functions take the same value when the H component is zero or equal to one. Furthermore, these functions tend toward zero when the S component tends toward zero.
- G′_X(0, S) = G′_X(1, S)
- G′_Y(0, S) = G′_Y(1, S)
- G′_X(H, S) = F_X(S) × G_X(H)
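Reading the relation above as G′_X(H, S) = F_X(S) × G_X(H), any weight F that vanishes at S = 0 satisfies the stated conditions; a minimal sketch taking F(S) = S (the simplest such choice, an assumption since the text does not fix F):

```python
import math

def xy_components(h, s):
    # X and Y depend on both H and S; they coincide at H = 0 and H = 1,
    # and tend toward zero as the saturation S tends toward zero.
    f = s  # F(S) = S: simplest weight vanishing at S = 0 (assumption)
    x = f * math.cos(2 * math.pi * h)
    y = f * math.sin(2 * math.pi * h)
    return x, y

# Unsaturated (gray) pixels collapse to the origin regardless of hue,
# so the singularity of the hue at S = 0 disappears.
x0, y0 = xy_components(0.3, 0.0)
assert abs(x0) < 1e-9 and abs(y0) < 1e-9
```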
- an intermediate image 34 is generated.
- This intermediate image furthermore comprises texture attributes.
- the texture attributes are determined from an image 32 comprising the components X and Y for example, and not from the initial color image 30 encoded in the RGB space. Indeed, it is sought first of all to eliminate the shadows and the reflections.
- the initial color image 30 is converted into an image 31 with two components, the H and S components.
- the image 31 is converted into an image 32 that comprises the two components X and Y defined here above.
- texture attributes are determined for each pixel of the image 32. These attributes may be, for example, the attributes defined by Haralick. Of course, other attributes may be used, through a fractal approach for example.
- An image 33 is obtained with texture attributes.
- the components X and Y and the texture attributes are combined to form the intermediate image 34, to which the remaining processing operations are applied. It is thus possible to apply a multitude of transformations to the intermediate image without being preoccupied by shadows and reflections.
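The combination step above can be sketched as stacking the X and Y planes with one texture plane per component. The sketch below uses a local standard deviation as a simple stand-in texture attribute (the text names Haralick attributes; this substitute and the array sizes are assumptions for illustration):

```python
import numpy as np

def local_std(channel, k=3):
    # Simple stand-in texture attribute: standard deviation over a k x k
    # window around each pixel (Haralick attributes would likewise yield
    # one value per pixel).
    pad = k // 2
    padded = np.pad(channel, pad, mode='edge')
    out = np.empty(channel.shape, dtype=float)
    h, w = channel.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].std()
    return out

# Hypothetical X and Y component planes of a 16 x 16 image.
rng = np.random.default_rng(0)
x = rng.random((16, 16))
y = rng.random((16, 16))

# Intermediate image 34: color components plus texture attributes.
intermediate = np.stack([x, y, local_std(x), local_std(y)], axis=-1)
assert intermediate.shape == (16, 16, 4)
```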
- a new image is generated from the intermediate image obtained.
- a transformation is applied to the intermediate image so as to keep only one component.
- the single component of the new image is a function of the components of the intermediate image.
- the new image is generated by keeping only the first component of the Karhunen-Loeve transformation.
- This transformation is applied in the referential system D1, D2 of the eigenvectors associated with the covariance matrix of the pixels of the intermediate image. There is thus a passage from the X, Y referential system to the D1, D2 referential system. As can be seen in the exemplary distribution of the pixels 40 of the image in these referential systems, this transformation enables the decorrelation of the information present in the image.
- the first component D1 is the most discriminatory component, i.e. it is this component that maximizes the information. This facilitates discrimination in the segmentation processing operations applied to the new image.
- a linear approximation of the Karhunen-Loeve transformation is applied, not the Karhunen-Loeve transformation itself.
- the Karhunen-Loeve transformation is applied to a representative set of images (for example eight images). The images of this set have characteristics similar to the images to which it is sought to apply the linear approximation. An average is then taken of the changes in referential system obtained by the exact Karhunen-Loeve transformation. An average change of referential system is obtained, and this referential system is used as an approximation of the Karhunen-Loeve transformation.
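The Karhunen-Loeve transformation amounts to projecting the pixels onto the eigenvectors of their covariance matrix, and the linear approximation described above averages the changes of referential system obtained from a representative image set. A sketch with NumPy (the synthetic "images", the sign-alignment step, and the set size are assumptions for illustration):

```python
import numpy as np

def kl_axes(pixels):
    # Eigenvectors of the covariance matrix of the (X, Y) pixel values,
    # ordered so that the first axis D1 carries the most variance.
    cov = np.cov(pixels, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argsort(vals)[::-1]]

# Eight representative "images", here synthetic 2-component pixel clouds.
rng = np.random.default_rng(0)
mix = np.array([[2.0, 0.3], [0.3, 0.5]])
images = [rng.normal(size=(1000, 2)) @ mix for _ in range(8)]

# Average the per-image changes of referential system. Eigenvector sign
# is arbitrary, so signs are aligned to a reference before averaging
# (an implementation detail assumed here, not stated in the text).
ref = kl_axes(images[0])
axes = []
for im in images:
    a = kl_axes(im)
    for c in range(a.shape[1]):
        if np.dot(a[:, c], ref[:, c]) < 0:
            a[:, c] = -a[:, c]
    axes.append(a)
mean_axes = np.mean(axes, axis=0)

# Keep only the first component D1 of the projection (the most
# discriminatory component) to form the new single-component image.
d1 = images[0] @ mean_axes[:, 0]
assert d1.shape == (1000,)
```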
- the new image is generated by projecting the components of the intermediate image in the plane in which the dynamic range or the mean standard deviation is the greatest.
- FIG 5 represents an alternative mode of implementation with respect to the example shown in figure 3.
- an image with two components, the H and S components is generated.
- an image 52 comprising the two components X and Y is generated.
- the Karhunen-Loeve transformation (or a linear approximation of this transformation) is applied to the components X and Y.
- the first component of the result of this transformation is kept to form an image 53 comprising a single component. Texture attributes of this image 53 are determined.
- These texture attributes form an image 54.
- the single component of the image 53 and the texture attributes (image 54) are combined to form the intermediate image 55 comprising color information and texture information.
- the remaining processing operations are applied to this intermediate image 55.
- the Karhunen-Loeve transformation can thus be applied a second time, for example to obtain a monochromatic image.
- figure 6 represents an exemplary histogram 60 of the component of the monochromatic image.
- the x-axis 61 represents the values taken by the pixels.
- the y-axis 62 represents the number of pixels in the image having a determined value. A non-negligible range of the values is taken up by pixels that are very small in number at the two extreme ends 63a, 63b of the histogram.
- the lightest and the darkest pixels which are present in very small numbers, are filtered.
- the darkest and the lightest pixels which represent a determined fraction of the total number of pixels of the image, are filtered.
- a number of pixels to be eliminated from the image is determined. This number of pixels may be expressed as a percentage of the total number of pixels of the image.
- a minimum value VMIN and a maximum value VMAX of the pixels are determined, so that the number of pixels of the image whose value is below VMIN or above VMAX is as close as possible to the number of pixels to be eliminated. For all the pixels whose value is beyond the values VMIN and VMAX, the value of the corresponding limit is assigned.
- the values VMIN and VMAX may be determined iteratively. During an iteration, a value 64 of the y-axis 62 is selected, going through the axis from bottom to top. The smallest value VMIN and the greatest value VMAX on the x-axis 61, corresponding to the intersections 64a, 64b of the histogram 60 with the selected value 64, are determined. The operation is stopped when the number of pixels to be eliminated for the selected value is greater than the determined fraction of the total number of pixels of the image. The values taken by the component of the monochromatic image (the most discriminatory component) are distributed in a non-normalized range.
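The VMIN/VMAX search described above is, in effect, the clipping of a fixed fraction of extreme pixels; with NumPy this can be sketched directly through percentiles (the 2% fraction is an illustrative choice, not a value from the text):

```python
import numpy as np

def clip_extremes(img, fraction=0.02):
    # Determine VMIN and VMAX so that about `fraction` of the pixels lie
    # below VMIN or above VMAX, then assign the corresponding limit to
    # every pixel whose value is beyond those limits.
    vmin = np.percentile(img, 100 * fraction / 2)
    vmax = np.percentile(img, 100 * (1 - fraction / 2))
    return np.clip(img, vmin, vmax), vmin, vmax

# Hypothetical monochromatic image with a long-tailed histogram.
rng = np.random.default_rng(1)
img = rng.normal(128, 40, size=(64, 64))

filtered, vmin, vmax = clip_extremes(img)
assert filtered.min() >= vmin and filtered.max() <= vmax
```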
- the parameters of the Karhunen-Loeve transformation are determined according to the selected zone of the image only. This transformation is then applied to the entire image. The pixels of the selected zone occupy only a part 70 of the histogram 60.
- a linear transformation can be applied to bring the values of the pixels into a normalized interval.
- the parameters of the linear transformation are determined so as to bring the minimum value VMIN and the maximum value VMAX of the selected zone between two predetermined levels NMIN and NMAX. These levels NMIN and NMAX are strictly contained in the normalized interval. For example, for an image encoded on 8 bits, the normalized interval being [0, 255], a value of 32 can be taken for NMIN and 224 for NMAX.
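The linear transformation of this step maps [VMIN, VMAX] onto [NMIN, NMAX]; a minimal sketch using the 8-bit example values from the text (the input pixel values are hypothetical):

```python
import numpy as np

NMIN, NMAX = 32, 224   # levels strictly inside [0, 255], as in the example

def rescale(img, vmin, vmax):
    # Linear transformation bringing [vmin, vmax] onto [NMIN, NMAX].
    scale = (NMAX - NMIN) / (vmax - vmin)
    return NMIN + (img - vmin) * scale

# Hypothetical clipped pixel values occupying [50, 200].
img = np.linspace(50.0, 200.0, 11)
out = rescale(img, img.min(), img.max())

# After rescaling, the extremes sit at NMIN and NMAX.
assert abs(out.min() - NMIN) < 1e-9 and abs(out.max() - NMAX) < 1e-9
```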
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2003271758A AU2003271758A1 (en) | 2002-09-20 | 2003-09-12 | Method of color image processing to eliminate shadows and reflections. |
US10/528,447 US7613353B2 (en) | 2002-09-20 | 2003-09-12 | Method of color image processing to eliminate shadows and reflections |
CA2504567A CA2504567C (en) | 2002-09-20 | 2003-09-12 | Method of color image processing to eliminate shadows and reflections |
EP03753590A EP1543473A1 (en) | 2002-09-20 | 2003-09-12 | Method of color image processing to eliminate shadows and reflections. |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0211684 | 2002-09-20 | ||
FR0211684A FR2844898B1 (en) | 2002-09-20 | 2002-09-20 | COLOR IMAGE PROCESSING METHOD FOR REMOVING SHADES AND REFLECTIONS. |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004027710A1 true WO2004027710A1 (en) | 2004-04-01 |
Family
ID=31970866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2003/050403 WO2004027710A1 (en) | 2002-09-20 | 2003-09-12 | Method of color image processing to eliminate shadows and reflections. |
Country Status (6)
Country | Link |
---|---|
US (1) | US7613353B2 (en) |
EP (1) | EP1543473A1 (en) |
AU (1) | AU2003271758A1 (en) |
CA (1) | CA2504567C (en) |
FR (1) | FR2844898B1 (en) |
WO (1) | WO2004027710A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100334593C (en) * | 2006-03-07 | 2007-08-29 | 上海大学 | Shadow zone cutting method of two-dimensional color scene |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7751594B2 (en) | 2003-04-04 | 2010-07-06 | Lumidigm, Inc. | White-light spectral biometric sensors |
US8229185B2 (en) * | 2004-06-01 | 2012-07-24 | Lumidigm, Inc. | Hygienic biometric sensors |
US8787630B2 (en) * | 2004-08-11 | 2014-07-22 | Lumidigm, Inc. | Multispectral barcode imaging |
US8175346B2 (en) * | 2006-07-19 | 2012-05-08 | Lumidigm, Inc. | Whole-hand multispectral biometric imaging |
US7995808B2 (en) | 2006-07-19 | 2011-08-09 | Lumidigm, Inc. | Contactless multispectral biometric capture |
US8355545B2 (en) * | 2007-04-10 | 2013-01-15 | Lumidigm, Inc. | Biometric detection using spatial, temporal, and/or spectral techniques |
WO2008134135A2 (en) * | 2007-03-21 | 2008-11-06 | Lumidigm, Inc. | Biometrics based on locally consistent features |
TWI387354B (en) * | 2008-06-18 | 2013-02-21 | Novatek Microelectronics Corp | Method for adjusting color saturation and image displaying system using the method |
US20100246902A1 (en) * | 2009-02-26 | 2010-09-30 | Lumidigm, Inc. | Method and apparatus to combine biometric sensing and other functionality |
US8731250B2 (en) * | 2009-08-26 | 2014-05-20 | Lumidigm, Inc. | Multiplexed biometric imaging |
US8570149B2 (en) | 2010-03-16 | 2013-10-29 | Lumidigm, Inc. | Biometric imaging using an optical adaptive interface |
EP2821967A1 (en) * | 2013-07-03 | 2015-01-07 | Kapsch TrafficCom AB | Shadow detection in a multiple colour channel image |
CN109919963B (en) * | 2019-03-14 | 2023-03-24 | 吉林大学 | Vehicle paint defect position detection method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08186727A (en) * | 1994-12-28 | 1996-07-16 | Canon Inc | Device and method for processing picture |
JP3607059B2 (en) * | 1997-09-19 | 2005-01-05 | コニカミノルタビジネステクノロジーズ株式会社 | Color feature extraction apparatus, color feature extraction method, and program storage medium |
US7194128B1 (en) * | 2000-07-26 | 2007-03-20 | Lockheed Martin Corporation | Data compression using principal components transformation |
US7133069B2 (en) * | 2001-03-16 | 2006-11-07 | Vision Robotics, Inc. | System and method to increase effective dynamic range of image sensors |
JP3943973B2 (en) * | 2002-03-20 | 2007-07-11 | キヤノン株式会社 | Image processing apparatus and method |
-
2002
- 2002-09-20 FR FR0211684A patent/FR2844898B1/en not_active Expired - Lifetime
-
2003
- 2003-09-12 WO PCT/EP2003/050403 patent/WO2004027710A1/en not_active Application Discontinuation
- 2003-09-12 US US10/528,447 patent/US7613353B2/en active Active
- 2003-09-12 CA CA2504567A patent/CA2504567C/en not_active Expired - Lifetime
- 2003-09-12 AU AU2003271758A patent/AU2003271758A1/en not_active Abandoned
- 2003-09-12 EP EP03753590A patent/EP1543473A1/en not_active Ceased
Non-Patent Citations (3)
Title |
---|
CHINDARO S ET AL: "Directional properties of colour co-occurrence features for lip location and segmentation", AUDIO- AND VIDEO-BASED BIOMETRIC PERSON AUTHENTICATION. THIRD INTERNATIONAL CONFERENCE, AVBPA 2001. PROCEEDINGS (LECTURE NOTES IN COMPUTER SCIENCE VOL.2091), AUDIO- AND VIDEO-BASED BIOMETRIC PERSON AUTHENTICATION. THIRD INTERNATIONAL CONFERENCE, AVBP, 2001, Berlin, Germany, Springer-Verlag, Germany, pages 84 - 89, XP002242333, ISBN: 3-540-42216-1 * |
CUCCHIARA R ET AL: "Improving shadow suppression in moving object detection with HSV color information", ITSC 2001. 2001 IEEE INTELLIGENT TRANSPORTATION SYSTEMS. PROCEEDINGS (CAT. NO.01TH8585), 2001 IEEE INTELLIGENT TRANSPORTATION SYSTEMS. PROCEEDINGS, OAKLAND, CA, USA, 25-29 AUG. 2001, 2001, Piscataway, NJ, USA, IEEE, USA, pages 334 - 339, XP010555793, ISBN: 0-7803-7194-1 * |
GAMBA P ET AL: "A fast algorithm for target shadow removal in monocular colour sequences", PROCEEDINGS. INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (CAT. NO.97CB36144), PROCEEDINGS OF INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, SANTA BARBARA, CA, USA, 26-29 OCT. 1997, 1997, Los Alamitos, CA, USA, IEEE Comput. Soc, USA, pages 436 - 447 vol.1, XP010254201, ISBN: 0-8186-8183-7 * |
Also Published As
Publication number | Publication date |
---|---|
AU2003271758A1 (en) | 2004-04-08 |
CA2504567A1 (en) | 2004-04-01 |
CA2504567C (en) | 2016-05-31 |
FR2844898B1 (en) | 2005-03-11 |
EP1543473A1 (en) | 2005-06-22 |
FR2844898A1 (en) | 2004-03-26 |
US7613353B2 (en) | 2009-11-03 |
US20060045330A1 (en) | 2006-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kahu et al. | Review and evaluation of color spaces for image/video compression | |
EP0723364B1 (en) | Real-time image enhancement techniques | |
Plataniotis et al. | Color image processing and applications | |
US7593570B2 (en) | Image processing method and apparatus and storage medium | |
US7613353B2 (en) | Method of color image processing to eliminate shadows and reflections | |
Al-Dwairi et al. | Optimized true-color image processing | |
US7468812B2 (en) | Image processing apparatus and its method for color correction | |
US6519362B1 (en) | Method of extracting text present in a color image | |
US8494297B2 (en) | Automatic detection and mapping of symmetries in an image | |
US8526731B2 (en) | Hough transform method for linear ribbon and circular ring detection in the gradient domain | |
Yang et al. | Reduction of color space dimensionality by moment-preserving thresholding and its application for edge detection in color images | |
JP2003125423A (en) | Method for digital compression of color image | |
US20040057623A1 (en) | Method for automated processing of digital image data | |
JP2002245446A (en) | Color image processor and its method | |
GB2381689A (en) | Finding and filtering smaller and larger noise areas in an image | |
US6882449B2 (en) | Space varying gamut mapping | |
WO2001043076A1 (en) | Color conversion matrix based on minimal surface theory | |
KR100307822B1 (en) | Method for quantizing color using color coordinates | |
Hague et al. | Histogram equalization of the saturation component for true-color images using the CY color space | |
CN115168629B (en) | Image data compression storage method based on block chain | |
Nalini et al. | Performances of Different Color Representations in Image Retrieval and Classification: A Comparative Analysis | |
He et al. | Image Highlight Elimination Method Based on the Combination of YCbCr Spatial Conversion and Pixel Filling | |
Weeks et al. | Color morphological operators using conditional and reduced ordering | |
JPH01168167A (en) | Noise rejection method in multi-color picture | |
Santillan et al. | Color morphological image segmentation on a new opponent color space based on the notion of critical functions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AU CA IL JP NO SG US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref document number: 2006045330 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10528447 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 167550 Country of ref document: IL |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003753590 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2504567 Country of ref document: CA |
|
WWP | Wipo information: published in national office |
Ref document number: 2003753590 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10528447 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |