US20080279417A1 - Method and system for embedding an image into two other images - Google Patents

Method and system for embedding an image into two other images

Info

Publication number
US20080279417A1
US20080279417A1
Authority
US
United States
Prior art keywords
images
image
color
digital halftoning
halftoning process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/751,885
Inventor
Mikel J. Stanich
Gerhard R. Thompson
Chai Wah Wu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US11/751,885
Publication of US20080279417A1
Legal status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32203Spatial or amplitude domain methods
    • H04N1/32256Spatial or amplitude domain methods in halftone data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/405Halftoning, i.e. converting the picture signal of a continuous-tone original into a corresponding signal showing only two levels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6058Reduction of colour to a range of reproducible colours, e.g. to ink- reproducible colour gamut
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0051Embedding of the watermark in the spatial domain

Definitions

  • the present invention generally relates to a method and system to embed an image. More particularly, the present invention relates to a method and system to embed one or more images into two or more other images.
  • the two halftone images are created using dither masks, while another conventional method creates the two halftone images using an error diffusion technique.
  • the watermark images in these schemes are binary images and any correlation between the binary pixels of the two halftone images depends upon whether corresponding pixels in the watermark are black or white.
  • the watermark images must be binary and cannot be grayscale or color.
  • Because the watermark is embedded in the two images based upon a correlation, the average intensity of a region of pixels is needed to identify whether a watermark pixel is black or white. Therefore, the watermark images that may be extracted using these conventional methods may only be simple graphics. In other words, the watermarks are limited to simple images such as logos and cannot contain detailed features.
  • the watermark images that are extracted using these conventional methods contain residual patterns and features from the two halftone images in which the watermark was embedded. These residual patterns and features reduce the fidelity of the watermark image that is extracted. Therefore, this is another reason why a watermark image cannot have fine details and be successfully processed using a conventional watermarking method.
  • the embedding by these conventional methods is only capable of generating binary halftone images, rather than grayscale or multi-bit images. These binary halftone images also limit the fidelity of the extracted watermark.
  • an image G includes pixels G(i, j) where G(i, j), the (i, j)-th pixel of image G, is a vector in a color space.
  • for grayscale pixels, the color space may be one-dimensional, whereas for RGB pixels, the color space may be three-dimensional.
  • a set of n images G 1 , G 2 , . . . , G n of the same size can be considered as a matrix of n-tuples of pixels. This is denoted as G 1 ×G 2 × . . . ×G n , i.e. the (i, j)-th element of G 1 ×G 2 × . . . ×G n is the n-tuple (G 1 (i, j), . . . , G n (i, j)), where each G k (i, j) lies in the color space of the image G k .
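The Cartesian-product view amounts to stacking same-size images along a new axis so that each (i, j) entry is an n-tuple. A minimal sketch (in NumPy, an assumption; the patent does not prescribe an implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three same-size grayscale images with pixel values in [0, 1]
# (hypothetical data for illustration).
G1, G2, G3 = (rng.random((4, 4)) for _ in range(3))

# G1 x G2 x G3: the (i, j)-th element is the 3-tuple
# (G1(i, j), G2(i, j), G3(i, j)).
composite = np.stack([G1, G2, G3], axis=-1)
```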
  • the n images plus the m extracted images can be considered as describing how a single image is perceived under different viewing conditions.
  • Yet another conventional watermarking scheme constructs two binary halftone images A and B, which when overlaid on top of each other reveals a watermark image C.
  • the pixels that are not correlated are darker than the correlated pixels only when averaged over a region of pixels. Therefore, the watermark cannot contain fine details. Furthermore, the pixels in the watermark image still reveal residual features of images A and B when overlaid.
  • an exemplary feature of the present invention is to provide a method and structure in which an image is embedded into two other images.
  • a method of embedding an image into two images includes performing a digital halftoning process on the Cartesian product of color spaces to embed the image into the two images.
  • a method of extracting an image from two images includes extracting the image from the two images using a binary operation on each pair of pixels from the two images.
  • a method of embedding a color image into two color images includes decomposing the color images into separate images in their color planes, and for each color plane, performing a digital halftoning process on the Cartesian product of pixel value spaces to embed the image into the two images, and combining the halftone images of the color planes into a single color image.
  • a method of embedding a multi-bit image into two multi-bit images includes performing a digital halftoning process on the Cartesian product of color spaces to embed the multi-bit image into the two multi-bit images.
  • a signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processor.
  • the program includes instructions for performing a digital halftoning process on the Cartesian product of color spaces to embed the image into the two images.
  • a system for embedding an image into two images includes means for providing an image to be embedded into two images and means for performing a digital halftoning process on the Cartesian product of color spaces to embed the image into the two images.
  • a system for embedding an image into two images includes an image input device, and a digital halftoning device that performs a digital halftoning process on a Cartesian product of color spaces to embed the image received by the image input device into the two images.
  • An exemplary embodiment of the present invention embeds a grayscale image into two images that have a substantially similar level of detail as the image being embedded.
  • an exemplary embodiment of the invention ensures that there is no residual image on the extracted watermark, thereby enabling extraction of a very high fidelity watermark.
  • FIGS. 1A-1C illustrate the results of an exemplary embedding method of one image into two other images in accordance with the present invention
  • FIGS. 2A-2C illustrate the results of another exemplary embedding method of another image into two other images in accordance with the present invention
  • FIGS. 3A-3C illustrate the results of yet another exemplary embedding method of another image into two other images in accordance with the present invention
  • FIGS. 4A-4C illustrate the results of a further exemplary embedding method of another image into two other images in accordance with the present invention
  • FIGS. 5A-5E illustrate the results of a further exemplary embedding method of two images into three other images in accordance with the present invention
  • FIG. 6 illustrates a flowchart of an exemplary control routine in accordance with the present invention
  • FIG. 7 illustrates an exemplary image embedding system 700 for incorporating the present invention therein;
  • FIG. 8 illustrates an exemplary signal bearing medium 800 (e.g., storage medium) for storing steps of a program of a method according to the present invention.
  • FIG. 9 illustrates another exemplary image embedding system 900 for incorporating the present invention.
  • In FIGS. 1-9 there are shown exemplary embodiments of the method and structures of the present invention.
  • an exemplary embodiment of the present invention embeds and/or extracts an image (e.g., a watermark) using at least two other images.
  • An exemplary embodiment of the present invention also differs from the conventional art of using conventional halftoning to embed a watermark into two images, as referenced in U.S. Pat. Nos. 5,734,753 and 5,790,703 and in M. Fu et al., “Data hiding in halftone image by stochastic error diffusion,” Proc. IEEE Int. Conf. ASSP, pp. 1965-1968, 2001; M. Fu et al., “A novel method to embed watermark in different halftone images: data hiding by conjugate error diffusion (DHCED),” Proc. IEEE Int. Conf. ASSP, pp. III-529-532, 2003; and M. Fu et al., “A novel self-conjugate halftone image watermarking technique,” Proc.
  • an exemplary embodiment of the present invention utilizes a halftoning algorithm on the Cartesian product of pixel spaces, i.e., halftoning is done on n-tuples of pixels. In this manner, an exemplary embodiment of the present invention overcomes the shortcomings of the conventional methods described above.
  • From three grayscale images A 1 ′, A 2 ′, and A 3 ′, two halftone images A 1 and A 2 may be constructed.
  • the image A 3 ′ represents a watermark which is to be embedded into the two images A 1 ′, A 2 ′.
  • the binary operator is OR in the case of overlaying images, but in the present invention it can be any (not necessarily symmetric) binary operator such as Exclusive-OR, AND, etc., without limitation.
  • a binary operation or operator means that the operation has two input arguments. Therefore, an exemplary embodiment of the present invention is not limited to any specific type of binary operation.
  • An aspect of an exemplary embodiment of the present invention is to construct A 1 and A 2 such that they resemble A 1 ′ and A 2 ′ respectively, and such that A 3 resembles A 3 ′.
  • An exemplary embodiment of the present invention constructs images A 1 and A 2 via digital half-toning on a Cartesian product of color spaces.
  • the output halftone images A 1 and A 2 are binary, and, thus, the possible values for each pixel pair (A 1 (i, j), A 2 (i, j)) are (0, 0), (1, 0), (0, 1), and (1, 1).
  • where A 3 (i, j)=A 1 (i, j)∘A 2 (i, j) and ∘ denotes the binary operator, a possible vector in the space of triples of pixels for A 1 ×A 2 ×A 3 is one of the following: (0, 0, 0∘0), (0, 1, 0∘1), (1, 0, 1∘0), and (1, 1, 1∘1), which will be called “output vectors.”
  • An exemplary digital half-toning algorithm may operate in a space of triples of pixels to halftone the images (A 1 ′×A 2 ′×A 3 ′) using these 4 output vectors, and the resulting output halftone image (A 1 ×A 2 ×A 3 ) is such that A 1 ≈A 1 ′, A 2 ≈A 2 ′ and A 3 ≈A 3 ′, where A 1 ≈A 1 ′ indicates that A 1 looks like A 1 ′ when viewed at a distance.
  • the desired output images are A 1 and A 2 which, by construction, combine via an exemplary embodiment of the inventive watermark extraction operation to produce A 3 .
  • the selection of the digital halftoning method may be determined by the desired tradeoff between processing speed and the quality of the halftone generated.
  • One exemplary digital halftoning method is the vector error diffusion (VED) method.
  • Another exemplary method is the modified error diffusion method described in R. Adler et al., “Error bounds for error diffusion and other mathematical problems arising in digital halftoning,” Proc. of SPIE, vol. 3963, pp. 437-443, 2000 and in R. Adler et al., “Error bounds for error diffusion and related digital halftoning algorithms,” Proc. IEEE Int. Symp. Circ. Syst., vol. 11, pp. 513-516, 2001, which alleviates some of the problems that may be due to large errors in the vector error diffusion method.
  • an exemplary embodiment of the present invention uses an iterative isotropic halftoning method in order to embed the images.
  • This halftoning method is described in C. W. Wu, “Multimedia Data Hiding and Authentication via Halftoning and Coordinate Projection”, EURASIP Journal on Applied Signal Processing, vol. 2002, no. 2, pp. 143-151, 2002, where it was used to embed a single image inside another image.
  • the present invention embeds one or more images into two or more images.
  • A 1 ′, A 2 ′ and A 3 ′ are the input images
  • P is a set of output vectors
  • Output: set (A 1 ,A 2 ,A 3 ) Outimage which resembles (A 1 ′, A 2 ′, A 3 ′).
  • the two halftone images are A 1 and A 2 .
  • L is a linear space-invariant model of the human vision system.
  • the inputs are three images (A 1 ′, A 2 ′, A 3 ′) and the outputs are two halftone images A 1 and A 2
  • the constant v k determines how strongly the error in each corresponding image is minimized.
  • the algorithm loops through each pixel of A 1 ′×A 2 ′×A 3 ′ and selects the output vector from P that, when put in the corresponding position in Outimage, minimizes the error measure “Error”. The pixel in Outimage at that position is then set as this output vector.
  • the error measure “Error” may be calculated as follows: for each of the images, the norm of the low pass filtered version of the difference between A k ′ and Outimage k is calculated and the weighted sum of these norms form the error measure. Thus, minimizing this error measure allows the low pass filtered version of the components of Outimage to resemble each A k ′.
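The per-pixel greedy loop and the weighted error measure can be sketched as follows. This is a deliberately naive illustration, not the patent's implementation: a small box filter stands in for the HVS low-pass operator L, the error is recomputed over the whole image at every step, and all parameter values are assumptions.

```python
import numpy as np
from itertools import product

def lowpass(img, k=3):
    """Box filter standing in for the linear HVS operator L (an assumption;
    the patent uses a normalized 5x5 HVS filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def embed(A1p, A2p, A3p, op=lambda a, b: a | b,
          weights=(1.0, 1.0, 1.0), sweeps=2):
    """Greedy sketch of halftoning on pixel triples: at each position, try
    every output vector and keep the one minimizing
    sum_k v_k * ||L(Outimage_k - A'_k)||^2.  The full-image error
    recomputation here is naive; the incremental update discussed in the
    text is omitted for clarity."""
    P = [(a1, a2, op(a1, a2)) for a1, a2 in product((0, 1), repeat=2)]
    targets = (A1p, A2p, A3p)
    out = [np.zeros(A1p.shape, dtype=int) for _ in range(3)]

    def error():
        return sum(v * np.sum(lowpass(o - a) ** 2)
                   for v, o, a in zip(weights, out, targets))

    for _ in range(sweeps):
        for i in range(A1p.shape[0]):
            for j in range(A1p.shape[1]):
                best, best_err = P[0], np.inf
                for o in P:
                    for k in range(3):
                        out[k][i, j] = o[k]
                    e = error()
                    if e < best_err:
                        best, best_err = o, e
                for k in range(3):
                    out[k][i, j] = best[k]
    return out  # [A1, A2, A3]; A3 == op(A1, A2) by construction

A1, A2, A3 = embed(np.full((6, 6), 0.5), np.full((6, 6), 0.5),
                   np.full((6, 6), 0.8), sweeps=1)
```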
  • the low pass filter (expressed as the linear operator L) may be taken from a HVS filter.
  • the 5 by 5 filter shown below may be used.
  • the filter is given in K. R. Crounse, “Image halftoning with cellular neural networks”, IEEE Trans. Circ. Syst.-II, vol. 40, no. 4, pp. 267-283, 1993, normalized to have the sum of the coefficients equal to 1.
  • the filter L may be chosen to be the same for each of the images A 1 ′, A 2 ′ and A 3 ′. In some applications, for example, when the original images and the watermark images are intended to be viewed at different distances, the filter L can be different for each image.
  • the binary operator may be applied to each pair of pixels (A 1 (i,j), A 2 (i,j)) resulting in the extracted watermark image W.
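A sketch of the extraction step, using illustrative array values:

```python
import numpy as np

# Hypothetical 2x2 binary halftone images A1 and A2 produced by the
# embedding step.
A1 = np.array([[0, 1], [1, 0]])
A2 = np.array([[1, 1], [0, 0]])

# Extraction applies the chosen binary operator to each pixel pair.
W_or = A1 | A2    # overlay-style extraction (OR)
W_xor = A1 ^ A2   # Exclusive-OR extraction

print(W_or)   # [[1 1] [1 0]]
print(W_xor)  # [[1 0] [1 0]]
```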
  • the computation of the variable “Error” in the iterative isotropic halftoning method may be sped up by updating the change in the variable “Error” due to changing a single pixel of “Outimage.”
  • changing a single pixel of “Outimage” only changes a small number (on the order of the size of the support of L) of entries of L(Outimage k −A k ′).
  • This technique has been used in dither mask construction when the output is binary (see e.g. C. W. Wu, G. Thompson and M. Stanich).
  • the output image “Outimage” may be initialized with an image having pixels having random values from a set of output vectors or from a uniform image of a single output vector. To reduce the number of iterations, “Outimage” may also be initialized by performing vector (or modified) error diffusion and using the output from the vector (or modified) error diffusion as the initial “Outimage.”
  • the error in the digital halftoning algorithm should be small or at least bounded for arbitrarily large images.
  • the pixels of the input image (A 1 ′×A 2 ′×A 3 ′) should be within a convex hull of the output vectors.
  • This analysis is also applicable to general halftoning algorithms: the low pass behavior of the human vision system can be viewed as an averaging behavior, so the original image is approximated by a convex sum of output pixels.
  • an exemplary embodiment of this invention uses scaling or gamut mapping of the images.
  • M(p)=(s 1 p 1 +d 1 , s 2 p 2 +d 2 , s 3 p 3 +d 3 ), where the s i are real numbers denoting scaling factors and the d i are offset vectors in the color space.
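A sketch of this affine gamut mapping (the parameter values are illustrative, not the solution of the optimization problem described below):

```python
import numpy as np

def gamut_map(pixels, s, d):
    """Apply M(p) = (s1*p1 + d1, s2*p2 + d2, s3*p3 + d3) to an array of
    pixel triples with shape (..., 3); s and d are length-3 sequences."""
    return pixels * np.asarray(s, dtype=float) + np.asarray(d, dtype=float)

# Illustrative (assumed) parameters compressing each component's range
# from [0, 1] into [0.1, 0.9]:
p = np.array([[0.0, 0.5, 1.0]])
mapped = gamut_map(p, s=(0.8, 0.8, 0.8), d=(0.1, 0.1, 0.1))
```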
  • a commonly used algorithm for finding the convex hull of a set of points is the Qhull algorithm described in C. B. Barber et al., “The quickhull algorithm for convex hulls,” ACM Trans. Math. Software, vol. 22, no. 4, pp. 469-483, 1996.
  • An optimization problem used to find the (optimal) gamut mapping may be formulated as follows:
  • the set of parameters {s i , d i } which solves the above optimization problem will be used as the gamut mapping M, i.e. every pixel of the input composite image A 1 ′×A 2 ′×A 3 ′ is scaled by M before it is halftoned by the exemplary halftoning algorithm in accordance with the present invention.
  • Equation (2) determines, relatively, the “penalty” of scaling each image.
  • for example, the weight λ 3 for the watermark image may be smaller than λ 1 and λ 2 .
  • Σ i=1 3 Λ i (s i p i +d i )≦b  (4)
  • where A=[Λ 1 Λ 2 Λ 3 ] is decomposed as a concatenation of the matrices Λ 1 , Λ 2 and Λ 3 , and A, b are the matrices describing the convex hull H={x: Ax≦b}.
  • Let V i be the set of extreme points of the convex hull of S i .
  • the convex hull of S′ is then larger than the convex hull of S, i.e. the feasible region {(s i ,d i ): M(S′)⊆H} is smaller than {(s i ,d i ): M(S)⊆H}.
  • the gamut mapping may be computed using the pixel values of the specific images A 1 ′, A 2 ′ and A 3 ′.
  • This gamut mapping in general is not appropriate for another set of images.
  • by replacing S 1 , S 2 , and S 3 with the extreme points (which by abuse of notation are denoted herein as V 1 , V 2 , and V 3 , respectively) of the possible gamut (i.e. the possible range of the pixel values) of the images A 1 ′, A 2 ′ and A 3 ′, respectively, a gamut mapping obtained using the resulting constraint M(S′)⊆H can be used for any possible images A 1 ′, A 2 ′ and A 3 ′.
  • the pixel in the image A 2 ′ can take on values between 0 and 1.
  • the resulting gamut mapping can be used with images A 1 ′ and A 3 ′ and any image A 2 ′ whose pixels lie in the interval [0, 1].
  • in one example, the OR operation is used for extracting the watermark.
  • gamut mappings that cause more distortion to the original images are obtained, but they are applicable to a larger set of images.
  • the error calculation in the digital halftoning algorithm can also be weighted according to the desired fidelity. For instance, if a low fidelity watermark is adequate, the error calculation can be skewed to put more emphasis on reducing the error in the two main images. This is reflected in the parameters v i in the exemplary pseudo code shown above. In this case v 3 would be smaller than v 1 and v 2 .
  • the condition that the input pixels lie in the convex hull of the output vectors may be violated mildly without much loss in the quality of the halftone images. This allows the gamut mapping to distort the image less.
  • FIGS. 1A , 1 B, and 1 C illustrate the results of an exemplary embedding process in accordance with the present invention of embedding an image entitled “Lena” shown in FIG. 1C into two other images (i.e. those shown in FIGS. 1A and 1B ).
  • the first image shown in FIG. 1A is entitled “Baboon” and the second image shown in FIG. 1B is entitled “Boat.”
  • FIG. 1C is the extracted watermark image obtained by performing the OR operation on the pixels of the images of FIGS. 1A and 1B .
  • FIGS. 2A , 2 B, and 2 C illustrate the results of an embedding process in an exemplary embodiment in accordance with the present invention as shown in FIGS. 1A , 1 B, and 1 C, except that the “Lena” image shown in FIG. 2C is embedded into two images that are uniform gray images ( FIGS. 2A and 2B ).
  • FIGS. 3A , 3 B, and 3 C illustrate the results of yet another exemplary embedding of the “Lena” image of FIG. 3C into the “Baboon” image of FIG. 3A and the “Baboon” image of FIG. 3B where the binary operation is the Exclusive-OR operation in accordance with the present invention.
  • the two “Baboon” images ( FIGS. 3A and 3B ) appear to be the same, but the halftone dots are arranged differently so that when an Exclusive-OR is applied, the watermark image (“Lena”) of FIG. 3C emerges.
  • the extracted watermark images in these examples are of high fidelity. This is in contrast with the conventional methods, where a residual of the two halftone images is evident in the extracted watermark image.
  • two images may be generated from reoriented versions of the same image.
  • the same image may be rotated 180 degrees to create a second image.
  • These two images then form the images A 1 ′, and A 2 ′.
  • the watermark image A 3 ′ is then embedded into these images to create the halftone images A 1 , A 2 where A 2 is A 1 rotated 180 degrees.
  • the idea is that the watermark can be extracted from a single image A 1 and its reoriented version A 2 .
  • the image A 1 contains all the information needed to extract the watermark.
  • the pseudo code described above may be modified as follows: For an R-pixels by C-pixels image, when computing “Error(o)”, in addition to replacing Outimage k (i, j) with o k , Outimage 1 (R−i+1,C−j+1) may be replaced with o 2 , Outimage 2 (R−i+1,C−j+1) may be replaced with o 1 , and Outimage 3 (R−i+1,C−j+1) may be replaced with o 2 ∘o 1 . Note that the (R−i+1,C−j+1)-th pixel is the (i,j)-th pixel rotated 180 degrees.
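The index bookkeeping can be checked directly; with 0-indexed arrays the patent's 1-indexed (R−i+1, C−j+1) becomes (R−i, C−j):

```python
import numpy as np

R, C = 4, 6
A = np.arange(R * C).reshape(R, C)  # illustrative R-by-C image
rot = A[::-1, ::-1]                 # A rotated by 180 degrees

# The (i, j)-th pixel of A (1-indexed) equals the (R-i+1, C-j+1)-th
# pixel of the rotated image:
for i in range(1, R + 1):
    for j in range(1, C + 1):
        assert A[i - 1, j - 1] == rot[R - i, C - j]
```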
  • the watermark image A 3 ′ must be such that it is invariant under 180 degree rotation.
  • an exemplary embodiment of the present invention may be used for more complicated applications of image watermarking, data hiding, and viewing images under different viewing conditions.
  • For an image A that is represented as a matrix of pixels, the (k, l)-th pixel of A may be denoted as A(k, l).
  • Each pixel may be a vector in a color space T (which may vary from image to image), i.e. A(k, l)∈T.
  • Consider a transform φ mapping the n images (A 1 , . . . , A n ) to m images (B 1 , . . . , B m ), with B j =φ j (A 1 , . . . , A n ), i.e. the B j are images created from the set of images A 1 , . . . , A n .
  • an exemplary embodiment of the present invention may create n images (A 1 , . . . , A n ) that resemble A′ 1 , . . . , A′ n such that the m images B 1 , . . . , B m that are extracted from (A 1 , . . . , A n ) using the transform φ resemble B′ 1 , . . . , B′ m .
  • This exemplary embodiment uses a digital halftoning algorithm.
  • Digital halftoning can be described in the following general form: given an image I, a digital halftoning process creates a halftone image I′ such that each pixel of I′ is a member of a restricted set of output pixels O and, furthermore, I′ resembles I under some metric d, i.e. d(I, I′) is small.
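A toy instance of this definition (nearest-member quantization; real halftoning methods such as error diffusion also shape where the quantization error goes, which this sketch ignores):

```python
import numpy as np

def halftone_nearest(I, O=(0.0, 1.0)):
    """Map each pixel of I to the nearest member of the restricted output
    set O, illustrating only the 'restricted output set' requirement."""
    O = np.asarray(O, dtype=float)
    return O[np.argmin(np.abs(I[..., None] - O), axis=-1)]

I = np.array([[0.2, 0.7], [0.9, 0.4]])
binary = halftone_nearest(I)                    # every pixel 0.0 or 1.0
trilevel = halftone_nearest(I, O=(0.0, 0.5, 1.0))  # three-level output
```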
  • This exemplary embodiment of the present invention specifies the set of output pixels in order to use a digital halftoning algorithm.
  • the pixels of A i are chosen from the set O i .
  • Let R⊆O 1 × . . . ×O n ={(p 1 , . . . , p n ): p i ∈O i } be the possible output vectors for the combined image A 1 × . . . ×A n .
  • the subset R can be a strictly smaller subset of O 1 × . . . ×O n in order to express additional relationships between pixels in different images.
  • From R, the set of extended output vectors P={(p, φ 1 (p), . . . , φ m (p)): p∈R} is formed, and (A′ 1 × . . . ×A′ n ×B′ 1 × . . . ×B′ m ) may be halftoned using P as the set of possible output vectors.
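Constructing P from R and the transforms φ j can be sketched as:

```python
from itertools import product

def extended_output_vectors(O_sets, phis, R=None):
    """P = {(p, phi_1(p), ..., phi_m(p)) : p in R}, where R is a subset of
    O_1 x ... x O_n (defaults to the full product)."""
    if R is None:
        R = list(product(*O_sets))
    return [tuple(p) + tuple(phi(p) for phi in phis) for p in R]

# n = 2 binary images, m = 1 extracted image via OR overlay:
P = extended_output_vectors([(0, 1), (0, 1)],
                            [lambda p: p[0] | p[1]])
print(P)  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
```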
  • A 1 ′, . . . , A n ′, B 1 ′, . . . , B m ′ are the input images
  • P is a set of output vectors
  • Output: set (A 1 , . . . , A n , B 1 , . . . , B m ) Outimage which resembles (A 1 ′, . . . , A n ′, B 1 ′, . . . , B m ′).
  • the output halftone images are A 1 , . . . , A n ;
  • ⁇ k A and ⁇ k B determines how strongly the error in each image is minimized.
  • the pixels of (A′ 1 × . . . ×A′ n ×B′ 1 × . . . ×B′ m ) are first scaled via the gamut mapping M.
  • the gamut mapping M is calculated by solving the optimization problem
  • alternatively, the objective function in Equation (5) may be replaced by a different objective function.
  • an exemplary method of the invention may be generalized with reference to the flowchart shown in FIG. 6 .
  • the flowchart starts at step 600 and continues to step 602 where the control routine inputs images.
  • the control routine may input images A 1 ′, . . . , A n ′, and B 1 ′, . . . , B m ′ at step 602 .
  • control routine may then continue to step 604 , where gamut mapping, as described above, is applied to the input images.
  • the control routine may then continue to step 606 where the digital halftoning method described above is applied to the gamut mapped images.
  • At step 608 , the control routine ends and may, optionally, return to the process that called the control routine of FIG. 6 .
  • FIGS. 5A , 5 B, and 5 C show halftone images of a “Baboon,” a “Boat,” and a “Boat on a lake,” respectively.
  • the “Peppers” image shown in FIG. 5D is obtained by overlaying the images of FIG. 5A and FIG. 5B .
  • the “Lena” image shown in FIG. 5E is obtained by overlaying the images of FIG. 5A and FIG. 5C .
  • Yet another exemplary embodiment of the present invention may also provide a multi-bit output.
  • gray levels of two images are simply added (with clamping) to produce the gray level of an overlaid image.
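A sketch of the clamped addition for multi-bit halftones (the function name and the choice of 4 gray levels are illustrative):

```python
import numpy as np

def overlay_multibit(a1, a2, levels=4):
    """Clamped addition of gray levels: the overlaid pixel is the sum of
    the two gray levels, clamped to the maximum representable level."""
    return np.minimum(a1 + a2, levels - 1)

a1 = np.array([0, 1, 2, 3])
a2 = np.array([3, 1, 2, 1])
print(overlay_multibit(a1, a2))  # [3 2 3 3] with 4 gray levels
```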
  • the number of output vectors is large when the number q of gray levels is large.
  • the innermost loop of the exemplary pseudo code described earlier that searches for the output vector that minimizes “Error” may take many computations, since o min =arg min o∈P Error(o) is the output vector that gives the minimal value of “Error” among all choices of the output vectors at location (i, j).
  • This exemplary embodiment may also be used for color images.
  • each pixel in a color image lies in a multi-dimensional color space such as CIELab, RGB, CMYK, or the like, and digital halftoning may be performed in the Cartesian product of these color spaces.
  • the images may also be decomposed into their color planes and each plane may be processed independently as a grayscale image and the results may then be combined afterwards. This alternative approach appears to work well in the RGB color space.
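A sketch of the per-plane alternative (the helper `embed_gray` is hypothetical, standing in for any grayscale embedding routine; plain rounding is used here only so the example runs):

```python
import numpy as np

def embed_color(images, embed_gray):
    """Decompose same-size color images into their (assumed RGB) planes,
    run a grayscale embedding routine per plane, and recombine.
    `embed_gray` maps a list of grayscale planes to a list of same-size
    output planes."""
    planes_out = [embed_gray([img[..., c] for img in images])
                  for c in range(3)]
    # planes_out[c][k] is color plane c of output image k.
    return [np.stack([planes_out[c][k] for c in range(3)], axis=-1)
            for k in range(len(images))]

rng = np.random.default_rng(1)
imgs = [rng.random((2, 2, 3)) for _ in range(2)]
out = embed_color(imgs,
                  embed_gray=lambda planes: [np.round(p) for p in planes])
```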
  • an exemplary embodiment of the present invention may be used to watermark audio data and/or signals.
  • FIG. 7 illustrates a typical hardware configuration of an information handling/computer system for use with the invention and which preferably has at least one processor or central processing unit (CPU) 711 .
  • the CPUs 711 are interconnected via a system bus 712 to a random access memory (RAM) 714 , read-only memory (ROM) 716 , input/output (I/O) adapter 718 (for connecting peripheral devices such as disk units 721 and tape drives 740 to the bus 712 ), user interface adapter 722 (for connecting a keyboard 724 , mouse 726 , speaker 728 , microphone 732 , and/or other user interface device to the bus 712 ), a communication adapter 734 for connecting an information handling system to a data processing network, the Internet, an Intranet, a personal area network (PAN), etc., and a display adapter 736 for connecting the bus 712 to a display device 738 and/or printer 740 .
  • a different aspect of the invention includes a computer-implemented method for performing the above method. As an example, this method may be implemented in the particular environment discussed above.
  • Such a method may be implemented, for example, by operating a computer, as embodied by a digital data processing apparatus, to execute a sequence of machine-readable instructions. These instructions may reside in various types of signal-bearing media.
  • These signal-bearing media may include, for example, a RAM contained within the CPU 711 , as represented by fast-access storage.
  • the instructions may be contained in another signal-bearing media, such as a magnetic data storage diskette 800 ( FIG. 8 ), directly or indirectly accessible by the CPU 711 .
  • the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional “hard drive” or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), an optical storage device (e.g. CD-ROM, WORM, DVD, digital optical tape, etc.), paper “punch” cards, or other suitable signal-bearing media, including transmission media such as digital and analog communication links and wireless links.
  • the machine-readable instructions may comprise software object code.
  • FIG. 9 illustrates yet another exemplary embodiment of a system 900 for embedding an image into two other images in accordance with the present invention.
  • the system 900 includes an image input device 902 , a gamut mapping device 904 and a digital halftoning device 906 .
  • the image input device 902 inputs the image to be embedded as well as the two other images in which the one image will be embedded.
  • the gamut mapping device 904 receives the images input by the image input device 902 and performs the gamut mapping process described above on the images.
  • the digital halftoning device 906 performs a digital halftoning process on a Cartesian product of color spaces to embed the image into the two images.

Abstract

A method and system embeds an image into two images by performing a digital halftoning process on a Cartesian product of color spaces.

Description

    RELATED APPLICATIONS
  • The present application is a Continuation application of U.S. patent application Ser. No. 10/758,536, filed on Jan. 16, 2004.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention generally relates to a method and system to embed an image. More particularly, the present invention relates to a method and system to embed one or more images into two or more other images.
  • 2. Description of the Related Art
  • Recently, several watermarking schemes have been proposed where a watermark is embedded in two halftone images such that the watermark can be extracted by simply overlaying these halftone images.
  • In one conventional method, the two halftone images are created using dither masks, while another conventional method creates the two halftone images using an error diffusion technique. The watermark images in these schemes are binary images and any correlation between the binary pixels of the two halftone images depends upon whether corresponding pixels in the watermark are black or white.
  • There are several limitations to these conventional methods of extracting watermarks. First, the watermark images must be binary and cannot be grayscale or color.
  • Second, because the watermark is embedded in the two images based upon a correlation, the average intensity of a region of pixels is needed to identify whether the watermark pixel is black or white. Therefore, the watermark images that may be extracted using these conventional methods may only be simple graphics. In other words, the watermarks are limited to simple images such as logos or simple graphics and cannot contain detailed features.
  • Third, the watermark images that are extracted using these conventional methods contain residual patterns and features from the two halftone images in which the watermark was embedded. These residual patterns and features reduce the fidelity of the watermark image that is extracted. Therefore, this is another reason why a watermark image cannot have fine details and be successfully processed using a conventional watermarking method.
  • Fourth, the embedding by these conventional methods is only capable of generating binary halftone images, rather than grayscale or multi-bit images. These binary halftone images also limit the fidelity of the extracted watermark.
  • Consider an image as a matrix of pixels, i.e. an image G includes pixels G(i, j) where G(i, j), the (i, j)-th pixel of image G, is a vector in a color space. For grayscale pixels, the color space may be one-dimensional whereas for RGB pixels, the color space may be 3-dimensional. A set of n images G1, G2, . . . , Gn of the same size can be considered as a matrix of n-tuples of pixels. This is denoted as G1×G2× . . . ×Gn, i.e. the (i, j)-th element of G1×G2× . . . ×Gn is the n-tuple (G1(i, j), . . . , Gn(i, j)). Equivalently, a set of n images can be considered as a single image whose color space is the Cartesian product of the n color spaces of the images G1, G2, . . . , Gn.
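As a concrete illustration of this n-tuple view, the sketch below (values hypothetical, numpy assumed) stacks two same-size grayscale images into a single array whose entries are 2-tuples of pixels, i.e. a single image over the Cartesian product of the two color spaces:

```python
import numpy as np

# Two small same-size grayscale "images" G1, G2 (hypothetical values in [0, 1]).
G1 = np.array([[0.0, 0.5], [1.0, 0.25]])
G2 = np.array([[1.0, 0.0], [0.5, 0.75]])

# G1 x G2: one array whose (i, j)-th entry is the 2-tuple (G1(i, j), G2(i, j)).
product = np.stack([G1, G2], axis=-1)    # shape (rows, cols, n)

assert product.shape == (2, 2, 2)
assert tuple(product[0, 1]) == (0.5, 0.0)   # the (0, 1)-th 2-tuple
```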
  • Consider an image transform T which acts on an n-tuple of pixels and produces an m-tuple of pixels, i.e. T(p1, . . . , pn)=(q1, . . . , qm), where the pi and qi are pixels. By applying the transform T to each of the n-tuples of pixels in G1×G2× . . . ×Gn, this transform acts on a set of n images and produces m images. The m auxiliary images may be watermark images that are extracted by the watermark extraction transform T.
  • Another aspect to consider is how images are perceived under different viewing conditions. In other words, the n images plus the m extracted images can be considered as how a single image is perceived under different viewing conditions. Such an interpretation for the case n=1 can be found in C. W. Wu et al, “Multiple images viewable on twisted-nematic mode liquid-crystal displays,” IEEE Signal Processing Letters, vol. 10, no. 8, pp. 225-227, 2003.
  • If a goal is to produce a set of n images, such that these images plus the additional m images generated by the transform T match another set of n+m predefined images, perfect matching is not possible because there are more sets of n+m images than there are sets of n images.
  • Instead of perfect matching, conventional watermarking methods utilizing halftoning take advantage of the fact that the images only need to look similar when viewed at an appropriate distance. Because of the “low pass behavior” of the human vision system, only low-pass versions of the images need to match. In this manner, the human visual system reduces the amount of information in images, which allows the use of a digital half-toning algorithm to provide a solution to the problem.
  • Yet another conventional watermarking scheme constructs two binary halftone images A and B which, when overlaid on top of each other, reveal a watermark image C. Assuming that each pixel of image A and image B is either 0 (denoting a white pixel) or 1 (denoting a black pixel), the overlay operation can be expressed as C(i, j)=A(i, j) ∘ B(i, j), where ∘ is the OR operation.
  • These conventional methods embed the watermark image based on a correlation of the pixels between the two images, and the ability to extract a watermark is based upon whether corresponding pixels in each of the two images vary together. For example, for a watermark image W that is a binary image, when W(i, j)=0 the corresponding pixels in A and B are correlated, and when W(i, j)≠0 the corresponding pixels in A and B are not correlated. This implies that when W(i, j)=1 the overlaid pixels C(i, j) will be darker on average than when W(i, j)=0 and, thus, C will reveal the watermark image W. However, as explained above, the pixels that are not correlated are darker than the correlated pixels only when averaged over a region of pixels. Therefore, the watermark cannot contain fine details. Furthermore, the pixels in the watermark image still reveal residual features of images A and B when overlaid.
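The averaging argument above can be checked numerically. The following sketch (not from the patent; sizes and seed are arbitrary) compares the mean darkness of overlaid correlated versus uncorrelated binary regions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Region where W(i, j) = 0: pixels of A and B are correlated (here, identical).
A_corr = rng.integers(0, 2, size=10000)
B_corr = A_corr.copy()

# Region where W(i, j) = 1: pixels of A and B are independent (uncorrelated).
A_unc = rng.integers(0, 2, size=10000)
B_unc = rng.integers(0, 2, size=10000)

# Overlay = OR; 1 denotes a black pixel.
dark_corr = np.mean(A_corr | B_corr)   # about 0.5
dark_unc = np.mean(A_unc | B_unc)      # about 0.75

# Uncorrelated regions are darker only on average over many pixels.
assert dark_unc > dark_corr
```

Because the darkness difference exists only on average over a region, a correlation-based watermark cannot carry pixel-level detail.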
  • SUMMARY OF THE INVENTION
  • In view of the foregoing and other exemplary problems, drawbacks, and disadvantages of the conventional methods and structures, an exemplary feature of the present invention is to provide a method and structure in which an image is embedded into two other images.
  • In a first exemplary aspect of the present invention, a method of embedding an image into two images includes performing a digital halftoning process on the Cartesian product of color spaces to embed the image into the two images.
  • In a second exemplary aspect of the present invention, a method of extracting an image from two images includes extracting the image from the two images using a binary operation on each pair of pixels from the two images.
  • In a third exemplary aspect of the present invention, a method of embedding a color image into two color images includes decomposing the color images into separate images in their color planes, and for each color plane, performing a digital halftoning process on the Cartesian product of pixel value spaces to embed the image into the two images, and combining the halftone images of the color planes into a single color image.
  • In a fourth exemplary aspect of the present invention, a method of embedding a multi-bit image into two multi-bit images includes performing a digital halftoning process on the Cartesian product of color spaces to embed the multi-bit image into the two multi-bit images.
  • In a fifth exemplary aspect of the present invention, a signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processor. The program includes instructions for performing a digital halftoning process on the Cartesian product of color spaces to embed the image into the two images.
  • In a sixth exemplary aspect of the present invention, a system for embedding an image into two images includes means for providing an image to be embedded into two images and means for performing a digital halftoning process on the Cartesian product of color spaces to embed the image into the two images.
  • In a seventh exemplary aspect of the present invention a system for embedding an image into two images includes an image input device, and a digital halftoning device that performs a digital halftoning process on a Cartesian product of color spaces to embed the image received by the image input device into the two images.
  • An exemplary embodiment of the present invention embeds a grayscale image into two images that have a substantially similar level of detail as the image being embedded.
  • Additionally, an exemplary embodiment of the invention ensures that there is no residual image on the extracted watermark, thereby enabling extraction of a very high fidelity watermark.
  • These and many other advantages may be achieved with the present invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other exemplary purposes, aspects and advantages will be better understood from the following detailed description of an exemplary embodiment of the invention with reference to the drawings, in which:
  • FIGS. 1A-1C illustrate the results of an exemplary embedding method of one image into two other images in accordance with the present invention;
  • FIGS. 2A-2C illustrate the results of another exemplary embedding method of another image into two other images in accordance with the present invention;
  • FIGS. 3A-3C illustrate the results of yet another exemplary embedding method of another image into two other images in accordance with the present invention;
  • FIGS. 4A-4C illustrate the results of a further exemplary embedding method of another image into two other images in accordance with the present invention;
  • FIGS. 5A-5E illustrate the results of a further exemplary embedding method of two images into three other images in accordance with the present invention;
  • FIG. 6 illustrates a flowchart of an exemplary control routine in accordance with the present invention;
  • FIG. 7 illustrates an exemplary image embedding system 700 for incorporating the present invention therein;
  • FIG. 8 illustrates an exemplary signal bearing medium 800 (e.g., storage medium) for storing steps of a program of a method according to the present invention; and
  • FIG. 9 illustrates another exemplary image embedding system 900 for incorporating the present invention.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
  • Referring now to the drawings, and more particularly to FIGS. 1-9, there are shown exemplary embodiments of the method and structures of the present invention.
  • In contrast to conventional methods as referenced in the above referenced C. W. Wu et al paper, an exemplary embodiment of the present invention embeds and/or extracts an image (e.g., a watermark) using at least two other images.
  • An exemplary embodiment of the present invention also differs from the conventional art of using conventional halftoning to embed a watermark into two images, as referenced in U.S. Pat. Nos. 5,734,753 and 5,790,703 and in M. Fu et al, “Data hiding in halftone image by stochastic error diffusion,” Proc. IEEE Int. Conf. ASSP, pp. 1965-1968, 2001, M. Fu et al, “A novel method to embed watermark in different halftone images: data hiding by conjugate error diffusion (dhced),” Proc. IEEE Int. Conf. ASSP, pp. III-529-532, 2003, and M. Fu et al, “A novel self-conjugate halftone image watermarking technique,” Proc. IEEE Int. Symp. Circ. Syst., pp. III-790-793, 2003, in that an exemplary embodiment of the present invention utilizes a halftoning algorithm on the Cartesian product of pixel spaces, i.e., halftoning is done on n-tuples of pixels. In this manner, an exemplary embodiment of the present invention overcomes the shortcomings of the conventional methods described above.
  • A description of one exemplary embodiment of the present invention follows. Given grayscale images A1′, A2′, and A3′, two halftone images A1 and A2 may be constructed. The image A3′ represents a watermark which is to be embedded into the two images A1′, A2′.
  • From the two images A1 and A2, a watermark image A3 can be extracted by the operation A3(i, j)=A1(i, j) ∘ A2(i, j). The binary operator ∘ is OR in the case of overlaying images, but with the present invention can be any (not necessarily symmetric) binary operator, such as Exclusive-OR, AND, etc., without limitation. A binary operation or operator means that the operation has two input arguments. Therefore, an exemplary embodiment of the present invention is not limited to any specific type of binary operation.
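A minimal sketch of this extraction step (the function and array names are hypothetical): the watermark is recovered by applying the chosen binary operator to each pair of pixels from the two halftone images.

```python
import numpy as np

def extract_watermark(A1, A2, op=np.bitwise_or):
    """Extract A3(i, j) = A1(i, j) o A2(i, j) for a pixelwise binary operator o."""
    return op(A1, A2)

A1 = np.array([[0, 1], [1, 0]], dtype=np.uint8)
A2 = np.array([[1, 1], [0, 0]], dtype=np.uint8)

# OR corresponds to physically overlaying the two halftones.
assert (extract_watermark(A1, A2) == np.array([[1, 1], [1, 0]])).all()
# Any other binary operator, e.g. Exclusive-OR, may be substituted.
assert (extract_watermark(A1, A2, np.bitwise_xor) == np.array([[1, 0], [1, 0]])).all()
```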
  • An aspect of an exemplary embodiment of the present invention is to construct A1 and A2 such that they resemble A1′ and A2′ respectively, and such that A3 resembles A3′.
  • An exemplary embodiment of the present invention constructs images A1 and A2 via digital half-toning on a Cartesian product of color spaces.
  • The input to an exemplary digital half-toning method in accordance with the present invention are pixels of an image U=A1′×A2′×A3′ that is a composite of the three images A1′, A2′ and A3′. The pixels of U are given by U(i, j)=(A1′ (i, j), A2′ (i, j), A3′ (i, j)). In this exemplary embodiment the output halftone images A1 and A2 are binary, and, thus, the possible values for each pixel pair (A1 (i, j), A2 (i, j)) are (0, 0), (1, 0), (0, 1), and (1, 1).
  • Thus, since A3(i, j)=A1(i, j) ∘ A2(i, j), the possible vectors in the space of triples of pixels for A1×A2×A3 are the following: (0, 0, 0∘0), (0, 1, 0∘1), (1, 0, 1∘0), and (1, 1, 1∘1), which will be called “output vectors.”
  • An exemplary digital half-toning algorithm may operate in a space of triples of pixels to halftone the images (A1′×A2′×A3′) using these 4 output vectors, and the resulting output halftone image (A1×A2×A3) is such that A1≈A1′, A2≈A2′ and A3≈A3′, where A1≈A1′ indicates that A1 looks like A1′ when viewed at a distance. The desired output images are A1 and A2 which, by construction, combine via an exemplary embodiment of the inventive watermark extraction operation to produce A3.
  • There are several choices for a digital halftoning method that may be used in conjunction with an exemplary embodiment of the present invention. The selection of the digital halftoning method may be determined by the desired tradeoff between processing speed and the quality of the halftone generated.
  • For example, a vector error diffusion method (VED), as described in H. Haneishi et al, “Color digital halftoning taking colorimetric color reproduction into account”, Journal of Electronic Imaging, vol. 5, pp. 97-106, 1996, is a one-pass method that is fast and produces good halftones. Another example is a modified error diffusion method described in R. Adler et al, “Error bounds for error diffusion and other mathematical problems arising in digital halftoning,” Proc. of SPIE, vol. 3963, pp. 437-443, 2000 and in R. Adler et al, “Error bounds for error diffusion and related digital halftoning algorithms,” Proc. IEEE Int. Symp. Circ. Syst., vol. 11, pp. 513-516, 2001, which alleviates some of the problems that may be due to large errors in the vector error diffusion method.
  • These one-pass methods analyze and process each n-tuple of pixels once in a specific order. This ordering of pixels to be processed, and the causality of the error diffusion filter (such as the Jarvis filter or the Stucki filter) may cause anisotropic artifacts in the halftone image. For more information on error diffusion, see the books “Digital Halftoning” by R. Ulichney, MIT Press, 1987 and “Digital Color Halftoning” by H. Kang, SPIE Press monograph vol. PM68, 1999.
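For comparison, one-pass vector error diffusion on triples of pixels can be sketched as follows. This is a simplified illustration only, not the cited Haneishi et al. method: Floyd-Steinberg error weights, the OR output vectors, and nearest-vector quantization are all assumptions.

```python
import numpy as np

def vector_error_diffusion(U):
    """One-pass VED on an image U of pixel triples (shape rows x cols x 3)."""
    P = np.array([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)], dtype=float)
    acc = U.astype(float).copy()     # accumulates diffused error
    out = np.zeros_like(acc)
    rows, cols, _ = U.shape
    # Floyd-Steinberg taps: right, below-left, below, below-right.
    taps = [((0, 1), 7 / 16), ((1, -1), 3 / 16), ((1, 0), 5 / 16), ((1, 1), 1 / 16)]
    for i in range(rows):
        for j in range(cols):
            # Quantize to the nearest output vector in the triple space.
            q = P[np.argmin(((P - acc[i, j]) ** 2).sum(axis=1))]
            out[i, j] = q
            err = acc[i, j] - q
            for (di, dj), w in taps:          # diffuse the vector error forward
                if 0 <= i + di < rows and 0 <= j + dj < cols:
                    acc[i + di, j + dj] += w * err
    return out

U = np.full((8, 8, 3), [0.625, 0.625, 0.875])   # uniform triples (values arbitrary)
out = vector_error_diffusion(U)
A1, A2, A3 = out[..., 0], out[..., 1], out[..., 2]
assert (A3 == np.logical_or(A1, A2)).all()      # every pixel is an OR output vector
```

Note the forward-only (causal) diffusion: this is exactly the ordering dependence that can cause the anisotropic artifacts mentioned above.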
  • Further, in some digital watermarking or data hiding applications there could be constraints on the relationships between pixels in an image that are far apart and, for these applications, error diffusion may not be appropriate. An example of such a watermarking application is described below.
  • Therefore, an exemplary embodiment of the present invention uses an iterative isotropic halftoning method in order to embed the images. This halftoning method is described in C. W. Wu, “Multimedia Data Hiding and Authentication via Halftoning and Coordinate Projection”, EURASIP Journal on Applied Signal Processing, vol. 2002, no. 2, pp. 143-151, 2002, where it was used to embed a single image inside another image. In contrast, the present invention embeds one or more images into two or more images.
  • An exemplary pseudo code for this halftoning method follows:
  • for each iteration /* Loop through iterations */
     for each i /* Loop through each row */
      for each j /* Loop through each column */
       for each output vector o = (o1, o2, o3) ∈ P /* Loop through all possible output vectors */
        replace Outimagek(i, j) with ok for k = 1, 2, 3.
        set Error(o) = Σk=1..3 vk ‖L(Outimagek − Ak′)‖
       endfor
       find the output vector omin ∈ P that minimizes Error, i.e. omin = arg mino∈P Error(o).
       set Outimage(i, j) = omin.
      endfor (j)
     endfor (i)
     if Outimage has not changed between two iterations or the maximum number of iterations has been reached, exit the iterations loop.
    endfor
  • Where:
  • A1′, A2′ and A3′ are input images;
  • P is a set of output vectors;
  • Output: set (A1,A2,A3)=Outimage which resembles (A1′, A2′, A3′).
  • The two halftone images are A1 and A2.
  • vk determines how strongly the error in each image is minimized; and
  • L is a linear space-invariant model of the human vision system.
  • Outimagek denotes the k-th component of Outimage, i.e. if Outimage=(A1, A2, A3), then Outimagek=Ak. The input is the three images (A1′, A2′, A3′) and the output is the two halftone images A1 and A2. The constant vk determines how strongly the error in each corresponding image is minimized.
  • In an exemplary embodiment of the present invention, for each iteration, the algorithm loops through each pixel of A1′×A2′×A3′ and selects the output vector from P that when put in the corresponding position in Outimage minimizes the error measure “Error”. The pixel in Outimage at that position is then set as this output vector.
  • The error measure “Error” may be calculated as follows: for each of the images, the norm of the low pass filtered version of the difference between Ak′ and Outimagek is calculated, and the weighted sum of these norms forms the error measure. Thus, minimizing this error measure allows the low pass filtered version of the components of Outimage to resemble each Ak′. The low pass filter (expressed as the linear operator L) may be taken from an HVS filter. Some examples of such filters can be found in R. Näsänen, “Visibility of halftone dot textures”, IEEE Trans. Syst. Man, Cybernetics, vol. 14, no. 6, pp. 920-924, 1984 and in J. Sullivan et al, “Design of minimum visual modulation halftone patterns,” IEEE Trans. Syst. Man, Cybernetics, vol. 21, no. 1, pp. 33-38, 1991.
  • In an exemplary embodiment of the present invention, the 5 by 5 filter shown below may be used. The filter is given in K. R. Crounse, “Image halftoning with cellular neural networks”, IEEE Trans. Circ. Syst-11, vol. 40, no. 4, pp. 267-283, 1993, normalized to have the sum of the coefficients equal to 1:
  • [ 0.00492 0.01391 0.02100 0.01391 0.00492 0.01391 0.05810 0.09772 0.05810 0.01391 0.02100 0.09772 0.01618 0.09772 0.02100 0.01391 0.05810 0.09772 0.05810 0.01391 0.00492 0.01391 0.02100 0.01391 0.00492 ]
  • In this exemplary embodiment, the filter L may be chosen to be the same for each of the images A1′, A2′ and A3′. In some applications, for example, when the original images and the watermark images are intended to be viewed at different distances, the filter L can be different for each image.
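The filter can be reconstructed and checked against the stated normalization. The sketch below assumes a center coefficient of 0.16180, the value for which the 25 coefficients sum to 1 as the text requires:

```python
import numpy as np

# Crounse (1993) HVS low-pass filter; center value 0.16180 is inferred from
# the stated normalization (sum of coefficients equal to 1).
row0 = [0.00492, 0.01391, 0.02100, 0.01391, 0.00492]
row1 = [0.01391, 0.05810, 0.09772, 0.05810, 0.01391]
row2 = [0.02100, 0.09772, 0.16180, 0.09772, 0.02100]
L = np.array([row0, row1, row2, row1, row0])

assert abs(L.sum() - 1.0) < 1e-3   # normalized as stated
assert np.array_equal(L, L.T)      # symmetric, as expected of an isotropic filter
```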
  • In accordance with an exemplary embodiment of the present invention, to extract the watermark from the two halftone images A1 and A2, the binary operator ∘ may be applied to each pair of pixels (A1(i,j), A2(i,j)), resulting in the extracted watermark image W. In other words, the (i,j)-th pixel of W is given by W(i,j)=A1(i,j) ∘ A2(i,j).
  • Because of the linearity, space-invariance, and a relatively small support of L, the computation of the variable “Error” in the iterative isotropic halftoning method may be sped up by updating the change in the variable “Error” due to changing a single pixel of “Outimage.” In other words, changing a single pixel of “Outimage” only changes a small number (on the order of the size of the support of L) of entries of L(Outimagek−Ak′). This technique has been used in dither mask construction when the output is binary (see e.g. C. W. Wu, G. Thompson and M. Stanich, “A unified framework for digital halftoning and dither mask construction: variations on a theme and implementation issues,” Proc. IS&T's NIP19: Int. Conf. on Digital Printing Tech., pp. 793-796, 2003), but is equally useful in an exemplary embodiment of the present invention.
  • The output image “Outimage” may be initialized with an image whose pixels take random values from the set of output vectors, or with a uniform image of a single output vector. To reduce the number of iterations, “Outimage” may also be initialized by performing vector (or modified) error diffusion and using its output as the initial “Outimage.”
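The pseudo code above can be sketched in outline as follows. This is a simplified illustration only: uniform weights vk=1, the OR operator, a small box filter standing in for the HVS filter L, and initialization to a single output vector are all assumptions.

```python
import numpy as np

def lowpass(img):
    """3x3 box filter (zero padding), a stand-in for the HVS low-pass filter L."""
    padded = np.pad(img, 1)
    h, w = img.shape
    return sum(padded[di:di + h, dj:dj + w]
               for di in range(3) for dj in range(3)) / 9.0

def halftone_triples(A1p, A2p, A3p, max_iters=10):
    """Iterative isotropic halftoning on triples of pixels, OR extraction operator."""
    targets = (A1p, A2p, A3p)
    P = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]   # output vectors, o3 = o1 OR o2
    out = np.zeros(A1p.shape + (3,))                   # Outimage, all pixels (0, 0, 0)
    for _ in range(max_iters):
        changed = False
        for i in range(A1p.shape[0]):
            for j in range(A1p.shape[1]):
                prev = tuple(out[i, j])
                best, best_err = prev, np.inf
                for o in P:
                    out[i, j] = o
                    # Error(o): sum over images of ||L(Outimage_k - A_k')||, v_k = 1.
                    err = sum(np.linalg.norm(lowpass(out[..., k] - targets[k]))
                              for k in range(3))
                    if err < best_err:
                        best, best_err = o, err
                out[i, j] = best
                changed = changed or (tuple(out[i, j]) != prev)
        if not changed:                                # converged between iterations
            break
    return out[..., 0], out[..., 1], out[..., 2]

# Uniform mid-gray targets (values as produced by the universal gamut mapping).
A1p = np.full((8, 8), 0.625)
A2p = np.full((8, 8), 0.625)
A3p = np.full((8, 8), 0.875)
A1, A2, A3 = halftone_triples(A1p, A2p, A3p)
assert set(A3.ravel()) <= {0.0, 1.0}          # binary halftones
assert (A3 == np.logical_or(A1, A2)).all()    # watermark relation holds by construction
```

Because every pixel is revisited on every pass, this sketch has no preferred processing direction, which is the isotropy the text contrasts with causal error diffusion.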
  • Gamut Mapping
  • For the halftone image to resemble the original image, the error in the digital halftoning algorithm should be small, or at least bounded, for arbitrarily large images. In order for the error in the error diffusion halftoning method to be bounded, it was shown in the references by Adler et al. cited above that the pixels of the input image (A1′×A2′×A3′) should be within a convex hull of the output vectors. This analysis is also applicable to general halftoning algorithms: the low pass behavior of the human vision system can be viewed as an averaging behavior, so the original image is approximated by a convex sum of output pixels. In order to satisfy this convex hull condition, an exemplary embodiment of this invention uses scaling or gamut mapping of the images.
  • In an exemplary embodiment of the present invention, the gamut mapping is as follows: Let S be the set of 3-tuples of pixels in (A1′×A2′×A3′), i.e. S={(A1′ (i, j), A2′ (i, j), A3′ (i, j))}. Furthermore, let S1, S2, and S3 be the sets of pixels of images A1′, A2′, and A3′ respectively, i.e. S1={A1′ (i, j)}, S2={A2′ (i, j)}, and S3={A3′ (i, j)}. For simplicity, consider only a gamut mapping M that maps a pixel p into a pixel M(p) of the following form:

  • For p=(p1, p2, p3) ∈ S,  (1)

  • M(p)=(s1p1+d1, s2p2+d2, s3p3+d3)
  • where:
  • si are real numbers denoting scaling factors; and
  • di are offset vectors in the color space.
  • Let H be the (closed) convex hull of the output vectors. Then H can be expressed by a set of linear inequalities: H={x: Ax≦b}. A commonly used algorithm for finding the convex hull of a set of points is the Qhull algorithm described in C. B. Barber et al., “The quickhull algorithm for convex hulls,” ACM Trans. Math. Software, vol. 22, no. 4, pp. 469-483, 1996. An optimization problem used to find the (optimal) gamut mapping may be formulated as follows:
  • max over si, di of min(s1^α1, s2^α2, s3^α3), under the constraint that M(S) ⊆ H  (2)
  • The set of parameters {si, di} which solves the above optimization problem will be used as the gamut mapping M, i.e. every pixel of the input composite image A1′×A2′×A3′ is scaled by M before they are halftoned by the exemplary halftoning algorithm in accordance with the present invention.
  • Other forms of the objective function may be used for other exemplary embodiments of the present invention. For example:
  • max over si, di of s1·s2·s3, under the constraint that M(S) ⊆ H  (3)
  • The coefficients αi in Equation (2) determine, relatively, the “penalty” of scaling each image. The smaller each αi is, the more the corresponding image will be scaled. For instance, in a watermarking application, the two images A1 and A2 should retain most of the fidelity of the original images, while in contrast the watermark image, which in many applications is assumed to be a less complex image, may accept more distortion due to scaling. Thus, in this case, α3 may be smaller than α1 and α2.
  • It is clear that the constraints in Eq. (2) are linear, i.e. the inequality constraint M(S) ⊆ H is linear in the variables si, di. This is true since M(S) ⊆ H for each pixel p=(p1, p2, p3) ∈ S can be written as:
  • Σi=1..3 Λi(si pi + di) ≦ b  (4)
  • Where:
  • A=[Λ1 Λ2 Λ3] is decomposed as a concatenation of the matrices Λ1, Λ2 and Λ3, and A, b are the matrices describing the convex hull H={x: Ax≦b}.
  • If the number of pixels in the images is large, the constraint M(S) ⊆ H can be time consuming to compute. One way to speed up the computation is to not use all the pixels to compute the gamut, i.e. to replace S with a subset S″ ⊆ S. This results in a larger feasible region, as the set of parameters (si, di) satisfying the constraint M(S″) ⊆ H (i.e. the set {(si, di): M(S″) ⊆ H}) is larger than the set of parameters satisfying M(S) ⊆ H. This means that the convex hull condition may not be strictly satisfied. However, if S″ is close to S, then the gamut mapping obtained using S″ is close to the gamut mapping obtained using S. In other words, the convex hull condition violation is mild. In experiments by the inventors, choosing S″ to be 5 percent of the pixels of S still gave good results.
  • Another way to speed up the computation is the following simplification, which produces a smaller feasible region for the optimization problem in Eq. (2) and is easier to compute than M(S) ⊆ H. However, since the feasible region is smaller, the gamut mapping is also less optimal.
  • Let Vi be the set of extreme points of the convex hull of Si. This means that the convex hull of Vi is equal to the convex hull of Si. Next, replace the constraint M(S) ⊆ H with M(S′) ⊆ H, where S′={(p1, p2, p3): pi ∈ Vi}. The convex hull of S′ is then larger than the convex hull of S, i.e. the feasible region {(si,di): M(S′) ⊆ H} is smaller than {(si,di): M(S) ⊆ H}.
  • The gamut mapping may be computed using the pixel values of the specific images A1′, A2′ and A3′. Such a gamut mapping is, in general, not appropriate for another set of images. By replacing S1, S2, and S3 with the extreme points (which, by abuse of notation, are denoted herein as V1, V2, and V3, respectively) of the possible gamut (i.e. the possible range of the pixel values) of the images A1′, A2′ and A3′, respectively, a gamut mapping obtained using the resulting constraint M(S′) ⊆ H can be used for any possible images A1′, A2′ and A3′.
  • For instance, suppose that the pixels in the image A2′ can take on values between 0 and 1. Then by replacing S2 with the set {0, 1} (which is the set of extreme points of the interval [0, 1]) and solving the optimization problem using the constraint M(S′) ⊆ H, where S′={(A1′(i, j), p, A3′(i, j)): p ∈ {0, 1}}, the resulting gamut mapping can be used with images A1′ and A3′ and any image A2′ whose pixels lie in the interval [0, 1]. If all three Si's are replaced with the set {0, 1}, and the OR operation is used for extracting the watermark, i.e. the output vectors are (0, 0, 0), (0, 1, 1), (1, 0, 1), and (1, 1, 1), then the following gamut mapping is obtained: s1=s2=s3=0.25, d1=d2=0.5, and d3=0.75, which can be used for all grayscale images A1′, A2′ and A3′. Thus, by restricting the feasible region this way, sub-optimal gamut mappings that cause more distortion to the original images are obtained, but they are applicable to a larger set of images.
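The universal mapping quoted above (s1=s2=s3=0.25, d1=d2=0.5, d3=0.75) can be checked against the convex hull condition. The sketch below (not from the patent) verifies that every mapped grayscale triple is a convex combination of the four OR output vectors; since those vectors form a tetrahedron, a direct barycentric test suffices:

```python
import numpy as np
from itertools import product

# The four OR output vectors (o1, o2, o1 OR o2).
V = np.array([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)], dtype=float)

def in_hull(p, tol=1e-9):
    """Barycentric membership test; the four vectors are affinely independent."""
    M = (V[1:] - V[0]).T                    # 3x3 and nonsingular here
    lam = np.linalg.solve(M, np.asarray(p, dtype=float) - V[0])
    lam0 = 1.0 - lam.sum()                  # weight on V[0]
    return lam.min() >= -tol and lam0 >= -tol

# Universal mapping from the text: s1 = s2 = s3 = 0.25, d1 = d2 = 0.5, d3 = 0.75.
s = 0.25
d = np.array([0.5, 0.5, 0.75])

# Checking the 8 corners of [0,1]^3 suffices: by convexity, the condition then
# holds for every grayscale triple (p1, p2, p3).
for corner in product([0.0, 1.0], repeat=3):
    assert in_hull(s * np.array(corner) + d)

# By contrast, an unmapped point far from the surface o3 = o1 OR o2 is outside:
assert not in_hull([1.0, 0.0, 0.0])
```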
  • In accordance with an exemplary embodiment of the invention, the error calculation in the digital halftoning algorithm can also be weighted according to the desired fidelity. For instance, if a low fidelity watermark is adequate, the error calculation can be skewed to put more emphasis on reducing the error in the two main images. This is reflected in the parameters vi in the exemplary pseudo code shown above. In this case, v3 would be smaller than v1 and v2.
  • In an exemplary embodiment of the present invention, the condition that the input pixels lie in the convex hull of the output vectors may be violated mildly without much loss in the quality of the halftone images. This allows the gamut mapping to distort the image less.
  • EXAMPLES
  • FIGS. 1A, 1B, and 1C illustrate the results of an exemplary embedding process in accordance with the present invention of embedding an image entitled “Lena” shown in FIG. 1C into two other images (i.e. those shown in FIGS. 1A and 1B). The first image shown in FIG. 1A is entitled “Baboon” and the second image shown in FIG. 1B is entitled “Boat.” The gamut mapping was obtained using α1=1, α2=1, and α3=0.5. FIG. 1C is the extracted watermark image obtained by performing the OR operation on the pixels of the images of FIGS. 1A and 1B.
  • FIGS. 2A, 2B, and 2C illustrate the results of an embedding process in an exemplary embodiment in accordance with the present invention similar to that of FIGS. 1A, 1B, and 1C, except that the “Lena” image shown in FIG. 2C is embedded into two images that are uniform gray images (FIGS. 2A and 2B). The gamut mapping here was computed using α1=α2=α3=1.
  • FIGS. 3A, 3B, and 3C illustrate the results of yet another exemplary embedding of the “Lena” image of FIG. 3C into the “Baboon” image of FIG. 3A and the “Baboon” image of FIG. 3B, where the binary operation is the Exclusive-OR operation in accordance with the present invention. Note that the two “Baboon” images (FIGS. 3A and 3B) appear to be the same, but the halftone dots are arranged differently so that when an Exclusive-OR is applied, the watermark image (“Lena”) of FIG. 3C emerges. The gamut mapping here was computed using α1=α2=α3=1.
  • Note that the extracted watermark images in these examples are of high fidelity. This is in contrast with the conventional methods, where a residual of the two halftone images is evident in the extracted watermark image.
  • In yet another exemplary embodiment of the present invention, as shown in FIGS. 4A-4C, two images may be generated from reoriented versions of the same image. For example, the same image may be rotated 180 degrees to create a second image. These two images then form the images A1′ and A2′. The watermark image A3′ is then embedded into these images to create the halftone images A1, A2, where A2 is A1 rotated 180 degrees. The idea is that the watermark can be extracted from a single image A1 and its reoriented version A2. In other words, the image A1 contains all the information needed to extract the watermark. Note that for this exemplary embodiment, there is an acausal relationship between the pixels of A1 and A2 and, thus, error diffusion may not be appropriate in this setting.
  • For this exemplary embodiment, the pseudo code described above may be modified as follows: for an R-pixel by C-pixel image, when computing “Error(o)”, in addition to replacing Outimagek(i, j) with ok, Outimage1(R−i+1,C−j+1) may be replaced with o2, Outimage2(R−i+1,C−j+1) may be replaced with o1, and Outimage3(R−i+1,C−j+1) may be replaced with o2∘o1. Note that the (R−i+1,C−j+1)-th pixel is the (i,j)-th pixel rotated 180 degrees. Then, when omin=(o1′,o2′,o3′) is found, set Outimage(i,j)=omin and set Outimage(R−i+1,C−j+1)=(o2′,o1′,o2′∘o1′). Assuming that the binary operation ∘ is symmetric, the watermark image A3′ must be invariant under 180-degree rotation. The gamut mapping may be chosen using α1=α2 with the additional constraints that d1=d2 and s1=s2. Again, an example of this is shown in FIGS. 4A-4C, where FIG. 4A is the halftone image “Lena” and FIG. 4C is the watermark that is revealed when the OR operation is applied to FIG. 4A and a copy of itself that is rotated 180 degrees (FIG. 4B). The gamut mapping is obtained using α1=α2=1, α3=0.7.
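  • The 180-degree pairing rule above can be sketched in code. This is a simplified illustration, not the full halftoning loop: the random (o1, o2) choices below stand in for the error-minimizing search, and OR is used as the symmetric binary operation ∘.

```python
import numpy as np

rng = np.random.default_rng(1)
R, C = 6, 6
A1 = np.zeros((R, C), dtype=np.uint8)

# Visit each 180-degree pixel pair once: in 0-indexed coordinates,
# (i, j) pairs with (R-1-i, C-1-j), which corresponds to the
# (R-i+1, C-j+1)-th pixel in the 1-indexed notation above.
seen = np.zeros((R, C), dtype=bool)
for i in range(R):
    for j in range(C):
        if seen[i, j]:
            continue
        ri, rj = R - 1 - i, C - 1 - j
        o1, o2 = rng.integers(0, 2, size=2)  # stand-in for the error search
        A1[i, j], A1[ri, rj] = o1, o2
        seen[i, j] = seen[ri, rj] = True

# A single image A1 carries all the information: the watermark is
# extracted by OR-ing A1 with its own 180-degree rotation.
W = A1 | np.rot90(A1, 2)
```

Because OR is symmetric, W is invariant under 180-degree rotation, matching the requirement placed on the watermark image A3′.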
  • In general, an exemplary embodiment of the present invention may be used for more complicated applications of image watermarking, data hiding, and viewing images under different viewing conditions. For an image A that is represented as a matrix of pixels, the (k, l)-th pixel of A may be denoted as A(k, l). Each pixel may be a vector in a color space T (which may vary from image to image), i.e. A(k, l) ∈ T. Given n images A1, . . . , An, an exemplary watermark extraction transform Φ=(φ1, . . . , φm) constructs m images as follows: Φ(A1, . . . , An)=(B1, . . . , Bm) where Bj=φj(A1, . . . , An), i.e. the Bj are images created from the set of images A1, . . . , An. The transform φj operates pixel-wise on the images A1, . . . , An, i.e. Bj(k, l)=φj(A1(k, l), . . . , An(k, l)).
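  • As a minimal sketch of this pixel-wise transform (the function names and the choice of OR below are illustrative, not taken from the patent text):

```python
import numpy as np

def extract(phis, images):
    """Apply a pixel-wise extraction transform Phi = (phi_1, ..., phi_m)
    to n equally sized images, returning the m extracted images B_j."""
    stacked = np.stack(images)          # shape (n, rows, cols)
    return [phi(stacked) for phi in phis]

# n = 2, m = 1 with phi_1 = pixel-wise OR, the overlay operation used
# in the earlier examples.
A1 = np.array([[0, 1], [1, 0]], dtype=np.uint8)
A2 = np.array([[0, 0], [1, 1]], dtype=np.uint8)
(B1,) = extract([lambda s: s[0] | s[1]], [A1, A2])
# B1 is the extracted watermark image [[0, 1], [1, 1]]
```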
  • Given a set of n+m images A′1, . . . , A′n, and B′1, . . . , B′m, an exemplary embodiment of the present invention may create n images (A1, . . . , An) that resemble A′1, . . . , A′n such that the m images B1, . . . , Bm that are extracted from (A1, . . . , An) using the transform Φ resemble B′1, . . . , B′m.
  • This exemplary embodiment uses a digital halftoning algorithm. Digital halftoning can be described in the following general form: given an image I, a digital halftoning process creates a halftone image I′ such that each pixel of I′ is a member of a restricted set of output pixels O and, furthermore, I′ resembles I under some metric d, i.e. d(I, I′) is small.
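  • As one familiar concrete instance of this general form (classic Floyd-Steinberg error diffusion, shown here only to illustrate the definition, not the iterative algorithm of this embodiment), with output pixel set O = {0, 1}:

```python
import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image in [0, 1] via classic
    Floyd-Steinberg error diffusion. The diffusion of quantization
    error to unprocessed neighbors is what keeps d(I, I') small under
    a low-pass (human-visual-system-like) metric."""
    I = img.astype(float).copy()
    rows, cols = I.shape
    out = np.zeros_like(I)
    for i in range(rows):
        for j in range(cols):
            out[i, j] = 1.0 if I[i, j] >= 0.5 else 0.0
            err = I[i, j] - out[i, j]
            if j + 1 < cols:
                I[i, j + 1] += err * 7 / 16
            if i + 1 < rows and j > 0:
                I[i + 1, j - 1] += err * 3 / 16
            if i + 1 < rows:
                I[i + 1, j] += err * 5 / 16
            if i + 1 < rows and j + 1 < cols:
                I[i + 1, j + 1] += err * 1 / 16
    return out

dots = floyd_steinberg(np.full((16, 16), 0.5))
```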
  • This exemplary embodiment of the present invention specifies the set of output pixels in order to use a digital halftoning algorithm. The pixels of Ai are chosen from the set Oi. Let R ⊆ O1× . . . ×On={(p1, . . . , pn): pi ∈ Oi} be the possible output vectors for the combined image A1× . . . ×An. The subset R can be a strictly smaller subset of O1× . . . ×On in order to express additional relationships between pixels in different images. From R, the set of extended output vectors P={(p, φ1(p), . . . , φm(p)): p ∈ R} is formed, and (A′1× . . . ×A′n×B′1× . . . ×B′m) may be halftoned using P as the set of possible output vectors.
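  • For the binary OR case (n = 2, m = 1), the set of extended output vectors P can be enumerated directly; this small sketch assumes R is the full product O1×O2:

```python
from itertools import product

# Binary case: O_i = {0, 1} for each of n = 2 halftone images, with one
# extraction function phi_1 = OR (m = 1). R is taken to be the full
# product O_1 x O_2 here; a strict subset could encode extra
# relationships between pixels of the two images.
O = [(0, 1), (0, 1)]
R = list(product(*O))
phi = [lambda p: p[0] | p[1]]
P = [p + tuple(f(p) for f in phi) for p in R]
# P == [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
```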
  • An exemplary pseudocode of the iterative isotropic halftoning algorithm for this general case is as follows:
  • for each iteration /* Loop through iterations */
    for each i /* Loop through each row */
    for each j /* Loop through each column */
    for each output vector o = (o1, . . . , on+m) ∈ P
    /* Loop through all possible output vectors */
    replace Outimagek(i, j) with ok for k = 1, . . . , n+m.
    set
    Error(o) = Σk=1..n νkA L(Outimagek − A′k) + Σk=1..m νkB L(Outimagen+k − B′k)
    endfor
    find the output vector omin ∈ P that minimizes Error, i.e.
    omin = arg min_{o∈P} Error(o).
    set Outimage(i, j) = omin.
    endfor (j)
    endfor (i)
  • if Outimage has not changed between two iterations or maximum number of iterations reached, exit iterations loop.
  • endfor
  • Where:
  • A1′, . . . , An′, B1′, . . . , Bm′ are the input images;
  • P is a set of output vectors;
  • Output: set (A1, . . . , An, B1, . . . , Bm)=Outimage which resembles (A1′, . . . , An′, B1′, . . . , Bm′). The output halftone images are A1, . . . , An; and
  • νkA and νkB determine how strongly the error in each image is minimized.
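  • A simplified, runnable transcription of this pseudo code is sketched below. The metric L is approximated here by the energy of a box-blurred error (the actual L would be a human-visual-system low-pass model), the example uses the binary OR case with already-gamut-mapped gray planes, and the function names are illustrative assumptions.

```python
import numpy as np

def lowpass_err(diff):
    # Stand-in for the metric L: energy of a 3x3 box-blurred error.
    pad = np.pad(diff, 1, mode="edge")
    r, c = diff.shape
    blur = sum(pad[a:a + r, b:b + c] for a in range(3) for b in range(3)) / 9.0
    return float(np.sum(blur * blur))

def iterative_halftone(inputs, P, v, max_iters=5):
    """Direct transcription of the pseudo code: at each pixel, try every
    extended output vector o in P and keep the one minimizing the
    weighted sum of per-plane errors; stop when unchanged."""
    rows, cols = inputs[0].shape
    out = [np.zeros((rows, cols)) for _ in inputs]
    for _ in range(max_iters):
        before = [o.copy() for o in out]
        for i in range(rows):
            for j in range(cols):
                best, best_err = P[0], np.inf
                for o in P:
                    for k, ok in enumerate(o):
                        out[k][i, j] = ok
                    err = sum(v[k] * lowpass_err(out[k] - inputs[k])
                              for k in range(len(inputs)))
                    if err < best_err:
                        best, best_err = o, err
                for k, ok in enumerate(best):
                    out[k][i, j] = ok
        if all(np.array_equal(a, b) for a, b in zip(before, out)):
            break
    return out

# n = 2, m = 1 binary OR case: embed a lighter "watermark" plane into
# two mid-gray carrier planes (values chosen inside the hull of P).
P = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
planes = [np.full((4, 4), 0.5), np.full((4, 4), 0.5), np.full((4, 4), 0.75)]
A1, A2, B1 = iterative_halftone(planes, P, v=[1.0, 1.0, 1.0])
```

The innermost search recomputes the full error for every candidate for clarity; a practical implementation would update the error incrementally.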
  • To satisfy the convex hull condition, the pixels of (A′1× . . . ×A′n×B′1× . . . ×B′m) are first scaled via the gamut mapping M. The gamut mapping M is calculated by solving the optimization problem
  • max_{si,di} min_i si^αi   (5)
  • under the constraint that (s1p1+d1, . . . , sn+mpn+m+dn+m) ∈ H for all pixels p=(p1, . . . , pn+m) of (A′1× . . . ×A′n×B′1× . . . ×B′m), where H is the convex hull of the output vectors in P. The gamut mapping is then given by: M:(p1, . . . , pn+m)→(s1p1+d1, . . . , sn+mpn+m+dn+m)
  • In another exemplary embodiment, the objective function in Equation (5) may be replaced by:
  • max_{si,di} Σ_i si   (6)
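  • For the binary OR case, the optimization in Equation (5) can be approximated by a coarse grid search. The facet inequalities describing the convex hull H of P = {(0,0,0), (0,1,1), (1,0,1), (1,1,1)} below were worked out by hand for this sketch and are an assumption, as is the symmetry s1=s2, d1=d2:

```python
import numpy as np
from itertools import product

def in_hull(x1, x2, x3):
    # Hand-derived facets of the hull of the OR-case output vectors:
    # max(x1, x2) <= x3 <= min(x1 + x2, 1).
    eps = 1e-9
    return (x3 >= x1 - eps and x3 >= x2 - eps
            and x3 <= x1 + x2 + eps and x3 <= 1 + eps)

def feasible(s12, d12, s3, d3):
    # M is affine, so checking the 8 corners of the unit input box
    # suffices for the "all pixels" constraint.
    return all(in_hull(s12 * p1 + d12, s12 * p2 + d12, s3 * p3 + d3)
               for p1, p2, p3 in product((0, 1), repeat=3))

alpha = (1.0, 1.0, 0.5)          # the weighting used for FIGS. 1A-1C
grid = np.linspace(0.0, 1.0, 11)
best, best_val = None, -1.0
for s12, d12, s3, d3 in product(grid, repeat=4):
    if s12 == 0 or s3 == 0 or not feasible(s12, d12, s3, d3):
        continue
    val = min(s12 ** alpha[0], s12 ** alpha[1], s3 ** alpha[2])
    if val > best_val:
        best, best_val = (s12, d12, s3, d3), val
```

A production implementation would solve this with a proper optimizer rather than a grid, but the search illustrates the structure of the problem: maximize the worst weighted scale subject to all scaled pixels landing in H.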
  • With the above in mind, an exemplary method of the invention may be generalized with reference to the flowchart shown in FIG. 6. The flowchart starts at step 600 and continues to step 602 where the control routine inputs images. For example, the control routine may input images A1′, . . . , An′, and B1′, . . . , Bm′ at step 602.
  • The control routine may then continue to step 604, where gamut mapping, as described above, is applied to the input images. The control routine may then continue to step 606 where the digital halftoning method described above is applied to the gamut mapped images. Then, at step 608, the control routine ends and may, optionally, return to the process that called the control routine of FIG. 6.
  • In another exemplary embodiment of the present invention, different combinations of images produce different watermarks. For example, for the case (n=3, m=2), given three images A1, A2, and A3, combining A1 and A2 may output one watermark B1, and combining A1 and A3 may output another watermark B2.
  • An example of the results of using this exemplary embodiment is shown in FIGS. 5A-5E, where the gamut mapping was computed using α1=α2=α3=1 and α4=α5=1/2. FIGS. 5A, 5B, and 5C show halftone images of a “Baboon,” a “Boat,” and a “Boat on a lake,” respectively. The “Peppers” image shown in FIG. 5D is obtained by overlaying the images of FIG. 5A and FIG. 5B, and the “Lena” image shown in FIG. 5E is obtained by overlaying the images of FIG. 5A and FIG. 5C.
  • Multibit Images, Color Images and Other Extensions
  • Yet another exemplary embodiment of the present invention may also provide a multi-bit output. In this case, gray levels of two images are simply added (with clamping) to produce the gray level of an overlaid image. When each component of the output vector is from a q-bit quantizer (e.g. the binary case is when q=1), the number of output vectors is large when q is large. In this case, the innermost loop of the exemplary pseudo code described earlier that searches for the output vector that minimizes “Error” may take many computations. Since omin is the output vector that gives the minimal value of “Error” among all choices of the output vectors at location (i, j), the computation of
  • omin = arg min_{o∈P} Error(o)
  • is a discrete optimization problem. For large q this may be relaxed to a continuous optimization problem and solved using nonlinear programming and the result may then be quantized by a q-bit quantizer.
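  • The quantize-after-relaxation idea can be illustrated in a toy, separable setting in which the continuous optimum is known in closed form (here it is simply the per-pixel target value itself); in the real problem the relaxation would be solved by nonlinear programming over the hull of output vectors:

```python
import numpy as np

def quantize(x, q):
    """Uniform q-bit quantizer on [0, 1] with 2**q levels."""
    levels = 2 ** q - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

q = 4
target = np.array([0.3141, 0.2718])

# Relaxation: solve the continuous problem (trivially, the target
# itself in this separable toy case), then quantize the solution.
relaxed = quantize(target, q)

# Brute force: exhaustive discrete search over all 2**q levels per
# component, as the innermost loop of the pseudo code would do.
grid = np.linspace(0, 1, 2 ** q)
brute = np.array([grid[np.argmin(np.abs(grid - t))] for t in target])
```

In this separable case the two agree exactly; for large q the relaxed path avoids the exponential blow-up of the exhaustive search.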
  • This exemplary embodiment may also be used for color images. In this case each pixel in a color image lies in a multi-dimensional color space such as CIELab, RGB, CMYK, or the like, and digital halftoning may be performed in the Cartesian product of these color spaces. Instead of processing in the Cartesian product of multi-dimensional color spaces, the images may also be decomposed into their color planes and each plane may be processed independently as a grayscale image and the results may then be combined afterwards. This alternative approach appears to work well in the RGB color space.
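  • The per-plane alternative can be sketched as follows; the 2x2 Bayer dither is only an illustrative stand-in for whichever grayscale halftoner is applied to each color plane:

```python
import numpy as np

def bayer_halftone(plane):
    """Binary halftone of one grayscale plane via a 2x2 Bayer
    ordered-dither threshold matrix."""
    t = (np.array([[0, 2], [3, 1]]) + 0.5) / 4.0
    r, c = plane.shape
    tiled = np.tile(t, (r // 2 + 1, c // 2 + 1))[:r, :c]
    return (plane >= tiled).astype(np.uint8)

# Decompose an RGB image into its color planes, halftone each plane
# independently as a grayscale image, then recombine afterwards --
# the alternative to halftoning in the Cartesian product of
# multi-dimensional color spaces.
rgb = np.random.default_rng(0).random((8, 8, 3))
halftoned = np.stack([bayer_halftone(rgb[..., k]) for k in range(3)],
                     axis=-1)
```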
  • Similarly, since the human aural system exhibits a low pass behavior, similar to the human visual system, an exemplary embodiment of the present invention may be used to watermark audio data and/or signals.
  • FIG. 7 illustrates a typical hardware configuration of an information handling/computer system for use with the invention, which preferably has at least one processor or central processing unit (CPU) 711.
  • The CPUs 711 are interconnected via a system bus 712 to a random access memory (RAM) 714, read-only memory (ROM) 716, input/output (I/O) adapter 718 (for connecting peripheral devices such as disk units 721 and tape drives 740 to the bus 712), user interface adapter 722 (for connecting a keyboard 724, mouse 726, speaker 728, microphone 732, and/or other user interface device to the bus 712), a communication adapter 734 for connecting an information handling system to a data processing network, the Internet, an Intranet, a personal area network (PAN), etc., and a display adapter 736 for connecting the bus 712 to a display device 738 and/or printer 740.
  • In addition to the hardware/software environment described above, a different aspect of the invention includes a computer-implemented method for performing the above method. As an example, this method may be implemented in the particular environment discussed above.
  • Such a method may be implemented, for example, by operating a computer, as embodied by a digital data processing apparatus, to execute a sequence of machine-readable instructions. These instructions may reside in various types of signal-bearing media.
  • This signal-bearing media may include, for example, a RAM contained within the CPU 711, as represented by the fast-access storage for example. Alternatively, the instructions may be contained in another signal-bearing media, such as a magnetic data storage diskette 800 (FIG. 8), directly or indirectly accessible by the CPU 711.
  • Whether contained in the diskette 800, the computer/CPU 711, or elsewhere, the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional “hard drive” or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), an optical storage device (e.g. CD-ROM, WORM, DVD, digital optical tape, etc.), paper “punch” cards, or other suitable signal-bearing media, including transmission media such as digital and analog communication links and wireless links. In an illustrative embodiment of the invention, the machine-readable instructions may comprise software object code, compiled from a language such as “C”, etc.
  • FIG. 9 illustrates yet another exemplary embodiment of a system 900 for embedding an image into two other images in accordance with the present invention. The system 900 includes an image input device 902, a gamut mapping device 904 and a digital halftoning device 906. The image input device 902 inputs the image to be embedded as well as the two other images in which the one image will be embedded. The gamut mapping device 904 receives the images input by the image input device 902 and performs the gamut mapping process described above on the images. The digital halftoning device 906 performs a digital halftoning process on a Cartesian product of color spaces to embed the image into the two images.
  • While the invention has been described in terms of several exemplary embodiments, those skilled in the art will recognize that the invention can be practiced with modification.
  • Further, it is noted that, Applicants' intent is to encompass equivalents of all claim elements, even if amended later during prosecution.

Claims (20)

1. A method of embedding an image into two images, comprising:
performing a digital halftoning process on a Cartesian product of color spaces to embed the image into the two images.
2. The method of claim 1, wherein the digital halftoning process comprises a vector error diffusion method.
3. The method of claim 1, wherein the digital halftoning process comprises a modified error diffusion method.
4. The method of claim 1, wherein the digital halftoning process comprises an iterative isotropic halftoning process.
5. The method of claim 1, wherein one of said two images is a rotated version of the other of said two images.
6. The method of claim 1, wherein more than one image is embedded into said two images.
7. The method of claim 1, wherein said image is embedded into more than said two images.
8. The method of claim 1, wherein said images comprise color images.
9. The method of claim 1, wherein said images comprise black and white images.
10. The method of claim 1, wherein said images comprise multi-bit images.
11. A method of deploying computing infrastructure, comprising integrating computer-readable code into a computing system, wherein the code in combination with the computing system is capable of performing the method of claim 1.
12. A method of extracting an image from two images, comprising:
extracting the image from the two images using a binary operation on each pair of pixels from the two images.
13. The method of claim 12, wherein extracting the image from the two images comprises extracting the image from more than two images.
14. The method of claim 12, wherein extracting the image comprises extracting more than one image from the two images.
15. A method of embedding a color image into two color images comprising:
decomposing the color images into separate images in their color planes;
for each color plane, performing a digital halftoning process on a Cartesian product of pixel value spaces to embed the image into the two images; and
combining the halftone images of the color planes into a single color image.
16. A method of embedding a multi-bit image into two multi-bit images, comprising:
performing a digital halftoning process on a Cartesian product of pixel value spaces to embed the image into the two images.
17. A signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processor, the program comprising:
instructions for performing a digital halftoning process on a Cartesian product of color spaces to embed the image into the two images.
18. A system for embedding an image into two images comprising:
means for providing said image to be embedded into said two images; and
means for performing a digital halftoning process on the Cartesian product of color spaces to embed the image into the two images.
19. A system for embedding an image into two images comprising:
an image input device; and
a digital halftoning device that performs a digital halftoning process on a Cartesian product of color spaces to embed the image received by the image input device into the two images.
20. The system of claim 19, further comprising a gamut mapping device that performs gamut mapping on the image received by the image input device.
US11/751,885 2004-01-16 2007-05-22 Method and system for embedding an image into two other images Abandoned US20080279417A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/751,885 US20080279417A1 (en) 2004-01-16 2007-05-22 Method and system for embedding an image into two other images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/758,536 US7333244B2 (en) 2004-01-16 2004-01-16 Method and system for embedding an image into two other images
US11/751,885 US20080279417A1 (en) 2004-01-16 2007-05-22 Method and system for embedding an image into two other images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/758,536 Continuation US7333244B2 (en) 2004-01-16 2004-01-16 Method and system for embedding an image into two other images

Publications (1)

Publication Number Publication Date
US20080279417A1 true US20080279417A1 (en) 2008-11-13

Family

ID=34749528

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/758,536 Active - Reinstated 2025-12-21 US7333244B2 (en) 2004-01-16 2004-01-16 Method and system for embedding an image into two other images
US11/751,885 Abandoned US20080279417A1 (en) 2004-01-16 2007-05-22 Method and system for embedding an image into two other images

Country Status (1)

Country Link
US (2) US7333244B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070104349A1 (en) * 2005-10-17 2007-05-10 Kddi Corporation Tally image generating method and device, tally image generating program, and confidential image decoding method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7006254B2 (en) * 2001-05-03 2006-02-28 International Business Machines Corporation Method and system for data hiding and authentication via halftoning and coordinate projection
US7965861B2 (en) * 2006-04-26 2011-06-21 The Board Of Regents Of The University Of Texas System Methods and systems for digital image security
US8325969B2 (en) * 2006-04-28 2012-12-04 Hewlett-Packard Development Company, L.P. Methods for making an authenticating system


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5561751A (en) * 1994-12-13 1996-10-01 Microsoft Corporation System and method for displaying a color image using vector error diffusion
US5734753A (en) * 1995-08-31 1998-03-31 Hewlett-Packard Company Partial pixel encoding and decoding method
US5734752A (en) * 1996-09-24 1998-03-31 Xerox Corporation Digital watermarking using stochastic screen patterns
US5790703A (en) * 1997-01-21 1998-08-04 Xerox Corporation Digital watermarking using conjugate halftone screens
US5982992A (en) * 1997-09-05 1999-11-09 Xerox Corporation Error diffusion in color printing where an intra-gamut colorant is available
US7027191B1 (en) * 1998-03-02 2006-04-11 Hewlett-Packard Development Company, L.P. Expanded color space
US6304333B1 (en) * 1998-08-19 2001-10-16 Hewlett-Packard Company Apparatus and method of performing dithering in a simplex in color space
US6483606B1 (en) * 1998-08-26 2002-11-19 Xerox Corporation Error diffusion on moderate numbers of output colors
US6603573B1 (en) * 1998-10-30 2003-08-05 International Business Machines Corporation Constrained digital halftoning
US6426802B1 (en) * 1999-01-19 2002-07-30 Xerox Corporation Complementary halftone screens for highlight printing
US20030117653A1 (en) * 2001-03-09 2003-06-26 Velde Koen Vande Adequate quantisation in multilevel halftoning
US7006254B2 (en) * 2001-05-03 2006-02-28 International Business Machines Corporation Method and system for data hiding and authentication via halftoning and coordinate projection
US20020171853A1 (en) * 2001-05-03 2002-11-21 International Business Machines Corporation Method and system for data hiding and authentication via halftoning and coordinate projection


Also Published As

Publication number Publication date
US20050157906A1 (en) 2005-07-21
US7333244B2 (en) 2008-02-19


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION