WO2000078032A2 - Data encoding and decoding - Google Patents

Data encoding and decoding

Info

Publication number
WO2000078032A2
WO2000078032A2 (PCT/US2000/016099)
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
pixels
encoding
groups
Prior art date
Application number
PCT/US2000/016099
Other languages
French (fr)
Other versions
WO2000078032A3 (en)
Inventor
Joshua R. Smith
Original Assignee
The Escher Group, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Escher Group, Ltd. filed Critical The Escher Group, Ltd.
Priority to EP00939795A (publication EP1203348A2)
Priority to AU54824/00A (publication AU5482400A)
Publication of WO2000078032A2
Publication of WO2000078032A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/005Robust watermarking, e.g. average attack or collusion attack resistant
    • G06T1/0071Robust watermarking, e.g. average attack or collusion attack resistant using multiple or alternating watermarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32352Controlling detectability or arrangements to facilitate detection or retrieval of the embedded information, e.g. using markers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0052Embedding of the watermark in the frequency domain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3225Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document
    • H04N2201/3233Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document of authentication information, e.g. digital signature, watermark
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3269Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of machine readable codes or marks, e.g. bar codes or glyphs
    • H04N2201/327Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of machine readable codes or marks, e.g. bar codes or glyphs which are undetectable to the naked eye, e.g. embedded codes

Definitions

  • The present invention relates generally to techniques for data encoding and decoding, and more specifically, to techniques for encoding a data stream in an image, and for decoding such an encoded data stream.
  • As used herein, the term "image" should be viewed broadly as encompassing various types of data sets that may be used in connection with encoding an input data stream. Such data sets may, e.g., represent random or pseudorandom noise phenomena, brightness intensity values of pixels of a digitized visual image, etc.
  • the "image" that is used in connection with encoding and decoding of an input data stream actually consists of a set of brightness intensity values of pixels of a digitized visual image
  • the present invention is not limited to use solely with this type of image data.
  • Steganographic techniques exist in the prior art for modifying a first image (hereinafter termed a "cover image") so as to generate a second image (hereinafter termed a "stego-image") in which an input data stream (hereinafter termed a "message") has been recoverably encoded.
  • the cover image consists of a set of brightness intensity values of pixels of a digitized visual image, and the data values of this cover image are modulated based upon a set of two-dimensionally-varying carrier functions to generate the stego-image. Each of these functions may have either a value of plus or minus unity.
  • each data bit of the message is assigned a value of either plus or minus unity, depending upon whether the respective data bit is logically true or false, respectively, the respective bit values comprised in the message resulting from this assignment are multiplied by the respective values of the respective carrier functions, and the resulting products are added to respective predetermined sets of pixel brightness intensity values in the cover image to generate the stego-image.
  • the products may first be multiplied by an empirically-determined scaling or gain factor to improve signal to noise ratio of the encoded data.
  • These respective sets of pixel values are hereinafter termed "image regions," and the size of the image regions is hereinafter termed the "carrier size" of the cover image.
  • the image regions tile the entire stego-image.
  • the density of data that can be successfully encoded in the stego-image according to this technique is inversely proportional to the carrier size.
  • the carrier size cannot be made arbitrarily small.
  • the scaling factor and carrier size should be selected so as to ensure that the density of data encoded, and distortion in, the stego-image are both acceptable given the particular application.
  • each of the respective pixel intensities in a respective image region is multiplied by the respective values of the respective associated carrier functions (i.e., the respective carrier functions that were used to generate the respective pixel intensity values in the stego-image) to produce a series of products; these products are then summed to produce a respective summation value for the respective image region. If the respective summation value is negative, the respective message bit value encoded in the respective image region is decoded as a "false" data bit, and vice versa.
  • the values of the carrier functions in the image regions are random or pseudorandom.
  • An advantage of this technique is that, unless the original carrier function is known, it is relatively difficult to decode a message encoded using this technique.
  • a further advantage is that since the encoded data is distributed throughout the spatial frequency spectrum, the distortion in the stego-image resembles grainy "noise” and lacks sharp discontinuities, thereby making detection of such distortion relatively difficult.
  • Another prior art steganographic technique is the so-called "frequency hopping spread spectrum" technique.
  • each message bit value is encoded in the stego-image in accordance with particular spatial frequency bands specified by a pseudo-randomly-generated key.
  • the mathematical operations required to implement this technique are computationally intensive, and depending upon the particular computational device used to implement the technique in a particular application, it may take significantly longer to perform the computations necessary to implement this technique than the computations necessary to implement other steganographic techniques, including other spread spectrum steganographic techniques.
  • the set of carrier functions that is used to encode the message bit values must be explicitly known in order to be able to decode the encoded message from the stego-image.
  • Explicit knowledge of the carrier functions not only permits the encoded message to be decoded from the stego-image, but also permits the stego-image to be reconverted to the cover image from which the stego-image was generated.
  • a first function (e.g., a carrier function) to be used in encoding a message is encoded in a first portion of an image, and
  • the message is encoded, based upon the first function, in a second portion of the image.
  • the first function may describe a bitwise modulation to be applied to the message.
  • the first and second portions of the image may each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
  • the encoding of the message in the image is carried out in such a way that the message may be decoded from the image based, at least in part, upon respective correlations and anti-correlations between pixels in corresponding image regions in the first and second portions of the image, so as to permit the message to be decoded from the image without explicit knowledge of the carrier function or functions used to encode the message in the image, and such that the ability to decode the encoded message in the image does not by itself also grant the ability to generate the original (i.e., cover) image from the image (i.e., stego-image) containing the encoded message.
  • this permits the carrier function or functions to serve essentially as a type of private authentication key that may be held by a certifying authority so as to grant only to the authority the ability to generate apparently authorized stego-images, while permitting others the ability to obtain decoded messages from those stego-images.
  • a cover image is upsampled in one or more dimensions of the first image so as to generate an upsampled image of higher resolution or larger size than the first image.
  • the upsampled image includes a plurality of respective groups of respectively identical pixels in the one or more directions of the one or more dimensions of upsampling of the first image.
  • the message is encoded in the upsampled image.
  • At least one respective pixel in each of the groups of respectively identical pixels is unaltered as a result of the message being encoded in the upsampled image.
  • the respective identical pixels in each respective group of respectively identical pixels may be changed as a result of the encoding of the message in the upsampled image such that, after the encoding of the message in the upsampled image, respective summations of respective intensity values of the respective identical pixels in each respective group of respectively identical pixels are equal to respective intensity values of respective corresponding pixels in the first image.
  • the encoding of the message in the upsampled image may be based, at least in part, upon a bitwise modulation of the message.
  • the technique of the second aspect of the present invention permits a much higher density of message data to be encoded in an image (i.e., the upsampled image) than is possible in the prior art, and also permits substantial reduction in the appreciability of distortion in the upsampled stego-image.
  • the density of message data that may be effectively encoded into an image using the second technique of the present invention may be ten or more times greater than the density of such data that may be encoded into an image according to the prior art.
  • Apparatus and methods are also provided that permit messages to be encoded/decoded in and from, respectively, images in accordance with the principles of the techniques of the present invention.
  • Figure 1 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the first aspect of the present invention for the purpose of generating a stego-image containing an encoded message.
  • Figure 2 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the first aspect of the present invention for the purpose of decoding an encoded message from a stego-image.
  • Figure 3 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the second aspect of the present invention for the purpose of generating a stego-image containing an encoded message.
  • Figure 4 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the second aspect of the present invention for the purpose of decoding an encoded message from a stego-image.
  • Figure 5 is a symbolic illustration of a stego-image generated by the apparatus of Figure 1, and processed by the apparatus of Figure 2.
  • Figure 6 is a symbolic illustration of a stego-image generated by the apparatus of Figure 3, and processed by the apparatus of Figure 4.
  • Figure 1 is a highly schematic diagram illustrating the construction of an apparatus 10 that implements the principles of an embodiment of the first aspect of the present invention for the purpose of generating a stego-image 26 containing an encoding of a data message 28.
  • apparatus 10 includes controller 18.
  • Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 10.
  • When executed by the one or more processors in apparatus 10, the software programs and data structures cause the controller 18 and other elements of apparatus 10 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 10.
  • controller 18 may comprise one or more Intel 80X86-type processors and associated memory.
  • Apparatus 10 also comprises an imaging device 14 that includes a conventional imaging, digital camera, one- or two-dimensional array of photoelements (e.g., charge-coupled photosensing elements, etc.) or scanning system that generates, in response to commands received from the controller 18, a digitized image 16 and supplies image 16 to controller 18.
  • Image 16 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual cover image 12 that is input to the device 14 when it is desired to commence generation of the stego-image 26.
  • Printing device/CRT device 24 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 10, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 10, and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data.
  • Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 10, and to receive from controller 18 indication of receipt and progress of apparatus 10 in executing the input commands.
  • device 24 may include a printing-press or similar printing mechanism controlled by controller 18.
  • When controller 18 receives the digital image 16 from the device 14, controller 18 generates therefrom a digitized version 22 of stego-image 26, in accordance with the principles of one embodiment of the first aspect of the present invention, based upon a set of carrier functions (symbolically referred to by numeral 30), which image 26 contains an encoding of message 28.
  • the respective values of these functions 30 may be either plus or minus unity.
  • the functions 30 and/or message 28 may be stored in memory 20 or may be supplied to the controller 18 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network).
  • the digitized stego-image 22 may be essentially of the same format and may comprise essentially the same types of data as the digitized cover image 16, and when supplied to the device 24 together with appropriate commands from the controller 18, causes device 24 to generate therefrom the visual stego-image 26.
  • FIG. 2 is a highly schematic diagram illustrating the construction of an apparatus 50 that implements the principles of an embodiment of the first aspect of the present invention for the purpose of recovering from the stego-image 26 the digitized cover image 16 and message 28.
  • apparatus 50 includes controller 18.
  • Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 50.
  • When executed by the one or more processors in apparatus 50, the software programs and data structures cause the controller 18 and other elements of apparatus 50 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 50. It will be apparent to those skilled in the art that many types of computer processors and memories may be used in controller 18 of apparatus 50 without departing from the present invention.
  • controller 18 of apparatus 50 may comprise one or more Intel 80X86-type processors and associated memory.
  • Apparatus 50 also comprises an imaging device 14 that may be of the same construction as the device 14 of apparatus 10.
  • Device 14 of apparatus 50 generates, in response to commands received from the controller 18, digitized image 22 from image 108 and supplies image 22 to controller 18.
  • Image 22 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual stego-image 26 that is input to the device 14 when it is desired to commence recovery of the cover image 16 and/or message 28 from the stego-image 26.
  • Printing device/CRT device 24 of Figure 2 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 50, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 50 and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data.
  • Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 50, and to receive from controller 18 indication of receipt and progress of apparatus 50 in executing the input commands.
  • When controller 18 receives the digital image 22 from the imaging device 14, controller 18 recovers therefrom the message 28 encoded in image 22. Additionally, if controller 18 has been supplied with the functions 30 that were used to encode message 28 into stego-image 22, the controller 18 of apparatus 50 may also recover from the image 22 the digital cover image 16. However, as will be described more fully below, it is a feature and advantage of the technique according to the first aspect of the present invention that unless the controller 18 of apparatus 50 is supplied with these functions 30, it would be very difficult, in practical implementation of apparatus 50, for controller 18 to be programmed in such a way as to recover the image 16 from the image 22.
  • the controller 18 of apparatus 50 need not be supplied with the functions 30 in order to be able to recover the message 28 from the image 22.
  • the functions 30 may be stored in memory 20 of apparatus 50 or may be supplied to the controller 18 of apparatus 50 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network).
  • When supplied to the device 24 together with appropriate commands from the controller 18, the image 16 and/or message 28 cause device 24 to generate therefrom the visual cover image 12 and/or a visual (i.e., symbolic written) representation 38 of the message 28, respectively.
  • FIG. 3 is a highly schematic diagram illustrating the construction of an apparatus 100 that implements the principles of an embodiment of the second aspect of the present invention for the purpose of generating a stego-image 108 containing an encoding of a data message 28.
  • apparatus 100 includes controller 18.
  • Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 100.
  • When executed by the one or more processors in apparatus 100, the software programs and data structures cause the controller 18 and other elements of apparatus 100 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 100. It will be apparent to those skilled in the art that many types of computer processors and memories may be used in controller 18 of apparatus 100 without departing from the present invention.
  • controller 18 of apparatus 100 may comprise one or more Intel 80X86- type processors and associated memory.
  • Apparatus 100 also comprises an imaging device 14 that may be of the same construction as the device 14 of apparatus 10.
  • Device 14 of apparatus 100 generates, in response to commands received from the controller 18, a digitized image 16 and supplies image 16 to upsampler 102.
  • Image 16 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual cover image 12 that is input to the device 14 when it is desired to commence generation of the stego-image 108.
  • Although upsampler 102 may be thought of as being a separate logical component of apparatus 100, it should be understood that, in practical implementation of apparatus 100, upsampler 102 may be comprised in controller 18 or imaging device 14.
  • Upsampler 102 generates from image 16 another image 104 that is of higher resolution than image 16 but otherwise is identical to image 16. Upsampler 102 accomplishes this by upsampling the image 16 in one or more dimensional directions (i.e., mutually orthogonal dimensional directions 105, 107) of the image 16. As will be described more fully below, the upsampled image 104 that is generated by upsampler 102 as a result of this process contains groups or clusters of pixels (symbolically shown in Figure 3 and collectively referred to by numeral 109) whose respective brightness values are identical to those of corresponding pixels in image 16 from which the higher resolution image 104 was generated.
  • If upsampler 102 is configured to upsample the image 16 by a factor of two in each of the length and width dimensions 105, 107 to generate the upsampled image 104, then the resulting resolution of image 104 is four times greater than that of image 16, and image 104 is composed of contiguous pixel clusters 109 that each comprise four respective contiguous pixels.
  • the brightness values of the four pixels in each respective cluster are identical to each other, but may vary among pixels of different clusters.
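  • For illustration, the following is a minimal sketch of such an upsampler using NumPy; the factor-of-two upsampling and the function and variable names are illustrative assumptions rather than the patent's literal implementation.

```python
import numpy as np

def upsample(cover, factor=2):
    """Nearest-neighbour upsampling: each cover pixel becomes a factor x factor
    cluster of identical pixels, like the clusters 109 of image 104."""
    return np.repeat(np.repeat(cover, factor, axis=0), factor, axis=1)

cover = np.array([[10.0, 20.0],
                  [30.0, 40.0]])
upsampled = upsample(cover)   # a 4x4 image made of four 2x2 identical clusters
# Averaging each 2x2 cluster (i.e., downsampling) recovers the original image.
```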
  • Printing device/CRT device 24 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 100, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 100, and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data.
  • Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 100, and to receive from controller 18 indication of receipt and progress of apparatus 100 in executing the input commands.
  • When controller 18 receives the upsampled digital image 104 from upsampler 102, controller 18 generates therefrom a digitized version 106 of stego-image 108, in accordance with the principles of one embodiment of the second aspect of the present invention, based upon a set of carrier functions (symbolically referred to by numeral 30), which image 108 contains an encoding of message 28.
  • the functions 30 and/or message 28 may be stored in memory 20 or may be supplied to the controller 18 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network).
  • the digitized stego-image 106 may be essentially of the same format and may comprise essentially the same types of data as the digitized images 16, 104 and when supplied to the device 24 together with appropriate commands from the controller 18, causes device 24 to generate therefrom the visual stego-image 108.
  • FIG 4 is a highly schematic diagram illustrating the construction of an apparatus 200 that implements the principles of an embodiment of the second aspect of the present invention for the purpose of recovering from the stego-image 108 the digitized cover image 16 and/or the message 28 encoded in image 108.
  • apparatus 200 includes controller 18.
  • Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 200.
  • When executed by the one or more processors in apparatus 200, the software programs and data structures cause the controller 18 and other elements of apparatus 200 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 200. It will be apparent to those skilled in the art that many types of computer processors and memories may be used in controller 18 of apparatus 200 without departing from the present invention.
  • controller 18 of apparatus 200 may comprise one or more Intel 80X86-type processors and associated memory.
  • Apparatus 200 also comprises an imaging device 14 that may be of the same construction as the device 14 of apparatus 10.
  • Device 14 of apparatus 200 generates, in response to commands received from the controller 18, digitized image 106 from image 108 and supplies image 106 to controller 18.
  • Image 106 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual stego-image 108 that is input to the device 14 when it is desired to commence recovery of the cover image 16 and/or message 28 from the stego-image 108.
  • Printing device/CRT device 24 of Figure 4 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 200, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 200 and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data.
  • Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 200, and to receive from controller 18 indication of receipt and progress of apparatus 200 in executing the input commands.
  • When controller 18 receives the digital image 106 from the imaging device 14, controller 18 recovers therefrom, using the functions 30, the encoded message 28 and the image 16.
  • the functions 30 may be stored in memory 20 or may be supplied to the controller 18 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network).
  • When supplied to the device 24 together with appropriate commands from the controller 18, the image 16 and message 28 cause device 24 to generate therefrom the visual cover image 12 and a visual (i.e., written symbolic) representation 38 of the message 28, respectively.
  • FIG. 5 is a symbolic illustration of a digitized stego-image 22 generated by the apparatus 10 of Figure 1, and processed by the apparatus 50 of Figure 2.
  • The manner in which the stego-image 22 is generated by the controller 18 of apparatus 10 will now be described.
  • After controller 18 of apparatus 10 receives the digitized cover image 16, controller 18 initially processes the image 16 by logically associating together respective pluralities of mutually contiguous pixels in the image 16 into two respective disjoint sets of image regions (collectively referred to in Figure 5 by the numerals 300 and 302, respectively) of equal size (i.e., each of the sets 300, 302 contains the same number of image regions and each of the image regions contains the same number of pixels); each of the image regions 304, 306, 308, and 310 in the first set 300 is associated with a respective image region 312, 314, 316, and 318 in the second set 302.
  • region 304 is associated with region 312
  • region 306 is associated with region 314
  • region 308 is associated with region 316
  • region 310 is associated with region 318, respectively.
  • each of the image regions 304, 306, 308, 310, 312, 314, 316, and 318 has a size of four pixels. It should be noted that the number and size of the image regions in each set 300, 302 described herein is merely for illustrative purposes and may vary depending upon the number of pixels in the image 16 and the number of data bits in the message 28 to be encoded.
  • the number of image regions in each set 300, 302 should be equal to the number of data bits in the message 28 to be encoded.
  • the message 28 is four bits in length.
  • the locations of the image regions in the image 22 are predetermined and preprogrammed into the controller 18 of apparatus 10.
  • the message 28 is encoded into the image 22 based upon respective correlations and anti-correlations between the respective image regions in the sets 300, 302 that are associated with each other. More specifically, as will be described more fully below, the brightness intensity values of the pixels in each of the image regions of sets 300, 302 are treated as specifying coordinate values of respective vectors, and the data bits of the message 28 are encoded in the image 22 based upon correlations and anti-correlations between these vectors.
  • the intensity values of the pixels in image region 304 are given by variables a, b, c, and d, respectively;
  • the intensity values of the pixels in image region 306 are given by variables i, j, k, and l, respectively;
  • the intensity values of the pixels in image region 308 are given by variables q, r, s, and t, respectively;
  • the intensity values of the pixels in image region 310 are given by variables y, z, aa, and bb, respectively.
  • the intensity values of the pixels in image region 312 are given by variables e, f, g, and h, respectively; the intensity values of the pixels in image region 314 are given by variables m, n, o, and p, respectively; the intensity values of the pixels in image region 316 are given by variables u, v, w, and x, respectively; the intensity values of the pixels in image region 318 are given by variables cc, dd, ee, and ff, respectively.
  • the vectors that may be generated from regions 304, 306, 308, 310, 312, 314, 316, and 318 are <a, b, c, d>, <i, j, k, l>, <q, r, s, t>, <y, z, aa, bb>, <e, f, g, h>, <m, n, o, p>, <u, v, w, x>, and <cc, dd, ee, ff>, respectively.
  • the respective vectors (e.g., vectors <a, b, c, d> and <e, f, g, h>) generated from two associated image regions (e.g., regions 304 and 312) are considered to be correlated if the inner product of the respective vectors (calculated after background or DC components have been filtered out from the pixel brightness values from which the respective vectors are generated) is positive. Conversely, the respective vectors are considered to be anti-correlated if the inner product of the respective vectors (calculated after background or DC components have been filtered out from the pixel brightness values from which the respective vectors are generated) is non-positive.
  • the background or DC component may be filtered out from the pixel brightness values from which a respective vector is generated by subtracting from each component of the respective vector the mean of the respective vector's components.
  • the respective vectors may each also be scaled so as to have the same magnitude.
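  • As a concrete illustration of this correlation test, the sketch below subtracts each region vector's mean (the DC filtering described above) and takes the sign of the inner product; the function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def regions_correlated(region_a, region_b):
    """True if two region vectors are correlated (positive inner product after
    the background/DC component of each is removed), False if anti-correlated."""
    a = np.asarray(region_a, dtype=float)
    b = np.asarray(region_b, dtype=float)
    a -= a.mean()                     # filter out the background/DC component
    b -= b.mean()
    return float(np.dot(a, b)) > 0.0  # positive -> correlated (True message bit)
```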
  • Two associated image regions (e.g., 304 and 312) encode a logically true message data bit if the respective vectors generated from the regions' pixel brightness values are correlated.
  • two associated image regions encode a logically false message data bit if the respective vectors generated from the regions' pixel brightness values are anti-correlated.
  • Controller 18 of apparatus 10 is configured to generate the image 22 so as to encode the message 28 therein based upon such correlations and anti-correlations between respective associated image regions of the image 22. That is, controller 18 generates the image 22 from the image 16 by modulating the pixel brightness intensity values of corresponding associated image regions in image 16 such that the resultant associated image regions in image 22 encode message 28 based upon respective correlations and anti-correlations between respective associated image regions in image 22. Each pair of associated image regions in image 22 encodes a single respective bit of the message 28 based upon whether the respective vectors generated from their pixel brightness values are correlated or anti-correlated with each other.
  • The manner in which controller 18 of apparatus 10 generates the respective pixel brightness intensity values a, b, c, . . . ff will now be described. Controller 18 of apparatus 10 first assigns each data bit of the message 28 a value of either plus or minus unity, depending upon whether the respective data bit is logically true or false, respectively. The respective message data bit values resulting from this assignment are then multiplied by respective values of respective carrier functions 30, to generate respective products.
  • the respective image regions in the image 16 that correspond to regions 304, 306, 308, 310 are then associated with respective products, in accordance with a predetermined association algorithm, and the respective product associated with each respective corresponding image region in image 16 is added to each of the pixel brightness intensity values in that corresponding image region to generate the pixel intensity values in the image regions 304, 306, 308, 310 of the stego-image 22.
  • the products may first be multiplied by an empirically-determined scaling or gain factor for the purpose of improving encoding signal to noise ratio.
  • Controller 18 of apparatus 10 adds the respective values of the respective carrier functions 30 that were used to encode the message 28 in image regions 304, 306, 308, and 310 to each of the pixel brightness intensity values in respective corresponding image regions in image 16 to generate the pixel intensity values in the image regions 312, 314, 316, and 318.
  • the respective data message bit values are encoded in respective correlations and anti-correlations between respective associated image regions in the stego-image 22.
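  • A minimal sketch of this encoding step is given below; the gain factor, the function name, and the representation of regions as NumPy arrays are illustrative assumptions.

```python
import numpy as np

def encode_bit_pair(region_a, region_b, bit, carrier, gain=8.0):
    """Encode one message bit in an associated pair of cover-image regions.

    region_a : pixels of a region from the first set (e.g., region 304)
    region_b : pixels of its associated region from the second set (e.g., 312)
    carrier  : +/-1 carrier values, one per pixel of a region
    The bit value (+1 if True, -1 if False) modulates the carrier added to
    region_a, while the unmodulated carrier is added to region_b, so that the
    bit ends up encoded in the correlation between the two stego-image regions.
    """
    m = 1.0 if bit else -1.0
    stego_a = np.asarray(region_a, dtype=float) + gain * m * carrier
    stego_b = np.asarray(region_b, dtype=float) + gain * carrier
    return stego_a, stego_b
```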
  • By so encoding the message 28 in the image 22, it is possible to decode the message 28 from the image 22 without explicit knowledge of the carrier functions 30; however, without explicit knowledge of the carrier functions 30, it is also relatively difficult to generate from the image 22 the image 16, since, assuming that the decoder is not provided with the cover image 16, it is relatively difficult to discern, without explicit knowledge of the carrier function values 30, the absolute modulations of the pixel brightness intensity values that generated the stego-image 22 from the cover image 16.
  • In order to decode the message 28 from the stego-image 22, the controller 18 of apparatus 50 first analyzes the stego-image 22 to determine the pixel brightness intensity values of the pixels in the image regions 304, 306, 308, 310, 312, 314, 316, and 318. Controller 18 of apparatus 50 is preprogrammed with the locations and carrier sizes of these image regions in stego-image 22, as well as which image regions in the sets 300, 302 are respectively associated with each other, and the predetermined algorithm that was used to associate the message data bits with the image regions.
  • Controller 18 of apparatus 50 determines, based upon this preprogrammed information and the respective correlations and anti-correlations between respective associated image regions, the logical values of the message bits, and assembles these logical values to decode the message 28. If the controller 18 of the apparatus 50 is supplied with the functions 30, the controller may also be programmed to use the knowledge embodied in the functions 30 (i.e., of the absolute modulations of the pixel brightness intensity values that were used to generate the stego-image 22 from the cover image 16) to generate from the image 22 the image 16.
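  • A sketch of this decoding step, under the same illustrative assumptions, is shown below; note that the carrier functions 30 are not needed anywhere in it.

```python
import numpy as np

def decode_message(stego_regions_a, stego_regions_b):
    """Decode one bit per associated region pair from the sign of their
    correlation; no knowledge of the carrier functions is required.

    stego_regions_a : (n_bits, region_size) pixels of the first-set regions
    stego_regions_b : (n_bits, region_size) pixels of the associated second-set
                      regions, in the same order
    """
    a = stego_regions_a - stego_regions_a.mean(axis=1, keepdims=True)  # drop DC
    b = stego_regions_b - stego_regions_b.mean(axis=1, keepdims=True)
    correlation = np.sum(a * b, axis=1)    # inner product for each region pair
    return correlation > 0                 # correlated -> True, else False
```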
  • FIG. 6 is a symbolic illustration of a digitized stego-image 106 generated by the apparatus 100 of Figure 3, and processed by the apparatus 200 of Figure 4. The manner in which the stego-image 106 is generated by the apparatus 100 will now be described.
  • After controller 18 in apparatus 100 receives the image 104 from the upsampler 102, the controller 18 analyzes the image 104 to detect and locate therein the clusters 109 of respectively identical pixels.
  • each of the clusters 109 contains a respective set of four respectively identical contiguous pixels.
  • the controller 18 of apparatus 100 groups the clusters 109 into blocks which correspond to the image regions (symbolically referred to by numerals 400, 402) in the image 106.
  • Each of the blocks contains a predetermined number of the clusters 109 (e.g., the blocks may each contain 100 of the clusters 109, and be in the form of a 10 cluster by 10 cluster square).
  • the number and size of the blocks, and thus also of the corresponding image regions 400, 402 described herein is merely for illustrative purposes and may vary depending upon the number of pixels in the image 104 and the number of data bits in the message 28 to be encoded.
  • the number of blocks and image regions 400, 402 should be equal to the number of data bits in the message 28 to be encoded, since each image region 400, 402 encodes a single respective data bit from the message 28.
  • the assignment of bits of message 28 to be encoded in image regions 400, 402 is in accordance with a predetermined algorithm that may be preprogrammed into controller 18 of apparatus 100.
  • the message 28 is 2 bits in length.
  • the locations of the blocks and corresponding image regions 400, 402 in the image 106 are predetermined and preprogrammed into the controller 18 of apparatus 100.
  • Controller 18 of apparatus 100 assigns to the respective data bit values of the message 28 either plus or minus unity, depending upon whether the respective data bit value undergoing such assignment is logically true or false.
  • Carrier functions 30 in this embodiment of the second aspect of the present invention comprise a plurality of sets of subcarrier functions. Each set of subcarrier functions comprises four respective values; each of these values may be either plus or minus unity, and subject to this constraint, and the further constraint that the values of the subcarrier functions in each set of subcarrier functions sum to zero, the particular values of the subcarrier functions in each such set may be selected randomly or pseudo-randomly. The number of sets of subcarrier functions is equal to the number of the clusters 109 in the regions 400, 402 in the image 106; each of the sets of subcarrier functions is randomly assigned to a respective one of the clusters 109, and each of the respective subcarrier function values in each respective set of subcarrier functions is assigned to a respective pixel in that cluster 109.
  • Controller 18 of apparatus 100 multiplies each of the respective assigned values of the message data bits by the respective subcarrier function values assigned to the pixels of the clusters 109 in the respective image region 400, 402 that encodes that data bit, and the stego-image 106 is generated by controller 18 by adding the respective resulting products to the respective brightness intensity values of those pixels in image 104. Alternatively, prior to this addition, the products may first be multiplied by an empirically-determined scaling or gain factor to improve the signal to noise ratio of the encoding.
  • Alternatively, the carrier functions 30 may be modified such that each set of subcarrier functions contains only three subcarrier values and these values do not modulate one respective pixel 111 in each respective cluster 109; in this arrangement, the one respective pixel in each of the clusters 109 used to encode message data remains unchanged, in all respects, including brightness intensity value, from its condition in image 104.
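  • The cluster-level arithmetic of this encoding can be sketched as follows; the gain factor and the particular subcarrier values are illustrative assumptions (the description above constrains them only to be plus or minus unity and, in the four-value variant, to sum to zero).

```python
import numpy as np

def encode_bit_in_cluster(cluster, bit, subcarrier, gain=4.0):
    """Modulate one cluster 109 of identical pixels with bit * subcarrier.

    cluster    : four identical pixel values from upsampled image 104
    subcarrier : four values in {+1, -1} chosen to sum to zero, so that the
                 cluster's summed intensity is unchanged by the encoding
    """
    m = 1.0 if bit else -1.0
    return np.asarray(cluster, dtype=float) + gain * m * np.asarray(subcarrier)

cluster = np.array([37.0, 37.0, 37.0, 37.0])        # one cluster 109
subcarrier = np.array([+1.0, -1.0, -1.0, +1.0])     # sums to zero
stego_cluster = encode_bit_in_cluster(cluster, True, subcarrier)
assert stego_cluster.sum() == cluster.sum()         # cluster sum is preserved

# Three-value variant: one pixel (pixel 111) is left completely untouched.
subcarrier_3 = np.array([+1.0, -1.0, +1.0, 0.0])    # last pixel not modulated
```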
  • Various conventional schemes may be used to encode the message data into the remaining pixels of the image 106 (i.e., other than pixels 111).
  • the respective brightness intensity values of these respective pixels 111 may be subtracted from the brightness intensity values of the other pixels in each of the clusters 109 comprising the respective pixels 111 prior to attempting to decode the message data from the image 106 in order to achieve substantial immunity to the effects of noise that may exist in the upsampled cover image 104 in decoding the message 28.
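  • The subtraction described above can be sketched as follows; the function name and the choice of which cluster pixel serves as pixel 111 are illustrative assumptions.

```python
import numpy as np

def remove_cover_component(cluster, ref_index=0):
    """Subtract the unaltered reference pixel (pixel 111) from the remaining
    pixels of a cluster, leaving essentially only the encoding modulation
    (plus any noise) before the message bits are decoded."""
    cluster = np.asarray(cluster, dtype=float)
    others = np.delete(cluster, ref_index)
    return others - cluster[ref_index]
```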
  • Controller 18 may be preprogrammed with information such as the respective carrier function values, the associations of these values with the respective pixels whose respective brightness intensity values were modulated therewith to encode the message 28 in image 106, the manner in which the respective encoded bits of the message 28 were assigned to the respective image regions 400, 402, and the locations and sizes of the clusters 109 and regions 400, 402.
  • After controller 18 in apparatus 200 receives the stego-image 106 from device 14, the controller 18 decodes the respective message bits from each respective image region 400, 402 by multiplying each of the respective pixel brightness intensity values in each respective image region by the respective value of the respective associated carrier function (i.e., the respective carrier function value that was used to generate the respective pixel intensity value in the stego-image 106) to produce a series of products; these products are then summed in each respective region 400, 402 to produce a respective summation value for the respective image region 400, 402. If the respective summation value is negative, the respective message bit value encoded in the respective image region is decoded as a "false" data bit, and vice versa.
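  • A minimal sketch of this per-region decoding step is given below; the flattened-array representation of a region and its carrier values is an illustrative assumption.

```python
import numpy as np

def decode_region_bit(region_pixels, region_carrier):
    """Decode one message bit from a region 400, 402 of stego-image 106.

    region_pixels  : intensities of all pixels in the region, flattened
    region_carrier : the +/-1 carrier value that was applied to each pixel
    Because the subcarrier values sum to zero within every cluster 109, the
    (identical) cover pixels of each cluster cancel out of this sum, leaving
    only the message modulation.
    """
    correlation = float(np.sum(np.asarray(region_pixels, dtype=float) *
                               np.asarray(region_carrier, dtype=float)))
    return correlation >= 0   # negative summation -> False bit, otherwise True
```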
  • the controller 18 also utilizes the aforesaid information in conventional techniques to generate from the image 106 the image 104.
  • the controller 18 of apparatus 200 then appropriately downsamples the image 104 to generate therefrom the image 16.
  • the apparatus 10, 100 may generate the image 16 using a computer-executed application program.
  • the apparatus 10, 100 may be configured to upload the images 22, 106 to a local or remote server (e.g., a world wide web internet server) for access by others via a computer network (e.g., the internet).
  • the pixels and/or image regions in the image 22 of Figure 5 may be mutually interleaved among each other and need not be clustered among each other in their respective sets 300, 302. Additional modifications are also possible. Accordingly, the present invention should be viewed quite broadly, and as being defined only as set forth in the hereinafter appended claims.

Abstract

Techniques are disclosed for encoding data in, and decoding encoded data from, images. In an embodiment of a technique according to a first aspect of the invention, a function used to encode data in one region of an image is itself encoded in another region of the image. This technique of the present invention may be used to advantage in image-based watermarking applications.

Description

DATA ENCODING AND DECODING
CROSS-REFERENCE TO RELATED APPLICATION
The subject application claims the priority of commonly-owned, copending U.S. Provisional Patent Application Serial No. 60/139,758, filed June 15, 1999, entitled "Information Hiding." The entirety of said copending application is hereby incorporated herein by reference.
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates generally to techniques for data encoding and decoding, and more specifically, to techniques for encoding a data stream in an image, and for decoding such an encoded data stream. As used herein, the term "image" should be viewed broadly as encompassing various types of data sets that may be used in connection with encoding an input data stream. Such data sets may, e.g., represent random or pseudorandom noise phenomena, brightness intensity values of pixels of a digitized visual image, etc. Thus, although in illustrative embodiments of the present invention, the "image" that is used in connection with encoding and decoding of an input data stream actually consists of a set of brightness intensity values of pixels of a digitized visual image, the present invention is not limited to use solely with this type of image data.
Brief Description of Related Prior Art
Steganographic techniques exist in the prior art for modifying a first image (hereinafter termed a "cover image") so as to generate a second image (hereinafter termed a "stego-image") in which an input data stream (hereinafter termed a "message") has been recoverably encoded. In one conventional steganographic technique, the cover image consists of a set of brightness intensity values of pixels of a digitized visual image, and the data values of this cover image are modulated based upon a set of two-dimensionally-varying carrier functions to generate the stego-image. Each of these functions may have either a value of plus or minus unity. In the modulation scheme used in this technique, each data bit of the message is assigned a value of either plus or minus unity, depending upon whether the respective data bit is logically true or false, respectively, the respective bit values comprised in the message resulting from this assignment are multiplied by the respective values of the respective carrier functions, and the resulting products are added to respective predetermined sets of pixel brightness intensity values in the cover image to generate the stego-image. Alternatively, prior to adding the resulting products to the respective pixel intensity values of the cover image, the products may first be multiplied by an empirically-determined scaling or gain factor to improve signal to noise ratio of the encoded data. These respective sets of pixel values are hereinafter termed "image regions," and the size of the image regions is hereinafter termed the "carrier size" of the cover image. By so distributing multiple respective modulations of respective message bit values among the pixel intensity values in respective image regions, it becomes possible to improve message data encoding redundancy, and thereby, to improve the effective signal to noise ratio of the encoded message. In this prior art technique, the image regions tile the entire stego-image. Thus, the density of data that can be successfully encoded in the stego-image according to this technique is inversely proportional to the carrier size. In order to ensure that the encoded message has an acceptable signal to noise ratio, however, the carrier size cannot be made arbitrarily small.
By sufficiently increasing the magnitude of the scaling factor, it is possible to increase the signal to noise ratio of the encoded message in the stego-image. However, if the magnitude of the scaling factor is made too large, then the distortion (i.e., the modification that is made to the cover image to encode the message therein) reflected in the stego-image may become readily ascertainable from casual analysis of the stego-image. This is undesirable since, ideally, such distortion should be very difficult to detect, in order to ensure maximal security for the encoded message and, if the cover image comprises pixel intensity values of a digitized visual image, to permit the stego-image to have maximal esthetic value. Therefore, in practice, the scaling factor and carrier size should be selected so as to ensure that the density of data encoded, and distortion in, the stego-image are both acceptable given the particular application.
In this prior art technique, in order to decode a respective message bit value encoded in the stego-image, each of the respective pixel intensities in a respective image region is multiplied by the respective values of the respective associated carrier functions (i.e., the respective carrier functions that were used to generate the respective pixel intensity values in the stego-image) to produce a series of products; these products are then summed to produce a respective summation value for the respective image region. If the respective summation value is negative, the respective message bit value encoded in the respective image region is decoded as a "false" data bit, and vice versa. In one version of the aforesaid conventional steganographic technique, the so-called "direct sequence spread spectrum technique," the values of the carrier functions in the image regions are random or pseudorandom. An advantage of this technique is that, unless the original carrier function is known, it is relatively difficult to decode a message encoded using this technique. A further advantage is that since the encoded data is distributed throughout the spatial frequency spectrum, the distortion in the stego-image resembles grainy "noise" and lacks sharp discontinuities, thereby making detection of such distortion relatively difficult.
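For concreteness, the following is a minimal sketch of this prior-art encode/decode cycle using NumPy. The gain, the region size, and the toy cover image (which is constant within each region so that the example decodes exactly) are illustrative assumptions, and the mean-subtraction step in the decoder is an implementation detail rather than part of the literal description above.

```python
import numpy as np

def dsss_encode(cover_regions, bits, carriers, gain=4.0):
    """Add gain * (+/-1 message bit value) * (+/-1 carrier) to each region's pixels."""
    m = np.where(np.asarray(bits, dtype=bool), 1.0, -1.0)
    return cover_regions + gain * m[:, None] * carriers

def dsss_decode(stego_regions, carriers):
    """Multiply each region by its carrier and sum; the sign of the sum gives the bit."""
    centered = stego_regions - stego_regions.mean(axis=1, keepdims=True)  # drop DC
    return np.sum(centered * carriers, axis=1) > 0

rng = np.random.default_rng(0)
cover_regions = np.repeat(rng.integers(0, 256, size=(4, 1)), 64, axis=1).astype(float)
carriers = rng.choice([-1.0, 1.0], size=(4, 64))     # pseudorandom +/-1 carriers
bits = [True, False, False, True]
stego_regions = dsss_encode(cover_regions, bits, carriers)
assert list(dsss_decode(stego_regions, carriers)) == bits
```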
Another prior art steganographic technique is the so-called "frequency hopping spread spectrum" technique. In this technique, each message bit value is encoded in the stego-image in accordance with particular spatial frequency bands specified by a pseudo-randomly-generated key. Unfortunately, the mathematical operations required to implement this technique are computationally intensive, and depending upon the particular computational device used to implement the technique in a particular application, it may take significantly longer to perform the computations necessary to implement this technique than the computations necessary to implement other steganographic techniques, including other spread spectrum steganographic techniques.
In each of the aforesaid prior art steganographic techniques, the set of carrier functions that is used to encode the message bit values must be explicitly known in order to be able to decode the encoded message from the stego-image. Explicit knowledge of the carrier functions, however, not only permits the encoded message to be decoded from the stego-image, but also permits the stego-image to be reconverted to the cover image from which the stego-image was generated. This is unfortunate, since it would be desirable in certain applications (e.g., applications in which the encoded message in the stego-image serves a "watermarking" role) to provide a steganographic technique wherein it is not necessary to explicitly know the carrier functions in order to be able to decode the encoded message from the stego-image, and also wherein the ability to decode the encoded message does not by itself also grant the ability to generate the cover image from the stego-image, in order to permit the carrier functions to serve essentially as a type of private (i.e., secret) authentication key to be held by a certifying authority (e.g., copyright owner of cover image, government agency, financial institution, etc.). This would be desirable in these applications since this would effectively grant only the certifying authority the ability to generate the cover image from the stego-image, and thereby only permit the authority the ability to generate apparently authorized stego-images, while permitting others the ability to obtain the decoded messages from those stego-images. It would also be desirable to increase the density of the message data that can be encoded in a stego-image, while reducing the degree to which distortion in the stego-image is readily appreciable.
Other examples of prior art steganographic techniques are disclosed in, e.g., Smith and Comiskey, "Modulation and Information Hiding In Images," Proceedings of the First Information Hiding Workshop, Isaac Newton Institute, Cambridge, U.K., May 1996, Springer-Verlag Lecture Notes in Computer Science Volume 1174. Unfortunately, each of these other examples of prior art steganographic techniques suffers from the aforesaid and/or other disadvantages and drawbacks.
SUMMARY OF THE INVENTION
In accordance with the present invention, steganographic techniques are provided that are able to overcome the aforesaid and other disadvantages and drawbacks of the prior art. In one embodiment of a technique in accordance with a first aspect of the present invention, a first function (e.g., a carrier function) to be used in encoding a message is encoded in a first portion of an image, and the message is encoded, based upon the first function, in a second portion of the image. The first function may describe a bitwise modulation to be applied to the message. The first and second portions of the image may each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
In the technique of this first aspect of the present invention, the encoding of the message in the image is carried out in such a way that the message may be decoded from the image based, at least in part, upon respective correlations and anti-correlations between pixels in corresponding image regions in the first and second portions of the image, so as to permit the message to be decoded from the image without explicit knowledge of the carrier function or functions used to encode the message in the image, and such that the ability to decode the encoded message in the image does not by itself also grant the ability to generate the original (i.e., cover) image from the image (i.e., stego-image) containing the encoded message. Advantageously, this permits the carrier function or functions to serve essentially as a type of private authentication key that may be held by a certifying authority so as to grant only to the authority the ability to generate apparently authorized stego-images, while permitting others the ability to obtain decoded messages from those stego-images.
In a technique according to a second aspect of the present invention, a first image (e.g., a cover image) is upsampled in one or more dimensions so as to generate an upsampled image of higher resolution or larger size than the first image. The upsampled image includes a plurality of respective groups of respectively identical pixels in the one or more directions of the dimensions in which the first image was upsampled. The message is encoded in the upsampled image.
In an embodiment of the technique of the second aspect of the present invention, at least one respective pixel in each of the groups of respectively identical pixels is unaltered as a result of the message being encoded in the upsampled image. Alternatively, the respective identical pixels in each respective group of respectively identical pixels may be changed as a result of the encoding of the message in the upsampled image such that, after the encoding of the message in the upsampled image, respective summations of respective intensity values of the respective identical pixels in each respective group of respectively identical pixels are equal to respective intensity values of respective corresponding pixels in the first image. In either of these two embodiments of the technique according to the second aspect of the present invention, the encoding of the message in the upsampled image may be based, at least in part, upon a bitwise modulation of the message.
Advantageously, it has been found that the technique of the second aspect of the present invention permits a much higher density of message data to be encoded in an image (i.e., the upsampled image) than is possible in the prior art, and also permits substantial reduction in the appreciability of distortion in the upsampled stego-image. Indeed, empirical results indicate that the density of message data that may be effectively encoded into an image using the second technique of the present invention may be ten or more times greater than the density of such data that may be encoded into an image according to the prior art. Apparatus and methods are also provided that permit messages to be encoded/decoded in and from, respectively, images in accordance with the principles of the techniques of the present invention. These and other features and advantages of the present invention will become apparent as the following Detailed Description proceeds and upon reference to the drawings, in which like numerals depict like parts, and wherein:
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the first aspect of the present invention for the purpose of generating a stego-image containing an encoded message.
Figure 2 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the first aspect of the present invention for the purpose of decoding an encoded message from a stego-image.
Figure 3 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the second aspect of the present invention for the purpose of generating a stego-image containing an encoded message.
Figure 4 is a highly schematic diagram illustrating the construction of an apparatus that implements the principles of an embodiment of a technique according to the second aspect of the present invention for the purpose of decoding an encoded message from a stego-image.
Figure 5 is a symbolic illustration of a stego-image generated by the apparatus of Figure 1, and processed by the apparatus of Figure 2.
Figure 6 is a symbolic illustration of a stego-image generated by the apparatus of Figure 3, and processed by the apparatus of Figure 4.
Although the following Detailed Description will proceed with reference being made to illustrative embodiments and methods of use, it should be understood that the present invention is not limited to these illustrative embodiments and methods of use. Instead, the present invention should be viewed broadly, as being defined only as set forth in the hereinafter appended claims.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
With reference being made to Figures 1-6, illustrative embodiments of techniques of the first and second aspects of the present invention will now be described. Figure 1 is a highly schematic diagram illustrating the construction of an apparatus 10 that implements the principles of an embodiment of the first aspect of the present invention for the purpose of generating a stego-image 26 containing an encoding of a data message 28.
As shown in Figure 1, apparatus 10 includes controller 18. Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 10. When executed by the one or more processors in apparatus 10, the software programs and data structures cause the controller 18 and other elements of apparatus 10 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 10. It will be apparent to those skilled in the art that many types of computer processors and memories may be used in controller 18 without departing from the present invention. For example, controller 18 may comprise one or more Intel 80X86-type processors and associated memory.
Apparatus 10 also comprises an imaging device 14 that includes a conventional imaging system, digital camera, one- or two-dimensional array of photoelements (e.g., charge-coupled photosensing elements, etc.), or scanning system that generates, in response to commands received from the controller 18, a digitized image 16 and supplies image 16 to controller 18. Image 16 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual cover image 12 that is input to the device 14 when it is desired to commence generation of the stego-image 26.
Printing device/CRT device 24 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 10, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 10, and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data. Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 10, and to receive from controller 18 indication of receipt and progress of apparatus 10 in executing the input commands. Alternatively, or in addition thereto, device 24 may include a printing-press or similar printing mechanism controlled by controller 18.
As will be described more fully below, in apparatus 10, when controller 18 receives the digital image 16 from the device 14, controller 18 generates therefrom a digitized version 22 of stego-image 26, in accordance with the principles of one embodiment of the first aspect of the present invention, based upon a set of carrier functions (symbolically referred to by numeral 30), which image 26 contains an encoding of message 28. As is true in this and the other apparatus shown in Figures 2-4, the respective values of these functions 30 may be either plus or minus unity. The functions 30 and/or message 28 may be stored in memory 20 or may be supplied to the controller 18 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network). The digitized stego-image 22 may be essentially of the same format and may comprise essentially the same types of data as the digitized cover image 16, and when supplied to the device 24 together with appropriate commands from the controller 18, causes device 24 to generate therefrom the visual stego-image 26.
Figure 2 is a highly schematic diagram illustrating the construction of an apparatus 50 that implements the principles of an embodiment of the first aspect of the present invention for the purpose of recovering from the stego-image 26 the digitized cover image 16 and message 28. As shown in Figure 2, apparatus 50 includes controller 18. Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 50. When executed by the one or more processors in apparatus 50, the software programs and data structures cause the controller 18 and other elements of apparatus 50 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 50. It will be apparent to those skilled in the art that many types of computer processors and memories may be used in controller 18 of apparatus 50 without departing from the present invention. For example, controller 18 of apparatus 50 may comprise one or more Intel 80X86-type processors and associated memory.
Apparatus 50 also comprises an imaging device 14 that may be of the same construction as the device 14 of apparatus 10. Device 14 of apparatus 50 generates, in response to commands received from the controller 18, digitized image 22 from image 26 and supplies image 22 to controller 18. Image 22 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual stego-image 26 that is input to the device 14 when it is desired to commence recovery of the cover image 16 and/or message 28 from the stego-image 26.
Printing device/CRT device 24 of Figure 2 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 50, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 50 and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data. Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 50, and to receive from controller 18 indication of receipt and progress of apparatus 50 in executing the input commands.
As will be described more fully below, in apparatus 50, when controller 18 receives the digital image 22 from the imaging device 14, controller 18 recovers therefrom the message 28 encoded in image 22. Additionally, if controller 18 has been supplied with the functions 30 that were used to encode message 28 into stego-image 22, the controller 18 of apparatus 50 may also recover from the image 22 the digital cover image 16. However, as will be described more fully below, it is a feature and advantage of the technique according to the first aspect of the present invention that unless the controller 18 of apparatus 50 is supplied with these functions 30, it would be very difficult, in practical implementation of apparatus 50, for controller 18 to be programmed in such a way as to recover the image 16 from the image 22. However, it is also a feature and advantage of the first aspect of the present invention that the controller 18 of apparatus 50 need not be supplied with the functions 30 in order to be able to recover the message 28 from the image 22. If controller 18 is given the functions 30, they may be stored in memory 20 of apparatus 50 or may be supplied to the controller 18 of apparatus 50 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network). When supplied to the device 24 together with appropriate commands from the controller 18 of apparatus 50, the image 16 and/or message 28 cause device 24 to generate therefrom the visual cover image 12 and/or a visual (i.e., symbolic written) representation 38 of the message 28, respectively.
Figure 3 is a highly schematic diagram illustrating the construction of an apparatus 100 that implements the principles of an embodiment of the second aspect of the present invention for the purpose of generating a stego-image 108 containing an encoding of a data message 28. As shown in Figure 3, apparatus 100 includes controller 18. Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 100. When executed by the one or more processors in apparatus 100, the software programs and data structures cause the controller 18 and other elements of apparatus 100 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 100. It will be apparent to those skilled in the art that many types of computer processors and memories may be used in controller 18 of apparatus 100 without departing from the present invention. For example, controller 18 of apparatus 100 may comprise one or more Intel 80X86-type processors and associated memory.
Apparatus 100 also comprises an imaging device 14 that may be of the same construction as the device 14 of apparatus 10. Device 14 of apparatus 100 generates, in response to commands received from the controller 18, a digitized image 16 and supplies image 16 to upsampler 102. Image 16 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual cover image 12 that is input to the device 14 when it is desired to commence generation of the stego-image 108. Although upsampler 102 may be thought of as being a separate logical component of apparatus 100, it should be understood that, in practical implementation of apparatus 100, upsampler 102 may be comprised in controller 18 or imaging device 14. Upsampler 102 generates from image 16 another image 104 that is of higher resolution than image 16 but otherwise is identical to image 16. Upsampler 102 accomplishes this by upsampling the image 16 in one or more dimensional directions (i.e., mutually orthogonal dimensional directions 105, 107) of the image 16. As will be described more fully below, the upsampled image 104 that is generated by upsampler 102 as a result of this process contains groups or clusters of pixels (symbolically shown in Figure 3 and collectively referred to by numeral 109) whose respective brightness values are identical to those of corresponding pixels in image 16 from which the higher resolution image 104 was generated. For example, if, as is true in apparatus 100, upsampler 102 is configured to upsample the image 16 by a factor of two in each of the length and width dimensions 105, 107 to generate the upsampled image 104, then the resulting resolution of image 104 is four times greater than that of image 16, and image 104 is composed of contiguous pixel clusters 109 that each comprise four respective contiguous pixels. The brightness values of the four pixels in each respective cluster are identical to each other, but may vary among pixels of different clusters.
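For illustration only, the following sketch shows one way the factor-of-two upsampling described above might be carried out; the use of NumPy and the function name are assumptions, not a specification of upsampler 102.

```python
import numpy as np

def upsample_by_two(cover):
    """Upsample a 2-D cover image by a factor of two in each of its length
    and width dimensions, so every cover pixel becomes a 2x2 cluster of
    identical pixels (as in clusters 109)."""
    return np.repeat(np.repeat(cover, 2, axis=0), 2, axis=1)

cover = np.array([[100.0, 150.0],
                  [200.0, 250.0]])
print(upsample_by_two(cover))
# [[100. 100. 150. 150.]
#  [100. 100. 150. 150.]
#  [200. 200. 250. 250.]
#  [200. 200. 250. 250.]]
```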
Printing device/CRT device 24 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 100, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 100, and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data. Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 100, and to receive from controller 18 indication of receipt and progress of apparatus 100 in executing the input commands.
As will be described more fully below, in apparatus 100, when controller 18 receives the upsampled digital image 104 from upsampler 102, controller 18 generates therefrom a digitized version 106 of stego-image 108, in accordance with the principles of one embodiment of the second aspect of the present invention, based upon a set of carrier functions (symbolically referred to by numeral 30), which image 108 contains an encoding of message 28. The functions 30 and/or message 28 may be stored in memory 20 or may be supplied to the controller 18 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network). The digitized stego-image 106 may be essentially of the same format and may comprise essentially the same types of data as the digitized images 16, 104, and when supplied to the device 24 together with appropriate commands from the controller 18, causes device 24 to generate therefrom the visual stego-image 108.
Figure 4 is a highly schematic diagram illustrating the construction of an apparatus 200 that implements the principles of an embodiment of the second aspect of the present invention for the purpose of recovering from the stego-image 108 the digitized cover image 16 and/or the message 28 encoded in image 108. As shown in Figure 4, apparatus 200 includes controller 18. Controller 18 includes computer-readable memory 20 (e.g., comprising random access, read-only, and/or mass storage memory) for storing software programs and associated data structures for execution by one or more processors also comprised in controller 18 and/or other elements of apparatus 200. When executed by the one or more processors in apparatus 200, the software programs and data structures cause the controller 18 and other elements of apparatus 200 to carry out and/or implement the techniques, functions, and operations described herein as being carried out and/or implemented by controller 18 and other elements of apparatus 200. It will be apparent to those skilled in the art that many types of computer processors and memories may be used in controller 18 of apparatus 200 without departing from the present invention. For example, controller 18 of apparatus 200 may comprise one or more Intel 80X86-type processors and associated memory.
Apparatus 200 also comprises an imaging device 14 that may be of the same construction as the device 14 of apparatus 10. Device 14 of apparatus 200 generates, in response to commands received from the controller 18, digitized image 106 from image 108 and supplies image 106 to controller 18. Image 106 comprises a set of values that represent pixel brightnesses along the two-dimensional surface of a physical, visual stego-image 108 that is input to the device 14 when it is desired to commence recovery of the cover image 16 and/or message 28 from the stego-image 108.
Printing device/CRT device 24 of Figure 4 comprises a conventional mechanism for interfacing a human user (not shown) to the controller 18 so as to permit the user to control and monitor operation of apparatus 200, for providing the user physical (i.e., hardcopy printed output) representations of visual images and/or other data generated by the apparatus 200 and for providing the user with a display terminal for displaying visual depictions or representations of such images and/or other data. Device 24 may include, for example, one or more conventional computer-user interface devices, such as pointing and keyboard input devices, and a display output device which together permit the human user to input commands to controller 18 to be performed by apparatus 200, and to receive from controller 18 indication of receipt and progress of apparatus 200 in executing the input commands.
As will be described more fully below, in apparatus 200, when controller 18 receives the digital image 106 from the imaging device 14, controller 18 recovers therefrom, using the functions 30, the encoded message 28 and the image 16. The functions 30 may be stored in memory 20 or may be supplied to the controller 18 from a source external thereto (e.g., a not shown certifying authority connected to the controller 18 via a communications network). When supplied to the device 24 together with appropriate commands from the controller 18, the image 16 and message 28 cause device 24 to generate therefrom the visual cover image 12 and a visual (i.e., written symbolic) representation 38 of the message 28, respectively.
Figure 5 is a symbolic illustration of a digitized stego-image 22 generated by the apparatus 10 of Figure 1, and processed by the apparatus 50 of Figure 2. The manner in which the stego-image 22 is generated by the controller 18 of apparatus 10 will now be described.
After controller 18 of apparatus 10 receives the digitized cover image 16, controller 18 initially processes the image 16 by logically associating together respective pluralities of mutually-contiguous pixels in the image 16 into two respective disjoint sets of image regions (collectively referred to in Figure 5 by the numerals 300 and 302, respectively) of equal size (i.e., each of the sets 300, 302 contains the same number of image regions and each of the image regions contains the same number of pixels); each of the image regions 304, 306, 308, and 310 in the first set 300 is associated with a respective image region 312, 314, 316, and 318 in the second set 302. Thus, region 304 is associated with region 312, region 306 is associated with region 314, region 308 is associated with region 316, and region 310 is associated with region 318, respectively. In the illustrative embodiment of the first aspect of the present invention that is implemented by apparatus 10, each of the image regions 304, 306, 308, 310, 312, 314, 316, and 318 has a size of four pixels. It should be noted that the number and size of the image regions in each set 300, 302 described herein are merely for illustrative purposes and may vary depending upon the number of pixels in the image 16 and the number of data bits in the message 28 to be encoded. However, the number of image regions in each set 300, 302 should be equal to the number of data bits in the message 28 to be encoded. Thus, in this illustrative embodiment, the message 28 is four bits in length. The locations of the image regions in the image 22 are predetermined and preprogrammed into the controller 18 of apparatus 10.
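For illustration only, the following sketch shows one possible way such paired region locations might be preprogrammed; the left-half/right-half layout, the 2x2 region size, and the function name are assumptions, since the patent leaves the predetermined locations unspecified.

```python
def paired_region_indices(height, width, region_h=2, region_w=2):
    """Partition an image into a left half (set 300) and a right half
    (set 302) of equal-size rectangular regions; the i-th region of each
    half forms the pair that will jointly encode the i-th message bit."""
    half = width // 2
    set_300, set_302 = [], []
    for r in range(0, height, region_h):
        for c in range(0, half, region_w):
            set_300.append((slice(r, r + region_h), slice(c, c + region_w)))
            set_302.append((slice(r, r + region_h),
                            slice(half + c, half + c + region_w)))
    return set_300, set_302

left, right = paired_region_indices(4, 8)
print(len(left), len(right))   # 4 paired regions -> room for a 4-bit message
```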
In accordance with the first aspect of the present invention, the message 28 is encoded into the image 22 based upon respective correlations and anti-correlations between the respective image regions in the sets 300, 302 that are associated with each other. More specifically, as will be described more fully below, the brightness intensity values of the pixels in each of the image regions of sets 300, 302 are treated as specifying coordinate values of respective vectors, and the data bits of the message 28 are encoded in the image 22 based upon correlations and anti-correlations between these vectors.
For example, for purposes of illustration, it is assumed that the intensity values of the pixels in image region 304 are given by variables a, b, c, and d, respectively; the intensity values of the pixels in image region 306 are given by variables i, j, k, and l, respectively; the intensity values of the pixels in image region 308 are given by variables q, r, s, and t, respectively; the intensity values of the pixels in image region 310 are given by variables y, z, aa, and bb, respectively. Also, for purposes of illustration, it is assumed that the intensity values of the pixels in image region 312 are given by variables e, f, g, and h, respectively; the intensity values of the pixels in image region 314 are given by variables m, n, o, and p, respectively; the intensity values of the pixels in image region 316 are given by variables u, v, w, and x, respectively; the intensity values of the pixels in image region 318 are given by variables cc, dd, ee, and ff, respectively. The vectors that may be generated from regions 304, 306, 308, 310, 312, 314, 316, and 318 are <a, b, c, d>, <i, j, k, l>, <q, r, s, t>, <y, z, aa, bb>, <e, f, g, h>, <m, n, o, p>, <u, v, w, x>, and <cc, dd, ee, ff>, respectively.
In accordance with this embodiment of the technique of the first aspect of the present invention, the respective vectors (e.g., vectors <a, b, c, d> and <e, f, g, h>) generated from two associated image regions (e.g., regions 304 and 312) are considered to be correlated if the inner product of the respective vectors (calculated after background or DC components have been filtered out from the pixel brightness values from which the respective vectors are generated) is positive. Conversely, the respective vectors are considered to be anti-correlated if the inner product of the respective vectors (calculated after background or DC components have been filtered out from the pixel brightness values from which the respective vectors are generated) is non-positive. The background or DC component may be filtered out from the pixel brightness values from which a respective vector is generated by subtracting from the respective vector the mean of the respective vector (i.e., the average of its coordinate values). The respective vectors may each also be scaled so as to have the same magnitude. Two associated image regions (e.g., 304 and 312) encode a logically true message data bit if the respective vectors generated from the regions' pixel brightness values are correlated. Conversely, two associated image regions encode a logically false message data bit if the respective vectors generated from the regions' pixel brightness values are anti-correlated.
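The correlation test just described amounts to removing each region's mean and checking the sign of the inner product of the resulting vectors. A minimal sketch under that reading (the function name and example pixel values are assumptions):

```python
import numpy as np

def regions_encode_true(region_a, region_b):
    """Return True when the two associated regions are correlated: their
    mean-removed (DC-filtered) pixel vectors have a positive inner
    product, which encodes a logically true message bit."""
    a = region_a.astype(float).ravel()
    b = region_b.astype(float).ravel()
    a -= a.mean()   # filter out the background / DC component
    b -= b.mean()
    return float(np.dot(a, b)) > 0.0

# Hypothetical 4-pixel regions, e.g. <a, b, c, d> and <e, f, g, h>.
print(regions_encode_true(np.array([10, 20, 10, 20]),
                          np.array([30, 40, 30, 40])))   # correlated -> True
print(regions_encode_true(np.array([10, 20, 10, 20]),
                          np.array([40, 30, 40, 30])))   # anti-correlated -> False
```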
Controller 18 of apparatus 10 is configured to generate the image 22 so as to encode the message 28 therein based upon such correlations and anti-correlations between respective associated image regions of the image 22. That is, controller 18 generates the image 22 from the image 16 by modulating the pixel brightness intensity values of corresponding associated image regions in image 16 such that the resultant associated image regions in image 22 encode message 28 based upon respective correlations and anti-correlations between respective associated image regions in image 22. Each pair of associated image regions in image 22 encodes a single respective bit of the message 28 based upon whether the respective vectors generated from their pixel brightness values are correlated or anti-correlated with each other. It is important to note that, as stated previously, this technique of the present invention is in stark contrast to the prior art, wherein correlations and anti-correlations are made with reference to explicitly known and externally available (i.e., outside of the stego-image) carrier functions.
The manner in which controller 18 of apparatus 10 generates the respective pixel brightness intensity values a, b, c, . . . ff will now be described. Controller 18 of apparatus 10 first assigns each data bit of the message 28 a value of either plus or minus unity, depending upon whether the respective data bit is logically true or false, respectively. The respective message data bit values resulting from this assignment are then multiplied by respective values of respective carrier functions 30, to generate respective products. The respective image regions in the image 16 that correspond to regions 304, 306, 308, 310 are then associated with respective products, in accordance with a predetermined association algorithm, and the respective product associated with each respective corresponding image region in image 16 is added to each of the pixel brightness intensity values in that corresponding image region to generate the pixel intensity values in the image regions 304, 306, 308, 310 of the stego-image 22. Alternatively, prior to adding the resulting products to the respective pixel intensity values of the cover image, the products may first be multiplied by an empirically-determined scaling or gain factor for the purpose of improving the encoding signal-to-noise ratio.
Controller 18 of apparatus 10 adds the respective values of the respective carrier functions 30 that were used to encode the message 28 in image regions 304, 306, 308, and 310 to each of the pixel brightness intensity values in respective corresponding image regions in image 16 to generate the pixel intensity values in the image regions 312, 314, 316, and 318.
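Taken together, the two preceding paragraphs suggest the following per-bit encoding step, sketched here for a single pair of associated four-pixel regions; the gain factor, carrier values, and function name are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def encode_bit_pair(cover_region_a, cover_region_b, bit, carrier, gain=1.0):
    """Encode one message bit in a pair of associated regions: the set-300
    region receives the bit-modulated carrier, while its set-302 partner
    receives the carrier itself."""
    m = 1.0 if bit else -1.0
    stego_a = cover_region_a.astype(float) + gain * m * carrier   # e.g. region 304
    stego_b = cover_region_b.astype(float) + gain * carrier       # e.g. region 312
    return stego_a, stego_b

carrier = np.array([+1.0, -1.0, -1.0, +1.0])   # carrier values of plus or minus unity
a, b = encode_bit_pair(np.array([100.0, 101.0, 99.0, 100.0]),
                       np.array([60.0, 61.0, 59.0, 60.0]),
                       bit=True, carrier=carrier, gain=2.0)
print(a)   # [102.  99.  97. 102.]
print(b)   # [ 62.  59.  57.  62.]
```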
By generating the stego-image 22 in this manner, the respective data message bit values are encoded in respective correlations and anti-correlations between respective associated image regions in the stego-image 22. Advantageously, by so encoding the message 28 in the image 22, it is possible to decode the message 28 from the image 22 without explicit knowledge of the carrier functions 30; however, without explicit knowledge of the carrier functions 30, it is also relatively difficult to generate from the image 22 the image 16, since, assuming that the decoder is not provided with the cover image 16, it is relatively difficult to discern, without explicit knowledge of the carrier function values 30, the absolute modulations of the pixel brightness intensity values that generated the stego-image 22 from the cover image 16.
In order to decode the message 28 from the stego-image 22, the controller 18 of apparatus 50 first analyzes the stego-image 22 to determine the pixel brightness intensity values of the pixels in the image regions 304, 306, 308, 310, 312, 314, 316, and 318. Controller 18 of apparatus 50 is preprogrammed with the locations and sizes of these image regions in stego-image 22, as well as which image regions in the sets 300, 302 are respectively associated with each other, and the predetermined algorithm that was used to associate the message data bits with the image regions. Controller 18 of apparatus 50 then determines, based upon this preprogrammed information and the respective correlations and anti-correlations between respective associated image regions, the logical values of the message bits, and assembles these logical values to decode the message 28. If the controller 18 of the apparatus 50 is supplied with the functions 30, the controller may also be programmed to use the knowledge embodied in the functions 30 (i.e., of the absolute modulations of the pixel brightness intensity values that were used to generate the stego-image 22 from the cover image 16) to generate from the image 22 the image 16.
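A corresponding decoder needs only the region locations and pairings, not the carrier functions 30. The sketch below (again illustrative; the slice-based region representation and function name are assumptions) recovers the message bits by the correlation test.

```python
import numpy as np

def decode_message(stego, pairs):
    """Recover the message from a stego-image given only the paired region
    locations: each bit is True when the mean-removed pixel vectors of the
    paired regions are correlated (positive inner product)."""
    bits = []
    for idx_a, idx_b in pairs:
        a = stego[idx_a].astype(float).ravel()
        b = stego[idx_b].astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        bits.append(bool(np.dot(a, b) > 0.0))
    return bits

# Illustrative 2x4 stego-image holding one bit in one pair of 2x2 regions.
stego = np.array([[101.0, 100.0, 61.0, 60.0],
                  [ 98.0, 101.0, 58.0, 61.0]])
pairs = [((slice(0, 2), slice(0, 2)), (slice(0, 2), slice(2, 4)))]
print(decode_message(stego, pairs))   # [True]
```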
Figure 6 is a symbolic illustration of a digitized stego-image 106 generated by the apparatus 100 of Figure 3, and processed by the apparatus 200 of Figure 4. The manner in which the stego-image 106 is generated by the apparatus 100 will now be described.
After controller 18 in apparatus 100 receives the image 104 from the upsampler 102, the controller 18 analyzes the image 104 to detect and locate therein the clusters 109 of respectively identical pixels. As stated previously, each of the clusters 109 contains a respective set of four respectively identical contiguous pixels. After detecting and locating the clusters 109, the controller 18 of apparatus 100 groups the clusters 109 into blocks which correspond to the image regions (symbolically referred to by numerals 400, 402) in the image 106. Each of the blocks contains a predetermined number of the clusters 109 (e.g., the blocks may each contain 100 of the clusters 109, and be in the form of a 10 cluster by 10 cluster square). It should be noted that the number and size of the blocks, and thus also of the corresponding image regions 400, 402, described herein are merely for illustrative purposes and may vary depending upon the number of pixels in the image 104 and the number of data bits in the message 28 to be encoded. However, the number of blocks and image regions 400, 402 should be equal to the number of data bits in the message 28 to be encoded, since each image region 400, 402 encodes a single respective data bit from the message 28. The assignment of bits of message 28 to be encoded in image regions 400, 402 is in accordance with a predetermined algorithm that may be preprogrammed into controller 18 of apparatus 100. Thus, in this illustrative embodiment of the technique of the second aspect of the present invention, the message 28 is 2 bits in length. The locations of the blocks and corresponding image regions 400, 402 in the image 106 are predetermined and preprogrammed into the controller 18 of apparatus 100.
Controller 18 of apparatus 100 assigns to the respective data bit values of the message 28 either plus or minus unity, depending upon whether the respective data bit value undergoing such assignment is logically true or false. Carrier functions 30 in this embodiment of the second aspect of the present invention comprise a plurality of sets of subcarrier functions. Each set of subcarrier functions comprises four respective values; each of these values may be either plus or minus unity, and subject to this constraint, and the further constraint that the values of the subcarrier functions in each set of subcarrier functions sum to zero, the particular values of the subcarrier functions in each such set may be selected randomly or pseudo-randomly. The number of sets of subcarrier functions is equal to the number of the clusters 109 in the regions 400, 402 in which message data is encoded, each of the sets of subcarrier functions is randomly associated with a respective one of the clusters 109 in the regions 400, 402, and each of the respective subcarrier function values in each respective set of subcarrier functions is associated with a respective brightness intensity value of a respective pixel in the respective one of the clusters 109 with which that respective set of subcarrier functions is associated.
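Under these constraints, each set of four subcarrier values is simply a random arrangement of two +1s and two -1s. A sketch of one way such sets might be generated (the random generator, seed, and function name are assumptions):

```python
import numpy as np

def random_zero_sum_subcarriers(num_clusters, rng=None):
    """Generate one set of four subcarrier values per cluster: each value
    is +1 or -1 and each set sums to zero (two +1s and two -1s),
    arranged randomly."""
    rng = rng if rng is not None else np.random.default_rng()
    base = np.array([+1, +1, -1, -1])
    return np.array([rng.permutation(base) for _ in range(num_clusters)])

sets = random_zero_sum_subcarriers(3, np.random.default_rng(0))
print(sets)              # three sets of four +/-1 values
print(sets.sum(axis=1))  # [0 0 0] -- every set sums to zero
```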
In order to encode the message 28 in the image 106, the controller 18 of apparatus 100 multiplies each of the respective assigned values of the message data bits by each of the respective values of the subcarrier functions that is associated with a respective brightness intensity value of a respective pixel in a respective image region associated with that assigned message bit value. The stego-image 106 is then generated by controller 18, by adding the respective resulting products to the respective brightness intensity values of the respective pixels with which the respective subcarrier function values that were used to generate the respective products are associated. Alternatively, prior to adding the resulting products to the respective pixel intensity values of the cover image, the products may first be multiplied by an empirically-determined scaling or gain factor to improve the signal-to-noise ratio of the encoded data.
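For a single cluster 109 and a single message bit, the encoding step just described might look like the following sketch; the gain value and the flattened four-element cluster layout are assumptions.

```python
import numpy as np

def encode_bit_in_cluster(cluster, bit, subcarriers, gain=1.0):
    """Add (+/-1 message bit) * subcarrier value * gain to each of the four
    identical pixels of one upsampled-image cluster 109."""
    m = 1.0 if bit else -1.0
    return cluster.astype(float) + gain * m * subcarriers

cluster = np.full(4, 150.0)                        # four identical upsampled pixels
subcarriers = np.array([+1.0, -1.0, -1.0, +1.0])   # a zero-sum set
print(encode_bit_in_cluster(cluster, True, subcarriers, gain=2.0))
# [152. 148. 148. 152.]
```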
Further alternatively, the carrier functions 30 may be modified such that each set of subcarrier functions contains only three subcarrier values and these values do not sum to zero. In this alternative, one respective pixel (collectively referred to in Figure 3 by numeral 111) in each of the clusters 109 used to encode message data is not modulated with any of the carrier functions 30. Instead, the one respective pixel in each of the clusters 109 used to encode message data remains unchanged, in all respects, including brightness intensity value, from its condition in image 104. Various conventional schemes may be used to encode the message data into the remaining pixels of the image 106 (i.e., other than pixels 111). Advantageously, in this alternative, the respective brightness intensity values of these respective pixels 111 may be subtracted from the brightness intensity values of the other pixels in each of the clusters 109 comprising the respective pixels 111 prior to attempting to decode the message data from the image 106, in order to achieve substantial immunity to the effects of noise that may exist in the upsampled cover image 104 in decoding the message 28.
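Under the three-subcarrier alternative just described, the unmodulated pixel 111 can serve as a per-cluster reference that is subtracted out before decoding. A sketch under those assumptions (which pixel is reserved, the subcarrier values, the gain, and the function names are all illustrative):

```python
import numpy as np

def encode_with_reference(cluster, bit, subcarriers, gain=1.0):
    """Leave cluster[0] untouched (the reference pixel 111) and modulate the
    remaining three pixels with the three subcarrier values."""
    m = 1.0 if bit else -1.0
    out = cluster.astype(float).copy()
    out[1:] += gain * m * subcarriers
    return out

def decode_with_reference(stego_cluster, subcarriers):
    """Subtract the reference pixel from the other three pixels (cancelling
    the common cover value), then take the sign of the correlation with
    the subcarrier values."""
    diff = stego_cluster[1:] - stego_cluster[0]
    return float(np.dot(diff, subcarriers)) >= 0.0

sub = np.array([+1.0, -1.0, +1.0])   # three values; need not sum to zero
c = np.full(4, 90.0)
print(decode_with_reference(encode_with_reference(c, True, sub, 2.0), sub))    # True
print(decode_with_reference(encode_with_reference(c, False, sub, 2.0), sub))   # False
```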
The manner in which the stego-image 106 is processed by the apparatus 200 will now be described. Controller 18 may be preprogrammed with information such as the respective carrier function values, the associations of these values with the respective pixels whose respective brightness intensity values were modulated therewith to encode the message 28 in image 106, the manner in which the respective encoded bits of the message 28 were assigned to the respective image regions 400, 402, and the locations and sizes of the clusters 109 and regions 400, 402. After controller 18 in apparatus 200 receives the stego-image 106 from device 14, the controller 18 decodes the respective message bits from each respective image region 400, 402 by multiplying each of the respective pixel brightness intensity values in each respective image region by the respective value of the respective associated carrier function (i.e., the respective carrier function value that was used to generate the respective pixel intensity value in the stego-image 106) to produce a series of products; these products are then summed in each respective region 400, 402 to produce a respective summation value for the respective image region 400, 402. If the respective summation value is negative, the respective message bit value encoded in the respective image region is decoded as a "false" data bit, and vice versa. The controller 18 also utilizes the aforesaid information in conventional techniques to generate from the image 106 the image 104. The controller 18 of apparatus 200 then appropriately downsamples the image 104 to generate therefrom the image 16.
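The per-region multiply-and-sum test and the final downsampling step might be sketched as follows; this is an illustration under the assumption of factor-of-two upsampling, not the patent's exact decoder, and a real decoder would iterate over all preprogrammed regions.

```python
import numpy as np

def decode_region_bit(region_pixels, region_subcarriers):
    """Multiply every pixel intensity in a region 400/402 by its associated
    subcarrier value and sum; a negative sum decodes as a false bit,
    otherwise true."""
    return float(np.sum(region_pixels * region_subcarriers)) >= 0.0

def downsample_by_two(image):
    """Invert factor-of-two upsampling by keeping one pixel per 2x2 cluster
    (adequate once the modulation has been removed, or when the kept pixel
    is an unmodulated reference pixel 111)."""
    return image[::2, ::2]

stego_region = np.array([152.0, 148.0, 148.0, 152.0])
subs = np.array([+1.0, -1.0, -1.0, +1.0])
print(decode_region_bit(stego_region, subs))   # True

up = np.repeat(np.repeat(np.array([[100.0, 150.0]]), 2, axis=0), 2, axis=1)
print(downsample_by_two(up))                   # [[100. 150.]]
```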
Although the present invention has been described in connection with specific embodiments and methods of use, it will be appreciated by those skilled in the art that many alternatives, variations and modifications thereof are possible without departing from the present invention. For example, the image misregistration correction techniques disclosed in the aforesaid copending U.S. Provisional Application Serial No. 60/139,758 may be used in conjunction with the techniques of the present invention. Other modifications are also possible.
For example, although the image 16 has been described as being generated by an imaging device 14 from a physical cover image 12 in apparatus 10, 100, if appropriately modified, the apparatus 10, 100 may generate the image 16 using a computer-executed application program. Also alternatively, if appropriately modified, the apparatus 10, 100 may be configured to upload the images 22, 106 to a local or remote server (e.g., a world wide web internet server) for access by others via a computer network (e.g., the internet). Yet other modifications are also possible. For example, the pixels and/or image regions in the image 22 of Figure 5 may be mutually interleaved among each other and need not be clustered among each other in their respective sets 300, 302. Additional modifications are also possible. Accordingly, the present invention should be viewed quite broadly, and as being defined only as set forth in the hereinafter appended claims.
Claims

What is claimed is:

1. Method for use in encoding data in an image, comprising: encoding in a first portion of the image a first function to be used in encoding the data; and encoding, based upon the first function, in a second portion of the image the data.
2. Method according to claim 1, wherein the first function describes a bitwise modulation to be applied to the data.
3. Method according to claim 1, wherein the first and second portions each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
4. Method according to claim 1, wherein the encoding of the data in the image is such that the data may be decoded from the image based at least in part upon respective correlations and anti-correlations between pixel regions in the first and second portions.
5. Method for use in encoding data in a first image, comprising: upsampling the first image in at least one dimension of the first image whereby to generate an upsampled image of higher resolution than the first image, the upsampled image including a plurality of respective groups of respectively identical pixels in the direction of the at least one dimension; and encoding the data in the upsampled image.
6. Method according to claim 5, wherein at least one respective pixel in each of the groups of respectively identical pixels is unchanged after the data has been encoded in the upsampled image.
7. Method according to claim 5, wherein the encoding of the data in the upsampled image is based at least in part upon a bitwise modulation of the data.
8. Method according to claim 5, wherein the respective identical pixels in each said respective group are changed as a result of the encoding of the data in the upsampled image such that, after the encoding, respective summations of respective intensity values of the respective identical pixels in each said respective group are equal to respective intensity values of respective corresponding pixels in the first image.
9. Method for use in decoding data encoded in a first portion of an image, comprising: decoding the data from the first portion based at least in part upon respective correlations and anti-correlations between corresponding regions in the first portion and a second portion of the image, a function being encoded in the second portion, the function having been used to encode the data in the first portion.
10. Method for use in decoding data encoded in a first image, comprising: determining from first groups of pixels in the first image respective bits of the data encoded in the first image, the first image having been generated from a second image generated by upsampling a third image in at least one dimension such that the second image has a higher resolution than the third image and includes second groups of respectively identical pixels in the direction of the at least one dimension corresponding to the first groups of pixels.
11. Method according to claim 10, wherein the determining of the respective bits is based at least in part upon a subtraction of a respective intensity value of a respective predetermined pixel in each of the first groups of pixels from respective intensity values of the other respective pixels in each of the first groups of pixels, the respective intensity value of the respective predetermined pixel in each of the first groups of pixels being unchanged from a respective intensity value of a respective corresponding pixel in each of the second groups of respective identical pixels.
12. Method according to claim 9, wherein the first function describes a bitwise modulation applied to the data.
13. Method according to claim 9, wherein the first and second portions each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
14. Method according to claim 10, wherein the encoding of the data in the first image is based at least in part upon a bitwise modulation of the data.
15. Method according to claim 10, wherein the respective identical pixels in each of said second groups are changed as a result of the encoding of the data to produce the first image such that, after the encoding, respective summations of respective intensity values of the respective identical pixels in each of the first groups are equal to respective intensity values of respective corresponding pixels in the second image.
16. Apparatus for use in encoding data in an image, comprising: an encoder that encodes in a first portion of the image a first function to be used in encoding the data, the encoder also encoding, based upon the first function, in a second portion of the image the data.
17. Apparatus according to claim 16, wherein the first function describes a bitwise modulation to be applied to the data.
18. Apparatus according to claim 16, wherein the first and second portions each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
19. Apparatus according to claim 16, wherein the encoding of the data in the image is such that the data may be decoded from the image based at least in part upon respective correlations and anti-correlations between pixel regions in the first and second portions.
20. Apparatus for use in encoding data in a first image, comprising: an upsampler that upsamples the first image in at least one dimension of the first image whereby to generate an upsampled image of higher resolution than the first image, the upsampled image including a plurality of respective groups of respectively identical pixels in the direction of the at least one dimension; and an encoder that encodes the data in the upsampled image.
21. Apparatus according to claim 20, wherein at least one respective pixel in each of the groups of respectively identical pixels is unchanged after the data has been encoded in the upsampled image.
22. Apparatus according to claim 20, wherein the encoding of the data in the upsampled image is based at least in part upon a bitwise modulation of the data.
23. Apparatus according to claim 20, wherein the respective identical pixels in each said respective group are changed as a result of the encoding of the data in the image such that, after the encoding, respective summations of respective intensity values of the respective identical pixels in each said respective group are equal to respective intensity values of respective corresponding pixels in the first image.
24. Apparatus for use in decoding data encoded in a first portion of an image, comprising: a decoder that decodes the data from the first portion based at least in part upon respective correlations and anti-correlations between corresponding regions in the first portion and a second portion of the image, a function being encoded in the second portion, the function having been used to encode the data in the first portion.
25. Apparatus for use in decoding data encoded in a first image, comprising: a decoder that determines from first groups of pixels in the first image respective bits of the data encoded in the first image, the first image having been generated from a second image generated by upsampling a third image in at least one dimension such that the second image has a higher resolution than the third image and includes second groups of respectively identical pixels in the direction of the at least one dimension corresponding to the first groups of pixels.
26. Apparatus according to claim 25, wherein the decoder determines the respective bits based at least in part upon a subtraction of a respective intensity value of a respective predetermined pixel in each of the first groups of pixels from respective intensity values of the other respective pixels in each of the first groups of pixels, the respective intensity value of the respective predetermined pixel in each of the first groups of pixels being unchanged from a respective intensity value of a respective corresponding pixel in each of the second groups of respective identical pixels.
27. Apparatus according to claim 24, wherein the first function describes a bitwise modulation applied to the data.
28. Apparatus according to claim 24, wherein the first and second portions each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
29. Apparatus according to claim 25, wherein the encoding of the data in the first image is based at least in part upon a bitwise modulation of the data.
30. Apparatus according to claim 25, wherein the respective identical pixels in each of said second groups are changed as a result of the encoding of the data to produce the first image such that, after the encoding, respective summations of respective intensity values of the respective identical pixels in each of the first groups are equal to respective intensity values of respective corresponding pixels in the second image.
31. Computer-readable memory comprising computer program instructions for use in encoding data in an image, that when executed cause: encoding in a first portion of the image a first function to be used in encoding the data; and encoding, based upon the first function, in a second portion of the image the data.
32. Memory according to claim 31, wherein the first function describes a bitwise modulation to be applied to the data.
33. Memory according to claim 31, wherein the first and second portions each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
34. Memory according to claim 31, wherein the encoding of the data in the image is such that the data may be decoded from the image based at least in part upon respective correlations and anti-correlations between pixel regions in the first and second portions.
35. Computer-readable memory comprising computer program instructions for use in encoding data in a first image, and that when executed cause: upsampling the first image in at least one dimension of the first image whereby to generate an upsampled image of higher resolution than the first image, the upsampled image including a plurality of respective groups of respectively identical pixels in the direction of the at least one dimension; and encoding the data in the upsampled image.
36. Memory according to claim 35, wherein at least one respective pixel in each of the groups of respectively identical pixels is unchanged after the data has been encoded in the upsampled image.
37. Memory according to claim 35, wherein the encoding of the data in the upsampled image is based at least in part upon a bitwise modulation of the data.
38. Memory according to claim 35, wherein the respective identical pixels in each said respective group are changed as a result of the encoding of the data in the image such that, after the encoding, respective summations of respective intensity values of the respective identical pixels in each said respective group are equal to respective intensity values of respective corresponding pixels in the first image.
39. Computer-readable memory comprising computer program instructions for use in decoding data encoded in a first portion of an image and that when executed cause: decoding the data from the first portion based at least in part upon respective correlations and anti-correlations between corresponding regions in the first portion and a second portion of the image, a function being encoded in the second portion, the function having been used to encode the data in the first portion.
40. Computer-readable memory comprising computer program instructions for use in decoding data encoded in a first image and that when executed cause: determining from first groups of pixels in the first image respective bits of the data encoded in the first image, the first image having been generated from a second image generated by upsampling a third image in at least one dimension such that the second image has a higher resolution than the third image and includes second groups of respectively identical pixels in the direction of the at least one dimension corresponding to the first groups of pixels.
41. Memory according to claim 40, wherein the determining of the respective bits is based at least in part upon a subtraction of a respective intensity value of a respective predetermined pixel in each of the first groups of pixels from respective intensity values of the other respective pixels in each of the first groups of pixels, the respective intensity value of the respective predetermined pixel in each of the first groups of pixels being unchanged from a respective intensity value of a respective corresponding pixel in each of the second groups of respective identical pixels.
42. Memory according to claim 39, wherein the first function describes a bitwise modulation applied to the data.
43. Memory according to claim 39, wherein the first and second portions each comprise respective arbitrarily-selected disjoint sets of pixels in the image.
44. Memory according to claim 40, wherein the encoding of the data in the first image is based at least in part upon a bitwise modulation of the data.
45. Memory according to claim 40, wherein the respective identical pixels in each of said second groups are changed as a result of the encoding of the data to produce the first image such that, after the encoding, respective summations of respective intensity values of the respective identical pixels in each of the first groups are equal to respective intensity values of respective corresponding pixels in the second image.
PCT/US2000/016099 1999-06-15 2000-06-13 Data encoding and decoding WO2000078032A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP00939795A EP1203348A2 (en) 1999-06-15 2000-06-13 Data encoding and decoding
AU54824/00A AU5482400A (en) 1999-06-15 2000-06-13 Data encoding and decoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13975899P 1999-06-15 1999-06-15
US60/139,758 1999-06-15

Publications (2)

Publication Number Publication Date
WO2000078032A2 true WO2000078032A2 (en) 2000-12-21
WO2000078032A3 WO2000078032A3 (en) 2001-05-03

Family

ID=22488157

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/016099 WO2000078032A2 (en) 1999-06-15 2000-06-13 Data encoding and decoding

Country Status (3)

Country Link
EP (1) EP1203348A2 (en)
AU (1) AU5482400A (en)
WO (1) WO2000078032A2 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0845758A2 (en) * 1996-11-28 1998-06-03 International Business Machines Corporation Embedding authentication information into an image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HARTUNG AND GIROD: "Digital watermarking of raw and compressed video", Proceedings of the SPIE, SPIE, Bellingham, VA, US, vol. 2952, 7 October 1996 (1996-10-07), pages 205-213, XP002085796 *
See also references of EP1203348A2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6820201B1 (en) 2000-08-04 2004-11-16 Sri International System and method using information-based indicia for securing and authenticating transactions
US7117363B2 (en) 2000-08-04 2006-10-03 Sri International System and method using information-based indicia for securing and authenticating transactions
US8171297B2 (en) 2000-08-04 2012-05-01 Sint Holdings Limited Liability Company System and method using information based indicia for securing and authenticating transactions
US8255694B2 (en) 2000-08-04 2012-08-28 Sint Holdings Limited Liability Company System and method using information based indicia for securing and authenticating transactions

Also Published As

Publication number Publication date
EP1203348A2 (en) 2002-05-08
WO2000078032A3 (en) 2001-05-03
AU5482400A (en) 2001-01-02

Similar Documents

Publication Publication Date Title
Haghighi et al. TRLH: Fragile and blind dual watermarking for image tamper detection and self-recovery based on lifting wavelet transform and halftoning technique
Fridrich et al. Digital image steganography using stochastic modulation
US7187781B2 (en) Information processing device and method for processing picture data and digital watermark information
Kutter et al. Digital watermarking of color images using amplitude modulation
Lin et al. A hierarchical digital watermarking method for image tamper detection and recovery
Zain et al. Medical image watermarking with tamper detection and recovery
US20030210803A1 (en) Image processing apparatus and method
JPH10208026A (en) Picture verification device
WO2005124681A1 (en) Systems and methods for digital content security
US7523311B1 (en) Method for embedding electronic watermark, decoding method, and devices for the same
US20040017925A1 (en) System and method for image tamper detection via thumbnail hiding
Pal et al. Weighted matrix based reversible watermarking scheme using color image
Kumar et al. A reversible high capacity data hiding scheme using combinatorial strategy
JP2004007442A (en) Information processing method and device, computer program and computer readable storage medium
US7006254B2 (en) Method and system for data hiding and authentication via halftoning and coordinate projection
CN113179407B (en) Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation
Luo et al. Self embedding watermarking using halftoning technique
US6804373B1 (en) Method and system using renormalized pixels for public key and compressed images watermarks on prints
EP1634238A1 (en) Watermarking
EP1203348A2 (en) Data encoding and decoding
Singh et al. A Novel Method of high-Capacity Steganography Technique in Double Precision Images
Domingo-Ferrer et al. Invertible spread-spectrum watermarking for image authentication and multilevel access to precision-critical watermarked images
CN113392381A (en) Watermark generation method, watermark decoding method, storage medium, and electronic device
Gupta et al. A survey on reversible watermarking techniques for image security
Genov Digital watermarking of bitmap images

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 54824/00

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2000939795

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWE Wipo information: entry into national phase

Ref document number: 10018416

Country of ref document: US

WWP Wipo information: published in national office

Ref document number: 2000939795

Country of ref document: EP

NENP Non-entry into the national phase in:

Ref country code: JP

WWW Wipo information: withdrawn in national office

Ref document number: 2000939795

Country of ref document: EP