WO2005011300A2 - Data overlay for a digital camera module - Google Patents

Data overlay for a digital camera module

Info

Publication number
WO2005011300A2
WO2005011300A2 PCT/US2004/022497 US2004022497W
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera module
information
readable information
embedding
Prior art date
Application number
PCT/US2004/022497
Other languages
French (fr)
Other versions
WO2005011300A3 (en)
Inventor
Amir Azizi
Original Assignee
Transchip, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Transchip, Inc. filed Critical Transchip, Inc.
Publication of WO2005011300A2 publication Critical patent/WO2005011300A2/en
Publication of WO2005011300A3 publication Critical patent/WO2005011300A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/7921Processing of colour television signals in connection with recording for more than one processing mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal

Definitions

  • Camera modules perform many of the functions today that previously required much larger circuitry. Some camera modules include an imaging sensor, analog processing circuitry, digital processing circuitry, etc. Digital video and still images are produced by these modules with little oversight from the device the camera module is embedded into. Video and still images use various standards such as ITU-656/601, MPEG and JPEG. [04] Camera modules are becoming ever smaller and more integrated. Devices such as baby monitors, cellular phones, surveillance devices, etc. are benefiting from these improvements. For example, an inexpensive cellular phone can have an integrated digital still and video camera with little impact on price or battery life.
  • Camera phones have the ability to send images and video over the cellular network. As camera modules become smaller and are used mainly in portable devices, debugging their functionality becomes more difficult. [05] Diagnosing problems with a camera module can be difficult. Often the device that integrates the camera module has limited ability to provide any status information about the camera module. Further, during debug and test of camera modules and their carrier devices, it can be difficult to get status from the camera module. The interfaces to the camera module are often not available outside the carrier device.
  • the present disclosure provides a method for embedding machine-readable information on an image from a camera module.
  • a first image is acquired with an imaging array integral to the camera module.
  • Information is gathered with the camera module and embedded in the first image to produce a second image.
  • the information could be status information gathered within the camera module.
  • the information can be machine read from the second image without human interaction.
  • the second image is sent away from the camera module whereby the image path can serve as a data channel for the gathered information.
  • FIGS. 1A, 1B, 1C and 1D are block diagrams of embodiments of a camera module
  • FIGS. 2A, 2B, 2C, 2D, and 2E are diagrams of embodiments of a carrier frame with embedded information
  • FIGS. 3A, 3B, 3C, and 3D are diagrams of embodiments of an embedded information portion of a frame
  • FIGS. 4A and 4B are flow diagrams of embodiments of a process for transporting information from a camera module using the image frame(s)
  • FIG. 5 is a flow diagram of an embodiment of a process for transporting information from a camera module using video frames.
  • the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged.
  • a process is terminated when its operations are completed.
  • a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
  • the term “storage medium” may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information.
  • machine readable medium includes, but is not limited to portable or fixed storage devices, optical storage devices, wireless channels and various other mediums capable of storing, containing or carrying instruction(s) and/or data.
  • embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof.
  • the program code or code segments to perform the necessary tasks may be stored in a machine readable medium such as storage medium.
  • a processor may perform the necessary tasks.
  • a code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
  • a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc.
  • FIG. 1A a block diagram of an embodiment of a camera module 100-1 is shown that supports compression.
  • the camera module 100-1 captures images with a pixel array 148.
  • Column amplifiers 144 and a column selector 140 are used to shift out the pixels of the image under the direction of the timing unit 152.
  • An analog processor 136 improves the image while still in the analog domain. Conversion to the digital domain is performed in an analog-to-digital converter (ADC) 132. Once the image is in digital form, digital domain image improvement occurs in the image signal processor 128.
  • the camera module in one embodiment is a slave device to the host device.
  • the host device controls and activates the camera module 100.
  • the frame can be analyzed by the host device, sent directly to any display for the host device and/or read from the camera module 100.
  • the camera module 100 is directly connected to the host device with a digital interface that does not lose any of the detail in the compressed or uncompressed image.
  • the host device may save the compressed or uncompressed, video or still image with the embedded information within into files or may provide video streams.
  • the files or streams may be transferred to another system (e.g., computer, PDA, wireless phone, or other device) using a direct cable connection (e.g., Firewire, USB, etc.) or a wireless channel (cellular, Bluetooth, infrared, WiFi, WiMax, etc.). The system receiving the images can extract the embedded information.
  • An overlay unit 124 receives information from a controller 104 and/or an interface block 108 that can be added to the image.
  • the information is status information of the camera module, but could include information from the host device that embeds the camera module 100-1.
  • the overlay unit 124 adds status or other information to the images and video to use the standard pathway as a data channel.
  • the information could include parameters relating to the image capture (e.g., aperture, shutter speed, ambient light, use of flash, distance to subject, field of view, shaking or movement of camera module, etc.), the camera module (versions of the components of the camera module, serial number, software or firmware version, temperature, power supply quality factors, error rates, pixel array timing, etc.), the device housing the camera module (e.g., version of device, serial number of device, power levels, configuration loaded into the camera module, etc.), etc.
  • the overlay unit 124 has many options on how to embed the information in various embodiments. Some embodiments may be preconfigured to use one method or another, while other embodiments can choose from a number of methods.
  • the controller 104 can indicate to the overlay unit 124 which way to incorporate the information into the image. Some embodiments directly embed the information into the image or in a part of the image outside the frame that is transported with the image, but others embed the information in metadata transported with the still images or video.
  • the algorithms that embed the information in the still images or video can be chosen such that they survive analog conversion and/or compression. Other algorithms would not normally survive these lossy transformations.
  • the overlay unit 124 can select the appropriate algorithms to use. Some embodiments could produce two carrier images where one supports lossy transformations and the other does not, but both include the embedded information. Other embodiments can include the information twice in the carrier image, where one instance would support a lossy transformation and the other might not.
  • the image with the overlaid information is called a carrier image.
  • a frame buffer 120 holds the carrier image.
  • the frame buffer 120 could hold one or more carrier images and could support both still images and video.
  • the interface block 108 can get the carrier image at this stage in the processing to relay an uncompressed image or can couple the image to a compression unit 116.
  • the carrier image is reduced in storage size in the compression unit 116.
  • the compression could be JPEG, TIFF with LZW, GIF, MPEG, or any other image or video compression algorithm.
  • the overlay unit 124 and controller 104 may specify certain parameters that the compression unit 116 uses to avoid excessive loss in image quality that might make the overlaid information unrecoverable. Some compression algorithms allow certain portions of the image to be compressed less severely.
  • the compression algorithm allows metadata to be embedded in the carrier images. The metadata could include the overlaid information such that any compression loss is avoided.
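  • As an illustration of this metadata route, the gathered status bytes could ride in a JPEG COM (comment) segment, which the lossy compression of the pixel data never touches. The sketch below is a hypothetical example and not the implementation disclosed here; it assumes a baseline JPEG stream beginning with the SOI marker.

```python
def insert_jpeg_comment(jpeg: bytes, info: bytes) -> bytes:
    """Insert a COM (0xFFFE) marker segment carrying `info` right after
    the SOI marker of a baseline JPEG stream.

    The two-byte segment length counts itself, so it is len(info) + 2.
    Because the segment sits outside the entropy-coded data, it survives
    any quality setting the encoder used.
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG stream (missing SOI marker)")
    segment = b"\xff\xfe" + (len(info) + 2).to_bytes(2, "big") + info
    return jpeg[:2] + segment + jpeg[2:]
```

A receiver scanning the marker segments can recover the status text without decoding the image at all.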
  • the interface block 108 is the communication port of the camera module 100.
  • a first port 154 provides the still frames and/or video in an uncompressed form which is derived from the frame buffer 120.
  • the compressed still frames and/or video are output from the compressed frame buffer 112 to a second port 156.
  • Status and other information are exchanged on a command/data port 160.
  • the device uses the command/data port to configure the camera module 100 and provide information to embed in the image and/or video.
  • the compressed and/or uncompressed carrier images and video are output in a form where the embedded information is machine readable.
  • Machine readable information is defined herein to exclude that which an average person can read. For example, printed text, although it can be recognized with character recognition, is not within the definition of machine readable. Machines can more readily understand certain patterns that are not readable by the average person without special training. By having the information machine-readable, it can be received without the aid of a person.
  • FIG. 1B a block diagram of another embodiment of the camera module 100-2 is shown.
  • This embodiment does not include a compression unit 116, compressed frame buffer 112 or second port 156. All carrier images are produced in uncompressed format.
  • the overlay unit 124 can still embed information using algorithms that support lossy transformations. Outside the camera module 100, the carrier images may be compressed or converted into analog form.
  • This embodiment includes an optional frame buffer 120 to store an image. Other embodiments forgo using the frame buffer 120 and the host device reads the image out as it is produced to avoid the need for the frame buffer 120.
  • FIG. 1C a block diagram of yet another embodiment of the camera module 100-3 is shown. This embodiment does not support compression or embedded information that originates outside the camera module 100-3. The embedded information is gathered within the camera module 100-3 and passed by the controller 104 to the overlay unit. The device could further embed information after the image leaves the camera module 100-3.
  • FIG. 1D a block diagram of yet another embodiment of the camera module 100-4 is shown. In this embodiment, all images are compressed.
  • the overlay unit 124 uses algorithms that will likely survive the compression and, perhaps, additional lossy transformations.
  • the host device reads the compressed image from the interface block 108.
  • FIG. 2A a diagram of an embodiment of a carrier frame 204 is shown with embedded information 212-1 in the lower-left corner of a carrier image 208.
  • the carrier image 208 is captured by the pixel array 148 before the overlay unit 124 places embedded information 212-1 in the lower-left corner of the carrier image 208. Any portion of the carrier image could have the embedded information 212 overlaid upon it.
  • Some embodiments may have several portions of the carrier image 208 overlaid with embedded information 212.
  • the embedded information 212 may completely obscure the carrier image 208, partially obscure the carrier image 208 or imperceptibly obscure the carrier image 208. Where completely obscured, the underlying carrier image is not visible at all. Some embodiments partially obscure the carrier image 208 by rendering the embedded information 212 partially transparent. Other embodiments modify some of the least significant bits of each pixel with data. For example, the least significant bit could be reserved for data. That data could be encrypted to randomize it such that it appears like noise in the image. To embed information without perceptible change to the carrier image 208, any number of watermarking algorithms could be used.
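  • The least-significant-bit approach described above can be sketched as follows. This is a minimal illustrative example, not the patented implementation; the function names and the flat-luma-array representation are assumptions for the sketch.

```python
def embed_lsb(pixels, payload):
    """Embed payload bytes into the least significant bit of each pixel.

    pixels  -- flat list of 8-bit pixel values (the region reserved for data)
    payload -- bytes to hide; needs 8 pixels per byte
    """
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("region too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set the data bit
    return out

def extract_lsb(pixels, nbytes):
    """Recover nbytes previously embedded with embed_lsb."""
    data = bytearray()
    for b in range(nbytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Because only the lowest bit changes, each pixel moves by at most one gray level, which is why the embedded data can be imperceptible; it would not, however, survive lossy compression.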
  • FIG. 2B a diagram of another embodiment of the carrier frame 204 is shown with embedded information 212-2 in a few rows of the carrier image 208.
  • the last few rows could be extra rows that are transported with the carrier image 208, but often cropped out of the image. For example, these rows could be beyond what is normally found in the viewfinder.
  • the embedded information 212-2 could overlay part of the carrier image 208 or simply be appended to the bottom of the full carrier image 208 in an enlarged carrier frame 204.
  • FIG. 2C a diagram of yet another embodiment of the carrier frame 204 is shown with embedded information 212-3 and a text region 110.
  • the text region 110 includes part or all of the information in a format that can be readily read by a person.
  • FIG. 2D a diagram of still another embodiment of the carrier frame 204 is shown with embedded information 212-4 and the text region 110. This embodiment shows the text region 110 overlaying a different portion of the carrier image 208. A particular carrier image 208 could have any number of text regions 110.
  • FIG. 2E a diagram of one embodiment of the carrier frame 204 is shown with embedded information 212-5, 212-6 in two separate locations within the carrier frame. These two locations may use the same or different algorithms to embed the information. The information stored in the two locations may be completely different, partially different or the same. The position and size of these locations is programmable as well as the information embedded in those locations. Other embodiments may have more than two locations to store embedded information. In some cases, the locations may partially overlap.
  • each pixel can be turned on or off to represent the zeros and ones of binary encoded information. Some embodiments could use a portion of each pixel word such that when the more significant bits are masked off, the underlying data is revealed.
  • FIG. 3B a diagram of another embodiment of embedded information portion 212 of a carrier frame 204 is shown.
  • This embodiment stores one and a half bits in each pixel such that each pixel has three states (i.e., on, off or half-on). Other embodiments could store two, four or eight states per pixel. The reader of the embedded information would digitize each analog pixel level to determine the value.
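  • A multi-level scheme of this kind can be sketched with four states per pixel (two bits). This is a hypothetical illustration under assumed level values; the wide spacing between levels is what gives tolerance to analog noise, as the decoder only needs to find the nearest level.

```python
LEVELS = [0, 85, 170, 255]  # four evenly spaced states -> 2 bits per pixel

def encode_2bpp(payload):
    """Map each byte to four pixels, most significant symbol first."""
    pixels = []
    for byte in payload:
        for shift in (6, 4, 2, 0):           # four 2-bit symbols per byte
            pixels.append(LEVELS[(byte >> shift) & 0b11])
    return pixels

def decode_2bpp(pixels):
    """Quantize each (possibly noisy) pixel back to the nearest level."""
    data = bytearray()
    for i in range(0, len(pixels), 4):
        byte = 0
        for p in pixels[i:i + 4]:
            sym = min(range(4), key=lambda s: abs(LEVELS[s] - p))
            byte = (byte << 2) | sym
        data.append(byte)
    return bytes(data)
```

With levels 85 apart, any distortion under roughly 42 gray levels per pixel still decodes correctly.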
  • FIG. 3C a diagram of yet another embodiment of embedded information portion 212 of a carrier frame 204 is shown. For embodiments that have various lossy transformations, a bar code could appear in the image as the embedded information 212. The bar code is likely to retain its information where other techniques might corrupt the information.
  • each image 208 could have a different bar code such that the information is divided into portions that are spread over a number of frames 204.
  • An embodiment using an algorithm that would survive lossy transformations may also include another portion 212 on each image that has the information also, but uses an algorithm that might not survive a transformation.
  • FIG. 3D a diagram of still another embodiment of embedded information portion 212 of a carrier frame 204 is shown.
  • This embodiment uses a two-dimensional bar code that could replace a portion 212 of the carrier image 208.
  • the bar codes could be of any size to support transfer of the information.
  • the whole image 208 is replaced by an overlay portion 212 such that only data is in the frame 204.
  • FIG. 4A a flow diagram of an embodiment of a process 400-1 for transporting information from the camera module 100 using the carrier image(s) 208 is shown.
  • the depicted portion of the process 400-1 begins in step 404 where an image is captured and processed to produce a digital image at the output of the image signal processor 128.
  • the controller 104 or the device gathers status or other information to transport through the image channel in step 408.
  • the controller 104 instructs the overlay unit 124 which portion or portions 212 will be overlaid with embedded information and the algorithm or algorithms to use for the one or more portions 212 in step 412.
  • the overlay algorithms are generally divided into those that survive lossy transformations and those that do not.
  • the algorithm may rotate with the frames such that one of the algorithms is likely to survive a particular combination of lossy transformations.
  • In step 416, the current frame is obtained.
  • the frame may be obtained and processed by the overlay unit 124 in a serial fashion.
  • In step 420, a determination is made whether to compress the carrier image. Where there will be compression or another lossy transformation, this embodiment goes to step 428 and chooses one of the algorithms specified for a lossy channel. If no compression in the camera module or other transformations in the channel are anticipated, processing continues to step 424 where an algorithm is used that is less likely to pass through a lossy channel or compression.
  • Some embodiments may use both types of algorithms and embed two or more portions 212 into the image 208.
  • one image 208 could have four carrier portions 212 that use four different overlay algorithms to encode the same information.
  • the image is modified to create the carrier image 208.
  • the carrier image 208 is stored in step 434 after any compression is performed. This storage and compression may be done in serial fashion as the image is produced from the overlay unit 124.
  • the modified image is output from the camera module in step 436 as a still image and/or video image in compressed and/or uncompressed form.
  • a device is able to read the embedded portions 212 in step 440 to reformulate the information. Where the information is redundantly present in the image(s), algorithms may be performed to take advantage of this redundancy to lower error rates.
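  • One simple way to exploit that redundancy is a byte-wise majority vote across the decoded copies of the same information. The disclosure does not prescribe a particular algorithm; the sketch below is one hypothetical approach.

```python
from collections import Counter

def majority_vote(copies):
    """Combine several possibly corrupted decoded copies of the same
    payload, keeping the most common byte value at each offset.

    copies -- list of bytes objects recovered from the redundant
              embedded portions (e.g., the four portions 212 above)
    """
    length = min(len(c) for c in copies)
    return bytes(
        Counter(c[i] for c in copies).most_common(1)[0][0]
        for i in range(length)
    )
```

So long as each byte position is corrupted in fewer than half of the copies, the original information is recovered exactly.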
  • FIG. 4B a flow diagram of another embodiment of a process 400-2 for transporting information from the camera module 100 using the carrier frame(s) 204.
  • This embodiment produces a compressed carrier image 208 and an uncompressed carrier image 208 for each image.
  • Where an overlay algorithm for a non-lossy channel is used in steps 424 and 432, the carrier image 208 does not have compression performed in step 430 before storing the carrier image 208, as would logically follow.
  • a flow diagram of an embodiment of a process 500 for transporting information from the camera module 100 using video images 208 is shown.
  • the information is spread over a number of images 208 that form a video.
  • the depicted portion of the process begins in step 408 where the information is gathered.
  • the information frame is split into portions in step 504 that are to be spread over a number of images. Placement and encoding of the embedded information is determined in step 412.
  • the image is captured in step 404 and obtained by the overlay unit in step 416.
  • the information portion is overlaid in step 532.
  • the carrier image 208 is optionally compressed and stored.
  • the image is output from the camera module 100 as part of a video in step 436.
  • In step 540, the receiving end of the channel extracts the information portion to reformulate the portion. If there are more portions to send as determined in step 508, processing loops back to step 408 until all portions are sent. Where all have been sent, the receiving end, in step 512, reassembles the portions to determine the information.
  • While this embodiment puts portions in images of a video, other embodiments could put portions in still images. The portions may stand alone or require other images to reformulate them. For example, a first image may include status information on the camera module, but a second frame includes status information on the device. These two types of information could be separately used by the receiving end of the channel.
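  • The split-and-reassemble steps 504 and 512 can be sketched as follows. This is a hypothetical illustration; the two-byte (index, total) header is an assumption of the sketch, not part of the disclosure, and it lets the receiver reorder and detect missing portions.

```python
def split_portions(info, chunk_size):
    """Step 504 sketch: split the information frame into numbered chunks
    to be spread over successive video frames."""
    chunks = [info[i:i + chunk_size] for i in range(0, len(info), chunk_size)]
    total = len(chunks)
    # prefix each chunk with (index, total) so the receiver can reorder
    return [bytes([idx, total]) + c for idx, c in enumerate(chunks)]

def reassemble(portions):
    """Step 512 sketch: sort received portions by index and rejoin them."""
    ordered = sorted(portions, key=lambda p: p[0])
    if len(ordered) != ordered[0][1]:
        raise ValueError("missing portions")
    return b"".join(p[2:] for p in ordered)
```

Even if the frames arrive out of order, the headers let the receiving end rebuild the original information frame.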

Abstract

According to the invention, a method for embedding machine-readable information on an image from a camera module is disclosed. In one step, a first image is acquired with an imaging array integral to the camera module. Information is gathered with the camera module and embedded in the first image to produce a second image. The information could be status information gathered within the camera module. The information can be machine read from the second image without human interaction. The second image is sent away from the camera module whereby the image path can serve as a data channel for the gathered information.

Description

DATA OVERLAY FOR A DIGITAL CAMERA MODULE
[01] This application claims the benefit of and is a non-provisional of US Application Serial No. 60/487,758 filed on July 15, 2003, which is incorporated by reference in its entirety. BACKGROUND OF THE DISCLOSURE
[02] This disclosure relates in general to digital camera devices and, more specifically, but not by way of limitation, to data transport using a digital camera device. [03] Camera modules perform many of the functions today that previously required much larger circuitry. Some camera modules include an imaging sensor, analog processing circuitry, digital processing circuitry, etc. Digital video and still images are produced by these modules with little oversight from the device the camera module is embedded into. Video and still images use various standards such as ITU-656/601, MPEG and JPEG. [04] Camera modules are becoming ever smaller and more integrated. Devices such as baby monitors, cellular phones, surveillance devices, etc. are benefiting from these improvements. For example, an inexpensive cellular phone can have an integrated digital still and video camera with little impact on price or battery life. Camera phones have the ability to send images and video over the cellular network. As camera modules become smaller and are used mainly in portable devices, debugging their functionality becomes more difficult. [05] Diagnosing problems with a camera module can be difficult. Often the device that integrates the camera module has limited ability to provide any status information about the camera module. Further, during debug and test of camera modules and their carrier devices, it can be difficult to get status from the camera module. The interfaces to the camera module are often not available outside the carrier device.
BRIEF SUMMARY OF THE DISCLOSURE [06] In one embodiment, the present disclosure provides a method for embedding machine-readable information on an image from a camera module. In one step, a first image is acquired with an imaging array integral to the camera module. Information is gathered with the camera module and embedded in the first image to produce a second image. The information could be status information gathered within the camera module. The information can be machine read from the second image without human interaction. The second image is sent away from the camera module whereby the image path can serve as a data channel for the gathered information.
BRIEF DESCRIPTION OF THE DRAWINGS [07] The present disclosure is described in conjunction with the appended figures: FIGS. 1A, 1B, 1C and 1D are block diagrams of embodiments of a camera module; FIGS. 2A, 2B, 2C, 2D, and 2E are diagrams of embodiments of a carrier frame with embedded information; FIGS. 3A, 3B, 3C, and 3D are diagrams of embodiments of embedded information portion of a frame; FIGS. 4A and 4B are flow diagrams of embodiments of a process for transporting information from a camera module using the image frame(s); and FIG. 5 is a flow diagram of an embodiment of a process for transporting information from a camera module using video frames. [08] In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [09] The ensuing description provides preferred exemplary embodiment(s) only, and is not intended to limit the scope, applicability or configuration of the invention. Rather, the ensuing description of the preferred exemplary embodiment(s) will provide those skilled in the art with an enabling description for implementing a preferred exemplary embodiment of the invention. It being understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth in the appended claims. [10] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, structures and techniques may be shown without unnecessary detail in order not to obscure the embodiments.
[11] Also, it is noted that the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
[12] Moreover, as disclosed herein, the term "storage medium" may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information. The term "machine readable medium" includes, but is not limited to portable or fixed storage devices, optical storage devices, wireless channels and various other mediums capable of storing, containing or carrying instruction(s) and/or data.
[13] Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine readable medium such as storage medium. A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. [14] Referring first to FIG. 1A, a block diagram of an embodiment of a camera module 100-1 is shown that supports compression. The camera module 100-1 captures images with a pixel array 148. Column amplifiers 144 and a column selector 140 are used to shift out the pixels of the image under the direction of the timing unit 152. An analog processor 136 improves the image while still in the analog domain. Conversion to the digital domain is performed in an analog-to-digital converter (ADC) 132. Once the image is in digital form, digital domain image improvement occurs in the image signal processor 128. [15] The camera module in one embodiment is a slave device to the host device. The host device controls and activates the camera module 100. The frame can be analyzed by the host device, sent directly to any display for the host device and/or read from the camera module 100.
The camera module 100 is directly connected to the host device with a digital interface that does not lose any of the detail in the compressed or uncompressed image. The host device may save the compressed or uncompressed video or still image, with the embedded information within, into files or may provide video streams. The files or streams may be transferred to another system (e.g., computer, PDA, wireless phone, or other device) using a direct cable connection (e.g., Firewire, USB, etc.) or a wireless channel (cellular, Bluetooth, infrared, WiFi, WiMax, etc.). The system receiving the images can extract the embedded information.

[16] An overlay unit 124 receives information from a controller 104 and/or an interface block 108 that can be added to the image. Generally, the information is status information of the camera module, but it could include information from the host device in which the camera module 100-1 is embedded. Typically, there are standard pathways that receive the images and video from the device. The overlay unit 124 adds status or other information to the images and video to use the standard pathway as a data channel. The information could include parameters relating to the image capture (e.g., aperture, shutter speed, ambient light, use of flash, distance to subject, field of view, shaking or movement of the camera module, etc.), the camera module (e.g., versions of the components of the camera module, serial number, software or firmware version, temperature, power supply quality factors, error rates, pixel array timing, etc.), the device housing the camera module (e.g., version of the device, serial number of the device, power levels, configuration loaded into the camera module, etc.), etc.
Other debugging and characterization information from the controller 104 could be included in the embedded information, such as software variables and buffer contents of the camera module 100 operating under various conditions.

[17] The overlay unit 124 has many options for how to embed the information in various embodiments. Some embodiments may be preconfigured to use one method or another, while other embodiments can choose from a number of methods. The controller 104 can indicate to the overlay unit 124 which way to incorporate the information into the image. Some embodiments directly embed the information into the image or in a part of the image outside the frame that is transported with the image, but others embed the information in metadata transported with the still images or video.
[18] The algorithms that embed the information in the still images or video can be chosen such that they survive analog conversion and/or compression. Other algorithms would not normally survive these lossy transformations. The overlay unit 124 can select the appropriate algorithms to use. Some embodiments could produce two carrier images, where one supports lossy transformations and the other does not, but both include the embedded information. Other embodiments can include the information twice in the carrier image, where one instance would support a lossy transformation and the other might not.

[19] The image with the overlaid information is called a carrier image. A frame buffer 120 holds the carrier image. The frame buffer 120 could hold one or more carrier images and could support both still images and video. The interface block 108 can get the carrier image at this stage in the processing to relay an uncompressed image or can couple the image to a compression unit 116.

[20] The carrier image is reduced in storage size in the compression unit 116. In various embodiments, the compression could be JPEG, TIFF with LZW, GIF, MPEG, or any other image or video compression algorithm. The overlay unit 124 and controller 104 may specify certain parameters that the compression unit 116 uses to avoid excessive loss in image quality that might make the overlaid information unrecoverable. Some compression algorithms allow certain portions of the image to be compressed less severely. In one embodiment, the compression algorithm allows metadata to be embedded in the carrier images. The metadata could include the overlaid information such that any compression loss is avoided. After compression, the compressed carrier image is stored in a compressed frame buffer.

[21] The interface block 108 is the communication port of the camera module 100. A first port 154 provides the still frames and/or video in an uncompressed form, which is derived from the frame buffer 120.
The compressed still frames and/or video are output from the compressed frame buffer 112 to a second port 156. Status and other information are exchanged on a command/data port 160. The host device uses the command/data port to configure the camera module 100 and provide information to embed in the image and/or video. The compressed and/or uncompressed carrier images and video are output in a form where the embedded information is machine readable.
[22] Machine readable information is defined herein to exclude that which an average person can read. For example, printed text, although it can be recognized with character recognition, is not within the definition of machine readable. Machines can more readily understand certain patterns that are not readable by the average person without special training. By having the information machine readable, it can be received without the aid of a person.
[23] With reference to FIG. 1B, a block diagram of another embodiment of the camera module 100-2 is shown. This embodiment does not include a compression unit 116, compressed frame buffer 112 or second port 156. All carrier images are produced in uncompressed format. The overlay unit 124 can still embed information using algorithms that support lossy transformations. Outside the camera module 100, the carrier images may be compressed or converted into analog form. This embodiment includes an optional frame buffer 120 to store an image. Other embodiments forgo using the frame buffer 120, and the host device reads the image out as it is produced to avoid the need for the frame buffer 120.

[24] Referring next to FIG. 1C, a block diagram of yet another embodiment of the camera module 100-3 is shown. This embodiment does not support compression or embedded information that originates outside the camera module 100-3. The embedded information is gathered within the camera module 100-3 and passed by the controller 104 to the overlay unit 124. The device could further embed information after the image leaves the camera module 100-3.
[25] Referring next to FIG. 1D, a block diagram of yet another embodiment of the camera module 100-4 is shown. In this embodiment, all images are compressed. The overlay unit 124 uses algorithms that will likely survive the compression and, perhaps, additional lossy transformations. The host device reads the compressed image from the interface block 108.

[26] With reference to FIG. 2A, a diagram of an embodiment of a carrier frame 204 is shown with embedded information 212-1 in the lower-left corner of a carrier image 208. The carrier image 208 is captured by the pixel array 148 before the overlay unit 124 places the embedded information 212-1 in the lower-left corner of the carrier image 208. Any portion of the carrier image could have the embedded information 212 overlaid upon it. Some embodiments may have several portions of the carrier image 208 overlaid with embedded information 212.

[27] The embedded information 212 may completely obscure the carrier image 208, partially obscure the carrier image 208 or imperceptibly obscure the carrier image 208. Where completely obscured, the underlying carrier image is not visible at all. Some embodiments partially obscure the carrier image 208 by making the embedded information 212 transparent. Other embodiments modify some of the least significant bits of each pixel with data. For example, the least significant bit could be reserved for data. That data could be encrypted to randomize it such that it appears like noise in the image. To embed information without perceptible change to the carrier image 208, any number of watermarking algorithms could be used.
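For illustration only, the least-significant-bit scheme described in paragraph [27] can be sketched as follows. This is not the patent's implementation; the pixel values and payload are hypothetical, and the optional encryption step is omitted.

```python
def embed_lsb(pixels, payload_bits):
    """Embed payload bits in the least significant bit of each pixel value."""
    if len(payload_bits) > len(pixels):
        raise ValueError("payload does not fit in the region")
    out = list(pixels)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the data bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the payload by masking off everything but the LSB."""
    return [p & 1 for p in pixels[:n_bits]]

region = [200, 117, 54, 88, 253, 10, 77, 140]  # hypothetical 8-bit pixel values
bits = [1, 0, 1, 1, 0, 0, 1, 0]
carrier = embed_lsb(region, bits)
assert extract_lsb(carrier, 8) == bits
```

Because only the lowest bit of each pixel changes, the region remains visually close to the original, which matches the "appears like noise" behavior the paragraph describes.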
[28] Referring next to FIG. 2B, a diagram of another embodiment of the carrier frame 204 is shown with embedded information 212-2 in a few rows of the carrier image 208. The last few rows could be extra rows that are transported with the carrier image 208, but often cropped out of the image. For example, these rows could be beyond what is normally found in the viewfinder. The embedded information 212-2 could overlay part of the carrier image 208 or just be tacked onto the bottom of the full carrier image 208 of an enlarged carrier frame 204.

[29] With reference to FIG. 2C, a diagram of yet another embodiment of the carrier frame 204 is shown with embedded information 212-3 and a text region 110. The text region 110 includes part or all of the information in a format that can be readily read by a person. In some embodiments, this is additional information not included in the embedded information 212-3.

[30] Referring next to FIG. 2D, a diagram of still another embodiment of the carrier frame 204 is shown with embedded information 212-4 and the text region 110. This embodiment shows the text region 110 overlaying a different portion of the carrier image 208. A particular carrier image 208 could have any number of text regions 110.

[31] With reference to FIG. 2E, a diagram of one embodiment of the carrier frame 204 is shown with embedded information 212-5, 212-6 in two separate locations within the carrier frame. These two locations may use the same or different algorithms to embed the information. The information stored in the two locations may be completely different, partially different or the same. The position and size of these locations are programmable, as is the information embedded in those locations. Other embodiments may have more than two locations to store embedded information. In some cases, the locations may partially overlap.
[32] With reference to FIG. 3A, a diagram of an embodiment of an embedded information portion 212 of a carrier frame 204 is shown. In this embodiment, each pixel can be turned on or off to represent zeros or ones of binary-encoded information. Some embodiments could use a portion of each pixel word such that when the more significant bits are masked off, the underlying data is revealed.
[33] Referring next to FIG. 3B, a diagram of another embodiment of the embedded information portion 212 of a carrier frame 204 is shown. This embodiment stores one and a half bits in each pixel such that each pixel has three states (i.e., on, off or half-on). Other embodiments could store two, four or eight states per pixel. The reader of the embedded information would digitize each analog value to determine its bits.

[34] With reference to FIG. 3C, a diagram of yet another embodiment of the embedded information portion 212 of a carrier frame 204 is shown. For embodiments that undergo various lossy transformations, a bar code could appear in the image as the embedded information 212. The bar code is likely to retain its information where other techniques might corrupt the information. For a video image, each image 208 could have a different bar code such that the information is divided into portions that are spread over a number of frames 204. An embodiment using an algorithm that would survive lossy transformations may also include another portion 212 on each image that holds the same information, but uses an algorithm that might not survive a transformation.
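The multi-state-per-pixel idea of paragraph [33] can be illustrated with a hypothetical four-level variant: each pixel carries two bits, and the reader digitizes a possibly noisy brightness back to the nearest level. The level values here are assumptions for the sketch, not taken from the patent.

```python
# Hypothetical brightness levels for an 8-bit pixel; four levels carry two bits each.
LEVELS = [0, 85, 170, 255]

def encode_symbols(symbols):
    """Map 2-bit symbols (0-3) onto pixel brightness levels."""
    return [LEVELS[s] for s in symbols]

def decode_pixels(pixels):
    """Digitize each (possibly noisy) pixel back to the nearest level's symbol."""
    return [min(range(len(LEVELS)), key=lambda s: abs(p - LEVELS[s])) for p in pixels]

clean = encode_symbols([3, 0, 2, 1])  # produces pixel values [255, 0, 170, 85]
noisy = [250, 6, 161, 90]             # simulated channel noise on those pixels
assert decode_pixels(noisy) == [3, 0, 2, 1]
```

The nearest-level decision is what the paragraph means by the reader "digitizing" each value: small brightness errors from the channel do not change which level is closest.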
[35] Referring next to FIG. 3D, a diagram of still another embodiment of the embedded information portion 212 of a carrier frame 204 is shown. This embodiment uses a two-dimensional bar code that could replace a portion 212 of the carrier image 208. The bar codes could be of any size to support transfer of the information. In some embodiments, the whole image 208 is replaced by an overlay portion 212 such that only data is in the frame 204.
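A two-dimensional code's robustness against lossy transformations comes from giving each bit many pixels, so compression noise averages out within a cell. The following is a hedged sketch of that principle; the 8x8 cell geometry and threshold are assumptions for illustration, not the patent's bar code format.

```python
def render_block_code(bits, side, block=8):
    """Render bits as a side x side grid of black/white cells (one bit per cell)."""
    assert len(bits) == side * side
    size = side * block
    img = [[0] * size for _ in range(size)]
    for i, bit in enumerate(bits):
        r0, c0 = (i // side) * block, (i % side) * block
        val = 255 if bit else 0
        for r in range(r0, r0 + block):
            for c in range(c0, c0 + block):
                img[r][c] = val
    return img

def read_block_code(img, side, block=8):
    """Recover bits by averaging each cell and thresholding at mid-gray."""
    bits = []
    for i in range(side * side):
        r0, c0 = (i // side) * block, (i % side) * block
        total = sum(img[r][c] for r in range(r0, r0 + block)
                    for c in range(c0, c0 + block))
        bits.append(1 if total / (block * block) > 127 else 0)
    return bits
```

Corrupting a few pixels inside a cell leaves its average on the correct side of the threshold, which is why this style of encoding tends to survive compression where per-pixel schemes do not.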
[36] With reference to FIG. 4A, a flow diagram of an embodiment of a process 400-1 for transporting information from the camera module 100 using the carrier image(s) 208 is shown. The depicted portion of the process 400-1 begins in step 404, where an image is captured and processed to produce a digital image at the output of the image signal processor 128. The controller 104 or the device (i.e., from outside the camera module 100) gathers status or other information to transport through the image channel in step 408. The controller 104 instructs the overlay unit 124 which portion or portions 212 will be overlaid with embedded information, and the algorithm or algorithms to use for the one or more portions 212, in step 412. The overlay algorithms are generally divided into those that survive lossy transformations and those that do not. In video images, the algorithm may rotate with the frames such that one of the algorithms is likely to survive a particular combination of lossy transformations.

[37] In step 416, the current frame is obtained. The frame may be obtained and processed by the overlay unit 124 in a serial fashion. In this embodiment, there is a compression unit 116. In step 420, a determination is made whether to compress the carrier image. Where there will be compression or another lossy transformation, this embodiment goes to step 428 and chooses one of the algorithms specified for a lossy channel. If no transformations are anticipated in the channel and no compression in the camera module, processing continues to step 424, where an algorithm is used that is less likely to pass through a lossy channel or compression. Some embodiments may use both types of algorithms and embed two or more portions 212 into the image 208. For example, one image 208 could have four carrier portions 212 that use four different overlay algorithms to encode the same information.

[38] In step 432, the image is modified to create the carrier image 208.
The carrier image 208 is stored in step 434 after any compression is performed. This storage and compression may be done in a serial fashion as the image is produced from the overlay unit 124. The modified image is output from the camera module in step 436 as a still image and/or video image in compressed and/or uncompressed form.

[39] At the other end of the image channel, a device is able to read the embedded portions 212 in step 440 to reformulate the information. Where the information is redundantly present in the image(s), algorithms may be performed to take advantage of this redundancy to lower error rates. Some embodiments may use encoding algorithms for the embedded information that allow reformulating missing information, for example, error correction bits in each byte.

[40] Referring next to FIG. 4B, a flow diagram of another embodiment of a process 400-2 for transporting information from the camera module 100 using the carrier frame(s) 204 is shown. This embodiment produces a compressed carrier image 208 and an uncompressed carrier image 208 for each image. Where an overlay algorithm for a non-lossy channel is used in steps 424 and 432, compression of the carrier image 208 in step 430 is skipped before the carrier image 208 is stored.
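The redundancy mentioned in paragraph [39] can be as simple as a repetition code: each payload bit is embedded several times, and the reader takes a majority vote to lower the error rate of a noisy image channel. A minimal sketch follows; the repetition factor of three is an arbitrary choice for illustration, not a value from the patent.

```python
def encode_repetition(bits, copies=3):
    """Repeat each payload bit so the reader can vote out isolated channel errors."""
    return [b for b in bits for _ in range(copies)]

def decode_repetition(coded, copies=3):
    """Majority-vote each group of repeated bits back to one payload bit."""
    return [1 if sum(coded[i:i + copies]) > copies // 2 else 0
            for i in range(0, len(coded), copies)]

coded = encode_repetition([1, 0, 1])  # [1, 1, 1, 0, 0, 0, 1, 1, 1]
coded[1] = 0                          # simulate a single corrupted bit
assert decode_repetition(coded) == [1, 0, 1]
```

A single flipped bit per group is corrected; stronger schemes (e.g., parity or error correction bits per byte, as the paragraph suggests) trade less overhead for more complex decoding.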
[41] With reference to FIG. 5, a flow diagram of an embodiment of a process 500 for transporting information from the camera module 100 using video images 208 is shown. In this embodiment, the information is spread over a number of images 208 that form a video. The depicted portion of the process begins in step 408, where the information is gathered. The information frame is split in step 504 into portions that are to be spread over a number of images. Placement and encoding of the embedded information is determined in step 412.

[42] The image is captured in step 404 and obtained by the overlay unit in step 416. The information portion is overlaid in step 532. In step 434, the carrier image 208 is optionally compressed and stored. The image is output from the camera module 100 as part of a video in step 436. In step 540, the receiving end of the channel extracts the information portion to reformulate the portion. If there are more portions to send, as determined in step 508, processing loops back to step 408 until all portions are sent. Once all portions have been sent, the receiving end, in step 512, reassembles the portions to determine the information.

[43] Although this embodiment puts portions in images of a video, other embodiments could put portions in still images. The portions may stand alone or require other images to reformulate them. For example, a first image may include status information on the camera module, while a second frame includes status information on the device. These two types of information could be separately used by the receiving end of the channel.

[44] While the principles of the invention have been described above in connection with specific apparatuses and methods, it is to be clearly understood that this description is made only by way of example and not as a limitation on the scope of the invention.
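The split-and-reassemble flow of FIG. 5 (steps 504, 540 and 512) can be sketched as follows. The (index, total) header attached to each portion is an assumption added so the receiver can restore order and detect missing portions; the patent does not specify a portion format.

```python
def split_payload(payload: bytes, chunk_size: int):
    """Step 504: divide the information into numbered portions, one per frame."""
    chunks = [payload[i:i + chunk_size]
              for i in range(0, len(payload), chunk_size)]
    # Tag each portion with (index, total) so the receiving end can reassemble.
    return [(idx, len(chunks), chunk) for idx, chunk in enumerate(chunks)]

def reassemble(portions):
    """Steps 540/512: reorder extracted portions and rebuild the information."""
    total = portions[0][1]
    ordered = sorted(portions, key=lambda p: p[0])
    assert len(ordered) == total, "missing portions"
    return b"".join(chunk for _, _, chunk in ordered)

portions = split_payload(b"camera status: ok", 5)
# Frames may arrive out of order; the index header restores the sequence.
assert reassemble(list(reversed(portions))) == b"camera status: ok"
```

Each portion would be embedded in one video frame by any of the overlay algorithms above; the reassembly step is independent of which embedding was used.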

Claims

WHAT IS CLAIMED IS: 1. A method for embedding machine-readable information on an image from a camera module, the method comprising steps of: acquiring a first image with an imaging array integral to the camera module; gathering information with the camera module; embedding the information in the first image to produce a second image, wherein the information can be machine read from the second image; and sending the second image away from the camera module.
2. The method for embedding machine-readable information on the image from the camera module as recited in claim 1, further comprising a step of embedding textual information in the image.
3. The method for embedding machine-readable information on the image from the camera module as recited in claim 1, wherein the information is generated within the camera module.
4. The method for embedding machine-readable information on the image from the camera module as recited in claim 1, further comprising a step of producing a video stream, wherein the second image is part of the video stream.
5. The method for embedding machine-readable information on the image from the camera module as recited in claim 1, further comprising a step of producing a video stream including a plurality of frames, wherein the information is resampled and embedded for each of the plurality.
6. The method for embedding machine-readable information on the image from the camera module as recited in claim 1, further comprising a step of compressing the second image.
7. The method for embedding machine-readable information on the image from the camera module as recited in claim 1, wherein the machine-readable information does not include text characters.
8. A computer-readable medium having computer-executable instructions for performing the computer-implementable method for embedding machine-readable information on the image from the camera module of claim 1.
9. A computer system adapted to perform the computer-implementable method for embedding machine-readable information on the image from the camera module of claim 1.
10. A camera module for embedding machine-readable information on an image, the camera module comprising: an imaging array that detects an image; conversion circuitry that converts the image into a first frame; an overlay unit that gathers information and embeds that information in the first frame to produce a second frame; and an output port coupled to the overlay unit that makes the second frame available.
11. The camera module for embedding machine-readable information on the image as recited in claim 10, further comprising a compression unit that compresses the second frame.
12. The camera module for embedding machine-readable information on the image as recited in claim 10, wherein the second frame is part of a video stream available on the output port.
13. The camera module for embedding machine-readable information on the image as recited in claim 10, further comprising a receiving device that is remote to the output port, but coupled to the output port, wherein the receiving device automatically extracts the information from the second frame.
14. A method for embedding machine-readable information in a video stream from a camera module, the method comprising steps of: gathering information with the camera module; dividing the information into a plurality of portions; acquiring a first image with an imaging array integral to the camera module; embedding a first portion of the plurality of portions in the first image to produce a first carrier image, wherein the first portion can be machine read from the first carrier image; sending the first carrier image away from the camera module; acquiring a second image with the imaging array integral to the camera module; embedding a second portion of the plurality of portions in the second image to produce a second carrier image, wherein the second portion can be machine read from the second carrier image; and sending the second carrier image away from the camera module.
15. The method for embedding machine-readable information in the video stream from the camera module as recited in claim 14, further comprising a step of embedding textual information in at least one of the first and second carrier image.
16. The method for embedding machine-readable information in the video stream from the camera module as recited in claim 14, wherein the information is generated within the camera module.
17. The method for embedding machine-readable information in the video stream from the camera module as recited in claim 14, further comprising a step of compressing at least one of the first and second carrier image.
18. A computer-readable medium having computer-executable instructions for performing the computer-implementable method for embedding machine-readable information in the video stream from the camera module of claim 14.
19. A computer system adapted to perform the computer-implementable method for embedding machine-readable information in the video stream from the camera module of claim 14.
PCT/US2004/022497 2003-07-15 2004-07-13 Data overlay for a digital camera module WO2005011300A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US48775803P 2003-07-15 2003-07-15
US60/487,758 2003-07-15

Publications (2)

Publication Number Publication Date
WO2005011300A2 true WO2005011300A2 (en) 2005-02-03
WO2005011300A3 WO2005011300A3 (en) 2005-06-09

Family

ID=34102718

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/022497 WO2005011300A2 (en) 2003-07-15 2004-07-13 Data overlay for a digital camera module

Country Status (1)

Country Link
WO (1) WO2005011300A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5499294A (en) * 1993-11-24 1996-03-12 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Digital camera with apparatus for authentication of images produced from an image file
US5633678A (en) * 1995-12-20 1997-05-27 Eastman Kodak Company Electronic still camera for capturing and categorizing images
US6044156A (en) * 1997-04-28 2000-03-28 Eastman Kodak Company Method for generating an improved carrier for use in an image data embedding application
US20020114488A1 (en) * 1998-04-10 2002-08-22 Hirofumi Suda Image processing apparatus, image processing method and computer readable memory medium


Also Published As

Publication number Publication date
WO2005011300A3 (en) 2005-06-09


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase