US20080292132A1 - Method And System For Inserting Software Processing In A Hardware Image Sensor Pipeline - Google Patents


Info

Publication number
US20080292132A1
US20080292132A1 (application US 11/940,788)
Authority
US
United States
Prior art keywords
processing
hardware
isp
software
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/940,788
Other versions
US20090232347A9
US9058668B2
Inventor
David Plowman
Gary Keall
Clive Walker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Broadcom Corp
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadcom Corp filed Critical Broadcom Corp
Priority to US11/940,788
Assigned to BROADCOM CORPORATION. Assignors: KEALL, GARY; PLOWMAN, DAVID A.; WALKER, CLIVE
Publication of US20080292132A1
Publication of US20090232347A9
Application granted
Publication of US9058668B2
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00: Digital computers in general; Data processing equipment in general
    • G06F 15/76: Architectures of general purpose stored program computers
    • G06F 15/78: Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F 15/7807: System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30: Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38: Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F 9/3867: Concurrent instruction execution, e.g. pipeline, look ahead using instruction pipelines
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00127: Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N 1/00281: Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture, with a telecommunication apparatus, e.g. a switched network of teleprinters for the distribution of text-based information, a selective call terminal
    • H04N 1/00307: Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture, with a telecommunication apparatus, e.g. a switched network of teleprinters for the distribution of text-based information, a selective call terminal with a mobile telephone apparatus
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/76: Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00: Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/0096: Portable devices

Definitions

  • Certain embodiments of the invention relate to processing of images. More specifically, certain embodiments of the invention relate to a method and system for inserting software processing in a hardware image sensor pipeline.
  • For many people, mobile or handheld electronic devices have become a part of everyday life. Mobile devices have evolved from a convenient method for voice communication to multi-functional resources that offer, for example, camera features, media playback, electronic gaming, internet browsing, email and office assistance.
  • A method and system for inserting software processing in a hardware image sensor pipeline, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
  • FIG. 1A is a block diagram of an exemplary mobile multimedia system, in accordance with an embodiment of the invention.
  • FIG. 1B is a block diagram of an exemplary mobile multimedia processor, in accordance with an embodiment of the invention.
  • FIG. 2A is a block diagram of an exemplary mobile device configured to perform image processing via a hardware image sensor pipeline (ISP) and a software program executed by a processor, in accordance with an embodiment of the invention.
  • ISP hardware image sensor pipeline
  • FIG. 2B is a block diagram of an exemplary portion of a hardware ISP configured for insertion of software processing between hardware ISP stages, in accordance with an embodiment of the invention.
  • FIG. 3 is a flow chart illustrating exemplary steps for processing image data via a hardware ISP with software processing steps inserted between hardware ISP processing stages, in accordance with an embodiment of the invention.
  • Certain aspects of the invention may be found in a method and system for inserting software processing between hardware image sensor pipeline (ISP) processing stages on a mobile imaging device.
  • Data may be tapped or removed from any stage of the hardware image sensor pipeline and sent to a software process for processing.
  • the resulting software processed data may then be reinserted at any stage of the hardware image sensor pipeline for processing.
  • Data may be tapped from the hardware ISP, communicated to a software process, and reinserted back into any point of the hardware image sensor pipeline as many times as may be necessary for processing.
  • the hardware ISP may comprise a plurality of hardware processing stages wherein one or more hardware processing stages may be communicatively coupled with random access memory and/or one or more processors.
  • the hardware ISP, one or more processors and/or memory may be integrated on a chip.
  • a processor may direct transmission of mega pixel images from an image source to the hardware ISP.
  • Image data may be received and processed by one or more hardware stages within the hardware ISP and a processed output may be stored in memory. Subsequently, a processor may retrieve the hardware ISP processed output from memory, perform one or more software processing steps and store the results in memory.
  • the processor may signal any ISP hardware stage to fetch the software processing output from memory for additional processing within the hardware ISP if needed.
  • the processed image data output from any hardware ISP stage or software processing step may be stored in memory for future use. Accordingly, the hardware ISP as well as one or more processors may simultaneously process different portions of image data. Utilizing software for image data processing enables modification of processing algorithms and/or techniques while utilizing the same hardware. In some embodiments of the invention, image data may be processed in a tiled format.
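  • As a rough, hedged illustration of that tap-and-reinsert flow, the C sketch below models two hardware ISP stages and an intervening software step exchanging a tile buffer through shared memory. All names (hw_stage_early, sw_step, hw_stage_late) and the trivial arithmetic inside the stubs are invented for illustration; only the order of operations mirrors the description above.

        /* Hedged sketch: a software step inserted between two hardware ISP
         * stages via a shared RAM buffer.  The stage functions are stubs
         * standing in for hardware; names and arithmetic are illustrative. */
        #include <stdint.h>
        #include <stdio.h>

        #define TILE_PIXELS (64 * 64)

        static uint16_t ram_buffer[TILE_PIXELS];   /* stands in for on-chip RAM */

        /* Stub for an early hardware stage, e.g. dark pixel compensation. */
        static void hw_stage_early(const uint16_t *in, uint16_t *out, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = (in[i] > 64) ? (uint16_t)(in[i] - 64) : 0;
        }

        /* Software step run on a processor core; any modifiable algorithm
         * could be substituted here without changing the hardware. */
        static void sw_step(uint16_t *buf, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                buf[i] = (uint16_t)((buf[i] * 3) / 2);   /* trivial gain */
        }

        /* Stub for a later hardware stage that fetches the software output. */
        static void hw_stage_late(uint16_t *buf, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                buf[i] = (uint16_t)(buf[i] >> 2);        /* crude scaling */
        }

        int main(void)
        {
            uint16_t sensor_tile[TILE_PIXELS];
            for (size_t i = 0; i < TILE_PIXELS; i++)
                sensor_tile[i] = (uint16_t)(i & 0x3FF);  /* fake 10-bit data */

            hw_stage_early(sensor_tile, ram_buffer, TILE_PIXELS); /* HW -> RAM */
            sw_step(ram_buffer, TILE_PIXELS);                     /* tap: SW   */
            hw_stage_late(ram_buffer, TILE_PIXELS);               /* reinsert  */

            printf("first output pixel: %d\n", (int)ram_buffer[0]);
            return 0;
        }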
  • FIG. 1A is a block diagram of an exemplary mobile multimedia system, in accordance with an embodiment of the invention.
  • a mobile multimedia system 105 that comprises a mobile multimedia device 105 a , a TV 101 h , a PC 101 k , an external camera 101 m , external memory 101 n , and external LCD display 101 p .
  • the mobile multimedia device 105 a may be a cellular telephone or other handheld communication device.
  • the mobile multimedia device 105 a may comprise a mobile multimedia processor (MMP) 101 a , an antenna 101 d , an audio block 101 s , a radio frequency (RF) block 101 e , a baseband processing block 101 f , an LCD display 101 b , a keypad 101 c , and a camera 101 g.
  • MMP mobile multimedia processor
  • RF radio frequency
  • the MMP 101 a may comprise suitable circuitry, logic, and/or code and may be adapted to perform video and/or multimedia processing for the mobile multimedia device 105 a .
  • the MMP 101 a may further comprise a plurality of processor cores, indicated in FIG. 1A by Core 1 and Core 2 as well as a hardware image sensor pipeline (ISP) 101 x .
  • the MMP 101 a may also comprise integrated interfaces, which may be utilized to support one or more external devices coupled to the mobile multimedia device 105 a .
  • the MMP 101 a may support connections to a TV 101 h , a PC 101 k , an external camera 101 m , external memory 101 n , and an external LCD display 101 p.
  • the mobile multimedia device may receive signals via the antenna 101 d .
  • Received signals may be processed by the RF block 101 e and the RF signals may be converted to baseband by the baseband processing block 101 f .
  • Baseband signals may then be processed by the MMP 101 a .
  • Audio and/or video data may be received from the external camera 101 m , and image data may be received via the integrated camera 101 g .
  • the image data may be forwarded to the hardware ISP 101 x for a plurality of image data processing steps.
  • the image data may be passed between the hardware ISP and one or more of the MMP 101 a processor cores for software processing.
  • Image processing software may be modifiable providing flexibility in processing algorithms and/or techniques.
  • concurrent processing operations may occur within one or more MMP 101 a processing cores and within the hardware ISP 101 x .
  • software processing may not reduce the speed of processing via the hardware ISP.
  • Image data may be processed in tile format, which may reduce the memory requirements for buffering of data during processing.
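  • To make the buffering saving concrete, the short calculation below compares the memory needed to buffer a full frame against a single tile. The figures (a roughly 5-megapixel frame, a 128x128 tile, 2 bytes per pixel) are assumed for illustration and do not come from the patent itself.

        /* Illustrative buffering comparison; the resolution, tile size and
         * bytes-per-pixel are assumed values, not taken from the patent. */
        #include <stdio.h>

        int main(void)
        {
            const long frame_w = 2592, frame_h = 1944;   /* ~5-megapixel frame    */
            const long tile_w  = 128,  tile_h  = 128;    /* one processing tile   */
            const long bytes_per_pixel = 2;              /* 10-bit raw in 16 bits */

            long frame_bytes = frame_w * frame_h * bytes_per_pixel;
            long tile_bytes  = tile_w * tile_h * bytes_per_pixel;

            printf("full-frame buffer: %ld bytes (~%.1f MB)\n",
                   frame_bytes, frame_bytes / (1024.0 * 1024.0));
            printf("single-tile buffer: %ld bytes (~%.1f KB)\n",
                   tile_bytes, tile_bytes / 1024.0);
            return 0;
        }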
  • the MMP 101 a may utilize the external memory 101 n for storing processed data.
  • Processed audio data may be communicated to the audio block 101 s and processed video data may be communicated to the LCD 101 b or the external LCD 101 p , for example.
  • the keypad 101 c may be utilized for communicating processing commands and/or other data, which may be required for audio or video data processing by the MMP 101 a.
  • FIG. 1B is a block diagram of an exemplary mobile multimedia processor, in accordance with an embodiment of the invention.
  • the mobile multimedia processor 102 may comprise suitable logic, circuitry and/or code that may be adapted to perform video and/or multimedia processing for handheld multimedia products.
  • the mobile multimedia processor 102 may be designed and optimized for video record/playback, mobile TV and 3D mobile gaming, utilizing integrated peripherals and a video processing core.
  • the mobile multimedia processor 102 may comprise video processing cores 103 A and 103 B, RAM 104 , an analog block 106 , a direct memory access (DMA) controller 163 , an audio interface (I/F) 142 , a memory stick I/F 144 , SD card I/F 146 , JTAG I/F 148 , TV output I/F 150 , USB I/F 152 , a camera I/F 154 , and a host I/F 129 .
  • DMA direct memory access
  • the mobile multimedia processor 102 may further comprise a serial peripheral interface (SPI) 157 , a universal asynchronous receiver/transmitter (UART) I/F 159 , general purpose input/output (GPIO) pins 164 , a display controller 162 , an external memory I/F 158 , and a second external memory I/F 160 .
  • SPI serial peripheral interface
  • UART universal asynchronous receiver/transmitter
  • GPIO general purpose input/output
  • the video processing cores 103 A and 103 B may comprise suitable circuitry, logic, and/or code and may be adapted to perform video processing of data.
  • the RAM 104 may comprise suitable logic, circuitry and/or code that may be adapted to store on-chip data such as video data. In an exemplary embodiment of the invention, the RAM 104 may be adapted to store 10 Mbits of on-chip data, for example. The size of the on-chip RAM 104 may vary depending on cost or other factors such as chip size.
  • the hardware image sensor pipeline (ISP) 103 C may comprise suitable circuitry, logic and/or code that may enable the processing of image data.
  • the hardware ISP 103 C may perform a plurality of processing techniques comprising dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting, for example.
  • the hardware ISP 103 C may be communicatively coupled with the video processing cores 103 A and/or 103 B via the on-chip RAM 104 .
  • the processing of image data may be performed on variable sized tiles, reducing the memory requirements of the hardware ISP 103 C processes.
  • the hardware image sensor pipeline 103 C may be tapped at any point and resulting tapped data may be communicated to a software process for handling.
  • the resulting software processed data may then be reinserted back into the hardware image sensor pipeline 103 C at any stage or point for continued processing.
  • Data may be tapped from the hardware image sensor pipeline 103 C at any point, communicated to a software process for processing, and reinserted back into any point of the hardware ISP pipeline 103 C as many times as may be necessary for processing.
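  • One way to picture how such a tap point might be described in software is a small descriptor naming the stage whose output is written to memory, the stage that later fetches the software-processed result, and the software hook to run in between. The sketch below is a hypothetical data structure only; the stage names merely mirror the processing steps listed above and do not reflect any actual register layout or driver interface.

        /* Hypothetical descriptor for inserting a software hook between
         * hardware ISP stages; the enum mirrors the stage list above, but
         * the layout is an assumption, not an actual hardware interface. */
        #include <stddef.h>
        #include <stdint.h>

        enum isp_stage {
            ISP_DARK_PIXEL, ISP_LENS_SHADING, ISP_WHITE_BALANCE,
            ISP_DEFECTIVE_PIXEL, ISP_RESAMPLE, ISP_CROSSTALK,
            ISP_BAYER_DENOISE, ISP_DEMOSAIC, ISP_GAMMA, ISP_YCBCR_DENOISE,
            ISP_FALSE_COLOR, ISP_SHARPEN, ISP_DISTORTION, ISP_HIRES_RESIZE,
            ISP_COLOR_PROC, ISP_COLOR_CONVERT, ISP_LORES_RESIZE,
            ISP_OUTPUT_FORMAT, ISP_NUM_STAGES
        };

        typedef void (*sw_hook_fn)(uint16_t *tile, size_t n_pixels);

        struct isp_tap {
            enum isp_stage tap_after;       /* stage whose output goes to RAM   */
            enum isp_stage reinsert_before; /* stage that fetches the SW output */
            sw_hook_fn     hook;            /* software step run on a CPU core  */
        };

        /* Example: run a (no-op) hook between demosaicing and gamma correction. */
        static void my_hook(uint16_t *tile, size_t n) { (void)tile; (void)n; }
        static const struct isp_tap example_tap = { ISP_DEMOSAIC, ISP_GAMMA, my_hook };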
  • the analog block 106 may comprise a switch mode power supply (SMPS) block and a phase locked loop (PLL) block.
  • SMPS switch mode power supply
  • PLL phase locked loop
  • the analog block 106 may comprise an on-chip SMPS controller, which may be adapted to generate its core voltage.
  • the core voltage may be software programmable according to, for example, speed demands on the mobile multimedia processor 102 , allowing further control of power management.
  • the normal core operating range may be about 0.8 V-1.2 V and may be reduced to about 0.6 V during hibernate mode.
  • the analog block 106 may also comprise a plurality of PLL's that may be adapted to generate about 195 kHz-200 MHz clocks, for example, for external devices. Other voltages and clock speeds may be utilized depending on the type of application.
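  • Purely as an illustration of how such limits might be captured in software, the sketch below collects the quoted figures (0.8 V-1.2 V normal core range, 0.6 V hibernate, 195 kHz-200 MHz PLL clocks) into a hypothetical configuration record and clamps a requested core voltage to the normal range; the field names and the clamping policy are assumptions.

        /* Hypothetical record of the power/clock figures quoted above; the
         * field names and the clamping policy are assumptions. */
        #include <stdio.h>

        struct power_limits {
            double vcore_min_v, vcore_max_v;   /* normal core operating range */
            double vcore_hibernate_v;          /* reduced hibernate level     */
            double pll_min_hz, pll_max_hz;     /* PLL output clock range      */
        };

        static double clamp(double v, double lo, double hi)
        {
            return v < lo ? lo : (v > hi ? hi : v);
        }

        int main(void)
        {
            const struct power_limits lim = { 0.8, 1.2, 0.6, 195e3, 200e6 };
            double requested_vcore = 1.35;     /* deliberately out of range   */
            double granted = clamp(requested_vcore, lim.vcore_min_v, lim.vcore_max_v);

            printf("core voltage request %.2f V -> granted %.2f V\n",
                   requested_vcore, granted);
            printf("hibernate core voltage: %.2f V\n", lim.vcore_hibernate_v);
            printf("PLL range: %.0f Hz to %.0f Hz\n", lim.pll_min_hz, lim.pll_max_hz);
            return 0;
        }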
  • the mobile multimedia processor 102 may comprise a plurality of power modes of operation, for example, run, sleep, hibernate and power down.
  • the mobile multimedia processor 102 may comprise a bypass mode that may allow a host to access memory mapped peripherals in power down mode, for example. In bypass mode, the mobile multimedia processor 102 may be adapted to directly control the display during normal operation while giving a host the ability to maintain the display during standby mode.
  • the audio block 108 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via an inter-IC sound (I2S), pulse code modulation (PCM) or audio codec (AC'97) interface 142, or other suitable interface, for example.
  • I2S inter-IC sound
  • PCM pulse code modulation
  • AC'97 audio codec
  • In the case of an AC'97 and/or an I2S interface, a suitable audio controller, processor and/or circuitry may be adapted to provide AC'97 and/or I2S audio output, respectively, in either master or slave mode.
  • a suitable audio controller, processor and/or circuitry may be adapted to allow input and output of telephony or high quality stereo audio.
  • the PCM audio controller, processor and/or circuitry may comprise independent transmit and receive first in first out (FIFO) buffers and may use DMA to further reduce processor overhead.
  • the audio block 108 may also comprise an audio in, audio out port and a speaker/microphone port (not illustrated in FIG. 1B ).
  • the mobile multimedia device 100 may comprise at least one portable memory input/output (I/O) block.
  • the memorystick block 110 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a memorystick pro interface 144 , for example.
  • the SD card block 112 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a SD input/output (I/O) interface 146 , for example.
  • a multimedia card (MMC) may also be utilized to communicate with the mobile multimedia processor 102 via the SD input/output (I/O) interface 146 , for example.
  • the mobile multimedia device 100 may comprise other portable memory I/O blocks, such as an xD I/O card.
  • the debug block 114 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a joint test action group (JTAG) interface 148 , for example.
  • JTAG joint test action group
  • the debug block 114 may be adapted to access the address space of the mobile multimedia processor 102 and may be adapted to perform boundary scan via an emulation interface.
  • Other test access ports (TAPs) may be utilized.
  • PAL phase alternate line
  • NTSC national television standards committee
  • USB universal serial bus
  • The phase alternate line (PAL)/national television standards committee (NTSC) TV output I/F 150 may be utilized for communication with a TV, and the universal serial bus (USB) slave port I/F 152 may be utilized for communications with a PC, for example.
  • the cameras 120 and/or 122 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a multiformat raw CCIR 601 camera interface 154 , for example.
  • the camera I/F 154 may utilize windowing and sub-sampling functions, for example, to connect the mobile multimedia processor 102 to a mobile TV front end.
  • the mobile multimedia processor 102 may also comprise a plurality of serial interfaces, such as the USB I/F 152 , a serial peripheral interface (SPI) 157 , and a universal asynchronous receiver/transmitter (UART) I/F 159 for Bluetooth or IrDA.
  • the SPI master interface 157 may comprise suitable circuitry, logic, and/or code and may be utilized to control image sensors. Two chip selects may be provided, for example, to work in a polled mode with interrupts or via a DMA controller 163 .
  • the mobile multimedia processor 102 may comprise a plurality of general purpose I/O (GPIO) pins 164 , which may be utilized for user defined I/O or to connect to the internal peripherals.
  • the display controller 162 may comprise suitable circuitry, logic, and/or code and may be adapted to support multiple displays with XGA resolution, for example, and to handle 8/9/16/18/21-bit video data.
  • the baseband flash memory 124 may be adapted to receive data from the mobile multimedia processor 102 via an 8/16 bit parallel host interface 129 , for example.
  • the host interface 129 may be adapted to provide two channels with independent address and data registers through which a host processor may read and/or write directly to the memory space of the mobile multimedia processor 102 .
  • the baseband processing block 126 may comprise suitable logic, circuitry and/or code that may be adapted to convert RF signals to baseband and communicate the baseband processed signals to the mobile multimedia processor 102 via the host interface 129 , for example.
  • the RF processing block 130 may comprise suitable logic, circuitry and/or code that may be adapted to receive signals via the antenna 132 and to communicate RF signals to the baseband processing block 126 .
  • the host interface 129 may comprise a dual software channel with a power efficient bypass mode.
  • the main LCD 134 may be adapted to receive data from the mobile multimedia processor 102 via a display controller 162 and/or from a second external memory interface 160 , for example.
  • the display controller 162 may comprise suitable logic, circuitry and/or code and may be adapted to drive an internal TV out function or be connected to a range of LCD's.
  • the display controller 162 may be adapted to support a range of screen buffer formats and may utilize direct memory access (DMA) to access the buffer directly and increase video processing efficiency of the video processing core 103 .
  • DMA direct memory access
  • Both NTSC and PAL raster formats may be generated by the display controller 162 for driving the TV out.
  • Other formats for example SECAM, may also be supported.
  • the display controller 162 may be adapted to support a plurality of displays, such as an interlaced display, for example a TV, and/or a non-interlaced display, such as an LCD.
  • the display controller 162 may also recognize and communicate a display type to the DMA controller 163 .
  • the DMA controller 163 may fetch video data in an interlaced or non-interlaced fashion for communication to an interlaced or non-interlaced display coupled to the mobile multimedia processor 102 via the display controller 162 .
  • the substitute LCD 136 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a second external memory interface, for example.
  • the mobile multimedia processor 102 may comprise an RGB external data bus.
  • the mobile multimedia processor 102 may be adapted to scale image output with pixel level interpolation and a configurable refresh rate.
  • the optional flash memory 138 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via an external memory interface 158 , for example.
  • the optional SDRAM 140 may comprise suitable logic, circuitry and/or code that may be adapted to receive data from the mobile multimedia processor 102 via the external memory interface 158 , for example.
  • the external memory I/F 158 may be utilized by the mobile multimedia processor 102 to connect to external SDRAM 140 , SRAM, Flash memory 138 , and/or external peripherals, for example. Control and timing information for the SDRAM 140 and other asynchronous devices may be configurable by the mobile multimedia processor 102 .
  • the mobile multimedia processor 102 may further comprise a secondary memory interface 160 to connect to memory-mapped LCD and external peripherals, for example.
  • the secondary memory interface 160 may comprise suitable circuitry, logic, and/or code and may be utilized to connect the mobile multimedia processor 102 to slower devices without compromising the speed of external memory access.
  • the secondary memory interface 160 may provide 16 data lines, for example, 6 chip select/address lines, and programmable bus timing for setup, access and hold times, for example.
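  • A configuration record for such an interface could look roughly like the sketch below; only the line counts (16 data lines, 6 chip select/address lines) and the programmable setup/access/hold timing come from the text above, while the field names and the example cycle counts are assumptions.

        /* Hypothetical configuration record for the secondary memory
         * interface: the 16 data lines, 6 chip select/address lines and the
         * programmable setup/access/hold times come from the description;
         * the field names and cycle counts are assumptions. */
        #include <stdint.h>
        #include <stdio.h>

        struct secondary_mem_cfg {
            uint8_t data_lines;      /* 16 data lines               */
            uint8_t chip_selects;    /* 6 chip select/address lines */
            uint8_t setup_cycles;    /* programmable bus setup time  */
            uint8_t access_cycles;   /* programmable bus access time */
            uint8_t hold_cycles;     /* programmable bus hold time   */
        };

        int main(void)
        {
            struct secondary_mem_cfg lcd_cfg = {
                .data_lines = 16, .chip_selects = 6,
                .setup_cycles = 2, .access_cycles = 4, .hold_cycles = 1,
            };
            printf("secondary bus: %d data lines, %d chip selects\n",
                   (int)lcd_cfg.data_lines, (int)lcd_cfg.chip_selects);
            return 0;
        }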
  • the mobile multimedia processor 102 may be adapted to provide support for NAND/NOR Flash including NAND boot and high speed direct memory access (DMA), for example.
  • DMA direct memory access
  • the mobile multimedia processor 102 may be integrated with a hardware image sensor pipeline (ISP) 103 C.
  • ISP hardware image sensor pipeline
  • a plurality of image processing steps may be performed on a unit of image data wherein a portion of the steps may be performed in various stages of hardware by the hardware ISP 103 C and/or another portion of processing steps may be performed in software by one or more processing cores 103 A and/or 103 B for example.
  • Image processing steps may comprise dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting for example.
  • Output from one or more of the image processing steps may be stored for future or alternative use.
  • FIG. 2A is a block diagram of an exemplary mobile device configured to perform image processing via a hardware image sensor pipeline (ISP) and a software program executed by a processor, in accordance with an embodiment of the invention.
  • an image processing system 200 comprising an image source 201 , a random access memory (RAM) 203 , a processing block 205 , a display 207 , a hardware image sensor pipeline (ISP) 209 and a non-volatile memory (NVM) 211 .
  • ISP hardware image sensor pipeline
  • NVM non-volatile memory
  • the image source 201 may comprise suitable circuitry, logic and/or code to detect a visual image and convert light to an electrical signal representing the image.
  • the image source 201 may comprise, for example, a multi-megapixel charge-coupled device (CCD) array, a complementary metal oxide semiconductor (CMOS) array or another related technology.
  • the image source 201 may be communicatively coupled with the RAM 203 and the processing block 205 .
  • the processing block 205 may comprise suitable circuitry, logic and/or code that may be enabled to process image data via software program and to manage and/or regulate image processing in tasks among a plurality of functional units comprising the image source 201 , hardware ISP 209 , RAM 203 , display 207 and NVM 211 .
  • the processing block 205 may be similar or substantially the same as the mobile multimedia processor (MMP) 101 a described with respect to FIG. 1A and/or the MMP 102 described with respect to FIG. 1B .
  • the processing block 205 may exchange image data with the image source 201 , the hardware ISP 209 , the RAM 203 and/or the NVM 211 .
  • the processing block 205 may be enabled to perform software image processing tasks or steps comprising dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting for example.
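  • As a simplified example of the kind of software step the processing block 205 might run, and that could be changed without touching the hardware, the sketch below applies gamma correction to 10-bit pixel data through a lookup table. It is an illustrative stand-in only, not the pipeline's actual gamma stage, and the gamma value of 2.2 is just a commonly used figure.

        /* Simplified software gamma-correction step on 10-bit pixel data,
         * implemented as a lookup table.  Purely illustrative; it is not the
         * ISP's actual gamma stage.  (Link with -lm for pow().) */
        #include <math.h>
        #include <stdint.h>
        #include <stdio.h>

        #define MAX_CODE 1023                 /* 10-bit pixel values */

        static uint16_t gamma_lut[MAX_CODE + 1];

        static void build_gamma_lut(double gamma)
        {
            for (int i = 0; i <= MAX_CODE; i++)
                gamma_lut[i] = (uint16_t)(pow((double)i / MAX_CODE, 1.0 / gamma)
                                          * MAX_CODE + 0.5);
        }

        /* The step that would run on a processor core between hardware stages. */
        static void sw_gamma_step(uint16_t *tile, size_t n_pixels)
        {
            for (size_t i = 0; i < n_pixels; i++)
                tile[i] = gamma_lut[tile[i] & MAX_CODE];
        }

        int main(void)
        {
            uint16_t tile[4] = { 0, 128, 512, 1023 };
            build_gamma_lut(2.2);             /* commonly used display gamma */
            sw_gamma_step(tile, 4);
            for (int i = 0; i < 4; i++)
                printf("%d ", (int)tile[i]);
            printf("\n");
            return 0;
        }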
  • the processing block 205 may be enabled to receive image data output from any processing stage in the hardware ISP 209 and to perform software image processing steps on the received image data. The output from software image processing steps may be sent to the RAM 203 .
  • the processor 205 may issue a command to the hardware ISP 209 to fetch the software processed image data in RAM 203 and to further process the fetched image data.
  • Software image processing steps may be inserted before or after any stage of image processing within the hardware ISP hardware.
  • image data output from any software or hardware processing step or stage may be stored in the NVM 211 for future use.
  • the processing block 205 may direct processed image data to the display 207 and/or the NVM 211 .
  • Image data may be processed in variable size tiles.
  • the display 207 may comprise suitable circuitry, logic and/or code for displaying an image received from the system 200 and/or a storage device.
  • the display 207 may receive control information and/or commands from the processor 205 and may be communicatively coupled with the processor 205 , hardware ISP 209 , RAM 203 and/or the NVM 211 .
  • the RAM 203 may comprise suitable circuitry, logic and/or code for storing data.
  • the RAM 203 may be similar or substantially the same as the RAM 104 described in FIG. 1B .
  • the RAM 203 may be utilized to store image data after various steps or stages of processing, for example, during an exchange of image data between the hardware ISP 209 and processor 205 .
  • the RAM 203 may store configuration data related to image processing. For example, characteristics of the image source 201 may be measured at the time of manufacture, and the distortion of the optics across a resulting image may be stored in the RAM 203 .
  • the hardware ISP 209 may comprise suitable circuitry, logic and/or code that may enable processing of image data received from the image source 201 .
  • the hardware ISP 209 may comprise circuitry allocated for image processing tasks such as steps or stages comprising dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting for example. Processing steps or stages may be performed by hardware in the hardware ISP 209 and/or by software stored in the RAM 203 and executed by the processor 205 .
  • image processing performed via software processes may be inserted before or after one or more of the hardware ISP image processing stages.
  • the processor 205 may issue a command to the hardware ISP 209 to fetch the software processed image data in RAM 203 and to further process the fetched image data.
  • the NVM 211 may comprise suitable circuitry, logic and/or code for storing data.
  • the NVM 211 may be similar to or substantially the same as the memorystick block 110 , the baseband flash memory 124 , the optional flash memory 138 and/or the SDRAM 140 described in FIG. 1B for example.
  • the NVM 211 may be communicatively coupled to the RAM 203 , processing block 205 and/or the hardware ISP 209 .
  • the processor 205 may receive image data from the image source 201 .
  • the processor 205 may provide clock and control signals for synchronizing transfer of image data from the image source 201 .
  • Image data may be in tiled format and processing may begin when a tile is received.
  • the size of tiles may be determined by distortion in the image data that may be due to optical effects. Smaller sized tiles may be utilized in areas of the image where there may be higher distortion, such as around the edges, for example.
  • the tile sizes may be determined by the distortion characteristics stored in the RAM 203 .
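  • A very rough sketch of how tile size might be chosen from stored distortion characteristics is shown below; the distortion metric, the thresholds and the candidate tile sizes are all assumptions made up for illustration, reflecting only the idea that more distorted regions (typically near the edges) get smaller tiles.

        /* Hypothetical tile-size selection: smaller tiles where the stored
         * optical distortion is larger (typically toward the image edges).
         * The distortion metric, thresholds and tile sizes are assumptions. */
        #include <stdio.h>

        static int tile_size_for_region(double distortion_fraction)
        {
            if (distortion_fraction > 0.05) return 32;   /* heavy distortion */
            if (distortion_fraction > 0.02) return 64;
            return 128;                                  /* near the centre  */
        }

        int main(void)
        {
            /* Example distortion values, e.g. measured at manufacture and
             * stored in the RAM 203. */
            const double regions[3] = { 0.005, 0.03, 0.08 };  /* centre, mid, corner */
            for (int i = 0; i < 3; i++) {
                int sz = tile_size_for_region(regions[i]);
                printf("region %d -> %dx%d tiles\n", i, sz, sz);
            }
            return 0;
        }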
  • the image data may be passed to the hardware ISP for various processing steps, for example, dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting.
  • the output of one or more hardware ISP image processing steps may be stored in the RAM 203 .
  • the processor 205 may fetch the image data from the RAM 203 and may perform one or more image processing steps via software. The output from the software processing steps may be returned to RAM 203 .
  • the processor 205 may direct a subsequent hardware processing step within the hardware ISP to fetch the software processed image data from the RAM 203 and to continue image processing steps within the hardware ISP 209 .
  • the hardware ISP 209 is capable of being tapped at any point and resulting tapped data may be communicated to a software process for handling.
  • the resulting software processed data may then be reinserted back into the hardware image sensor pipeline 209 at any stage or point for continued processing. Data may be tapped from the hardware image sensor pipeline 209 at any point, communicated to a software process for processing, and reinserted back into any point of the hardware ISP pipeline 209 as many times as may be necessary for processing.
  • the data may be stored in the RAM 203 prior to being communicated to the display 207 .
  • the processor 205 may communicate address data to the RAM 203 to determine where to read or write processed image data in the RAM 203 .
  • Output from various intermediate steps or a final step of image processing may be stored for future use in the NVM 211 .
  • FIG. 2B is a block diagram of an exemplary portion of a hardware ISP configured for insertion of software processing between hardware ISP stages, in accordance with an embodiment of the invention.
  • Referring to FIG. 2B , there are shown three hardware ISP processing stages 217 , 219 and 221 , a random access memory (RAM) 203 , and a processor 205 .
  • the processor 205 and RAM 203 may be similar or substantially the same as the processor 205 and RAM 203 described in FIG. 2A .
  • the hardware ISP processing stages 217 , 219 and 221 may each perform an image processing task that may comprise, for example, dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and/or output formatting for example.
  • the hardware ISP stages 217 , 219 and 221 may each be communicatively coupled with a previous hardware ISP processing stage and/or a subsequent hardware ISP processing stage as well as the RAM 203 and the processor 205 .
  • the hardware ISP processing stages 217 , 219 and 221 may represent a portion of the processing stages comprised within the hardware ISP 209 described in FIG. 2A . Accordingly, the hardware ISP processing stages 217 , 219 and 221 may comprise suitable circuitry, logic and/or code to enable processing of image data received from the image source 201 , to receive control signals from the processor 205 and to send and receive image data to and from the RAM 203 . Image processing software may be stored in the RAM 203 and executed by the processor 205 .
  • a unit of image data may be processed sequentially via the hardware ISP processing stages 217 , 219 and 221 and/or may be passed to the processor 205 for software image processing before and/or after one or more of the hardware ISP processing stages 217 , 219 and 221 .
  • the processor 205 may issue commands to the hardware ISP 209 to process a unit of image data within stage 217 and to send output to RAM 203 .
  • the processor 205 may retrieve and software process the hardware ISP stage 217 output from RAM 203 and may send software processing output to RAM 203 .
  • the processor 205 may issue commands to the ISP stage 219 to retrieve and process the software processing output from RAM 203 and send its output to hardware ISP processing stage 221 for additional processing.
  • the processor 205 may issue commands to the hardware ISP stage 221 to retrieve and process output from the hardware ISP stage 219 . Moreover, multiple units of image data may be processed simultaneously within the hardware ISP stages 217 , 219 , 221 and one or more processing cores in processor 205 .
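  • The command sequence above lends itself to a simple pipelined schedule in which, during any one time slot, stage 217 , the software step and stages 219 and 221 can each be working on a different tile. The sketch below only prints such a schedule to illustrate the overlap; it does not model real hardware, and the four-engine ordering is an assumption drawn from the example sequence above.

        /* Illustrative pipelined schedule for FIG. 2B: in any one time slot
         * the hardware stages and the processor's software step can each hold
         * a different tile.  This only prints a schedule; it does not model
         * real hardware, and the engine ordering is an assumption. */
        #include <stdio.h>

        #define NUM_TILES 5

        int main(void)
        {
            /* Per-tile order: stage 217 -> software step -> stage 219 -> stage 221 */
            const char *engines[4] = {
                "HW stage 217", "SW step (CPU)", "HW stage 219", "HW stage 221"
            };

            for (int slot = 0; slot < NUM_TILES + 3; slot++) {
                printf("slot %d:", slot);
                for (int e = 0; e < 4; e++) {
                    int tile = slot - e;       /* tile held by engine e this slot */
                    if (tile >= 0 && tile < NUM_TILES)
                        printf("  [%s -> tile %d]", engines[e], tile);
                }
                printf("\n");
            }
            return 0;
        }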
  • FIG. 3 is a flow chart illustrating exemplary steps for processing image data via a hardware ISP with software processing inserted between hardware ISP processing stages, in accordance with an embodiment of the invention.
  • the hardware ISP 209 may receive a unit of image data from the image source 201 .
  • the hardware ISP 209 may process the unit of image data and may output processed image data to the RAM 203 .
  • the processor 205 may read the hardware ISP 209 output processed image data from the RAM 203 and may process it utilizing software.
  • Image data output from software processing may be stored in the RAM 203 .
  • the hardware ISP may retrieve the image data output from software processing in RAM 203 and may perform additional processing steps on it.
  • processed image data may be sent to a video display or stored in memory.
  • step 322 is the end step.
  • image data is processed via one or more stages by a hardware image sensor pipeline (ISP) 209 wherein one or more software processing steps may be inserted at any point within the hardware ISP 209 .
  • ISP hardware image sensor pipeline 209
  • Output from any stage of the hardware ISP 209 may be stored in RAM 203 .
  • Stored hardware ISP 209 output may be retrieved from RAM 203 and processed via one or more software processes.
  • Results from the one or more software processes may be stored in RAM 203 and communicated to any stage of the hardware ISP 209 for additional processing.
  • the hardware ISP 209 and one or more processors within the processing block 205 may simultaneously process portions of image data.
  • the ISP 209 and the one or more processors within the processing block 205 may be integrated within a chip.
  • Certain embodiments of the invention may comprise a machine-readable storage having stored thereon, a computer program having at least one code section for inserting software processing in a hardware image sensor pipeline, the at least one code section being executable by a machine for causing the machine to perform one or more of the steps described herein.
  • aspects of the invention may be realized in hardware, software, firmware or a combination thereof.
  • the invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
  • a typical combination of hardware, software and firmware may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • One embodiment of the present invention may be implemented as a board level product, as a single chip, application specific integrated circuit (ASIC), or with varying levels integrated on a single chip with other portions of the system as separate components.
  • the degree of integration of the system will primarily be determined by speed and cost considerations. Because of the sophisticated nature of modern processors, it is possible to utilize a commercially available processor, which may be implemented external to an ASIC implementation of the present system. Alternatively, if the processor is available as an ASIC core or logic block, then the commercially available processor may be implemented as part of an ASIC device with various functions implemented as firmware.
  • the present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
  • Computer program in the present context may mean, for example, any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • other meanings of computer program within the understanding of those skilled in the art are also contemplated by the present invention.

Abstract

Image data may be processed via one or more stages by a hardware image sensor pipeline (ISP) wherein one or more software processing steps may be inserted at any point within the hardware ISP. Output from any stage of the hardware ISP may be stored in memory. Stored hardware ISP output may be retrieved from memory and processed via one or more software processes. Results from the one or more software processes may be stored in memory and communicated to any stage of the hardware ISP for additional processing. In this regard, the hardware ISP and one or more processors may simultaneously process portions of image data. In addition, the hardware ISP and the one or more processors may be integrated within a chip.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE
  • This application makes reference to, claims priority to, and claims the benefit of U.S. Provisional Application Ser. No. 60/939,914 (Attorney Docket No. 18637US01), filed on May 24, 2007, which is hereby incorporated herein by reference in its entirety.
  • FIELD OF THE INVENTION
  • Certain embodiments of the invention relate to processing of images. More specifically, certain embodiments of the invention relate to a method and system for inserting software processing in a hardware image sensor pipeline.
  • BACKGROUND OF THE INVENTION
  • For many people, mobile or handheld electronic devices have become a part of everyday life. Mobile devices have evolved from a convenient method for voice communication to multi-functional resources that offer, for example, camera features, media playback, electronic gaming, internet browsing, email and office assistance.
  • Cellular phones with built-in cameras, or camera phones, have become prevalent in the mobile phone market, due to the low cost of CMOS image sensors and the ever increasing customer demand for more advanced cellular phones.
  • Historically, the resolution of camera phones has been limited in comparison to typical digital cameras because they must be integrated into the small package of a cellular handset, limiting both the image sensor and lens size. In addition, because of the stringent power requirements of cellular handsets, large image sensors with advanced processing have been difficult to incorporate. However, due to advancements in image sensors, multimedia processors, and lens technology, the resolution of camera phones has steadily improved, rivaling that of many digital cameras.
  • Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
  • BRIEF SUMMARY OF THE INVENTION
  • A method and system for inserting software processing in a hardware image sensor pipeline, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
  • Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
  • BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
  • FIG. 1A is a block diagram of an exemplary mobile multimedia system, in accordance with an embodiment of the invention.
  • FIG. 1B is a block diagram of an exemplary mobile multimedia processor, in accordance with an embodiment of the invention.
  • FIG. 2A is a block diagram of an exemplary mobile device configured to perform image processing via a hardware image sensor pipeline (ISP) and a software program executed by a processor, in accordance with an embodiment of the invention.
  • FIG. 2B is a block diagram of an exemplary portion of a hardware ISP configured for insertion of software processing between hardware ISP stages, in accordance with an embodiment of the invention.
  • FIG. 3 is a flow chart illustrating exemplary steps for processing image data via a hardware ISP with software processing steps inserted between hardware ISP processing stages, in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Certain aspects of the invention may be found in a method and system for inserting software processing between hardware image sensor pipeline (ISP) processing stages on a mobile imaging device. Data may be tapped or removed from any stage of the hardware image sensor pipeline and sent to a software process for processing. The resulting software processed data may then be reinserted at any stage of the hardware image sensor pipeline for processing. Data may be tapped from the hardware ISP, communicated to a software process, and reinserted back into any point of the hardware image sensor pipeline as many times as may be necessary for processing. In this regard, the hardware ISP may comprise a plurality of hardware processing stages wherein one or more hardware processing stages may be communicatively coupled with random access memory and/or one or more processors. The hardware ISP, one or more processors and/or memory may be integrated on a chip. A processor may direct transmission of mega pixel images from an image source to the hardware ISP. Image data may be received and processed by one or more hardware stages within the hardware ISP and a processed output may be stored in memory. Subsequently, a processor may retrieve the hardware ISP processed output from memory, perform one or more software processing steps and store the results in memory. The processor may signal any ISP hardware stage to fetch the software processing output from memory for additional processing within the hardware ISP if needed. The processed image data output from any hardware ISP stage or software processing step may be stored in memory for future use. Accordingly, the hardware ISP as well as one or more processors may simultaneously process different portions of image data. Utilizing software for image data processing enables modification of processing algorithms and/or techniques while utilizing the same hardware. In some embodiments of the invention, image data may be processed in a tiled format.
  • FIG. 1A is a block diagram of an exemplary mobile multimedia system, in accordance with an embodiment of the invention. Referring to FIG. 1A, there is shown a mobile multimedia system 105 that comprises a mobile multimedia device 105 a, a TV 101 h, a PC 101 k, an external camera 101 m, external memory 101 n, and external LCD display 101 p. The mobile multimedia device 105 a may be a cellular telephone or other handheld communication device. The mobile multimedia device 105 a may comprise a mobile multimedia processor (MMP) 101 a, an antenna 101 d, an audio block 101 s, a radio frequency (RF) block 101 e, a baseband processing block 101 f, an LCD display 101 b, a keypad 101 c, and a camera 101 g.
  • The MMP 101 a may comprise suitable circuitry, logic, and/or code and may be adapted to perform video and/or multimedia processing for the mobile multimedia device 105 a. The MMP 101 a may further comprise a plurality of processor cores, indicated in FIG. 1A by Core 1 and Core 2 as well as a hardware image sensor pipeline (ISP) 101 x. The MMP 101 a may also comprise integrated interfaces, which may be utilized to support one or more external devices coupled to the mobile multimedia device 105 a. For example, the MMP 101 a may support connections to a TV 101 h, a PC 101 k, an external camera 101 m, external memory 101 n, and an external LCD display 101 p.
  • In operation, the mobile multimedia device may receive signals via the antenna 101 d. Received signals may be processed by the RF block 101 e and the RF signals may be converted to baseband by the baseband processing block 101 f. Baseband signals may then be processed by the MMP 101 a. Audio and/or video data may be received from the external camera 101 m, and image data may be received via the integrated camera 101 g. The image data may be forwarded to the hardware ISP 101 x for a plurality of image data processing steps. During processing, the image data may be passed between the hardware ISP and one or more of the MMP 101 a processor cores for software processing. Image processing software may be modifiable providing flexibility in processing algorithms and/or techniques. In some embodiments of the invention, concurrent processing operations may occur within one or more MMP 101 a processing cores and within the hardware ISP 101 x. In this manner, software processing may not reduce the speed of processing via the hardware ISP. Image data may be processed in tile format, which may reduce the memory requirements for buffering of data during processing. During processing, the MMP 101 a may utilize the external memory 101 n for storing processed data. Processed audio data may be communicated to the audio block 101 s and processed video data may be communicated to the LCD 101 b or the external LCD 101 p, for example. The keypad 101 c may be utilized for communicating processing commands and/or other data, which may be required for audio or video data processing by the MMP 101 a.
  • FIG. 1B is a block diagram of an exemplary mobile multimedia processor, in accordance with an embodiment of the invention. Referring to FIG. 1B, the mobile multimedia processor 102 may comprise suitable logic, circuitry and/or code that may be adapted to perform video and/or multimedia processing for handheld multimedia products. For example, the mobile multimedia processor 102 may be designed and optimized for video record/playback, mobile TV and 3D mobile gaming, utilizing integrated peripherals and a video processing core. The mobile multimedia processor 102 may comprise video processing cores 103A and 103B, RAM 104, an analog block 106, a direct memory access (DMA) controller 163, an audio interface (I/F) 142, a memory stick I/F 144, SD card I/F 146, JTAG I/F 148, TV output I/F 150, USB I/F 152, a camera I/F 154, and a host I/F 129. The mobile multimedia processor 102 may further comprise a serial peripheral interface (SPI) 157, a universal asynchronous receiver/transmitter (UART) I/F 159, general purpose input/output (GPIO) pins 164, a display controller 162, an external memory I/F 158, and a second external memory I/F 160.
  • The video processing cores 103A and 103B may comprise suitable circuitry, logic, and/or code and may be adapted to perform video processing of data. The RAM 104 may comprise suitable logic, circuitry and/or code that may be adapted to store on-chip data such as video data. In an exemplary embodiment of the invention, the RAM 104 may be adapted to store 10 Mbits of on-chip data, for example. The size of the on-chip RAM 104 may vary depending on cost or other factors such as chip size.
  • The hardware image sensor pipeline (ISP) 103C may comprise suitable circuitry, logic and/or code that may enable the processing of image data. The hardware ISP 103C may perform a plurality of processing techniques comprising dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting, for example. The hardware ISP 103C may be communicatively coupled with the video processing cores 103A and/or 103B via the on-chip RAM 104. The processing of image data may be performed on variable sized tiles, reducing the memory requirements of the hardware ISP 103C processes. In accordance with an embodiment of the invention, the hardware image sensor pipeline 103C may be tapped at any point and resulting tapped data may be communicated to a software process for handling. The resulting software processed data may then be reinserted back into the hardware image sensor pipeline 103C at any stage or point for continued processing. Data may be tapped from the hardware image sensor pipeline 103C at any point, communicated to a software process for processing, and reinserted back into any point of the hardware ISP pipeline 103C as many times as may be necessary for processing.
  • The analog block 106 may comprise a switch mode power supply (SMPS) block and a phase locked loop (PLL) block. In addition, the analog block 106 may comprise an on-chip SMPS controller, which may be adapted to generate its core voltage. The core voltage may be software programmable according to, for example, speed demands on the mobile multimedia processor 102, allowing further control of power management.
  • In an exemplary embodiment of the invention, the normal core operating range may be about 0.8 V-1.2 V and may be reduced to about 0.6 V during hibernate mode. The analog block 106 may also comprise a plurality of PLL's that may be adapted to generate about 195 kHz-200 MHz clocks, for example, for external devices. Other voltages and clock speeds may be utilized depending on the type of application. The mobile multimedia processor 102 may comprise a plurality of power modes of operation, for example, run, sleep, hibernate and power down. In accordance with an embodiment of the invention, the mobile multimedia processor 102 may comprise a bypass mode that may allow a host to access memory mapped peripherals in power down mode, for example. In bypass mode, the mobile multimedia processor 102 may be adapted to directly control the display during normal operation while giving a host the ability to maintain the display during standby mode.
  • The audio block 108 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via an inter-IC sound (I2S), pulse code modulation (PCM) or audio codec (AC'97) interface 142 or other suitable interface, for example. In the case of an AC'97 and/or an I2S interface, a suitable audio controller, processor and/or circuitry may be adapted to provide AC'97 and/or I2S audio output respectively, in either master or slave mode. In the case of the PCM interface, a suitable audio controller, processor and/or circuitry may be adapted to allow input and output of telephony or high quality stereo audio. The PCM audio controller, processor and/or circuitry may comprise independent transmit and receive first in first out (FIFO) buffers and may use DMA to further reduce processor overhead. The audio block 108 may also comprise an audio in, audio out port and a speaker/microphone port (not illustrated in FIG. 1B).
  • The mobile multimedia device 100 may comprise at least one portable memory input/output (I/O) block. In this regard, the memorystick block 110 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a memorystick pro interface 144, for example. The SD card block 112 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a SD input/output (I/O) interface 146, for example. A multimedia card (MMC) may also be utilized to communicate with the mobile multimedia processor 102 via the SD input/output (I/O) interface 146, for example. The mobile multimedia device 100 may comprise other portable memory I/O blocks, such as an xD I/O card.
  • The debug block 114 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a joint test action group (JTAG) interface 148, for example. The debug block 114 may be adapted to access the address space of the mobile multimedia processor 102 and may be adapted to perform boundary scan via an emulation interface. Other test access ports (TAPs) may be utilized. The phase alternate line (PAL)/national television standards committee (NTSC) TV output I/F 150 may be utilized for communication with a TV, and the universal serial bus (USB) 1.1, or other variant thereof, slave port I/F 152 may be utilized for communications with a PC, for example. The cameras 120 and/or 122 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a multiformat raw CCIR 601 camera interface 154, for example. The camera I/F 154 may utilize windowing and sub-sampling functions, for example, to connect the mobile multimedia processor 102 to a mobile TV front end.
  • The mobile multimedia processor 102 may also comprise a plurality of serial interfaces, such as the USB I/F 152, a serial peripheral interface (SPI) 157, and a universal asynchronous receiver/transmitter (UART) I/F 159 for Bluetooth or IrDA. The SPI master interface 157 may comprise suitable circuitry, logic, and/or code and may be utilized to control image sensors. Two chip selects may be provided, for example, to work in a polled mode with interrupts or via a DMA controller 163. Furthermore, the mobile multimedia processor 102 may comprise a plurality of general purpose I/O (GPIO) pins 164, which may be utilized for user defined I/O or to connect to the internal peripherals. The display controller 162 may comprise suitable circuitry, logic, and/or code and may be adapted to support multiple displays with XGA resolution, for example, and to handle 8/9/16/18/21-bit video data.
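For illustration only, a C sketch of configuring the SPI master's two chip selects for image sensor control follows; the spi_master_config fields and the spi_configure call are assumed names, not the actual driver interface.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical configuration for the SPI master used to control image sensors. */
    typedef struct {
        uint32_t clock_hz;     /* serial clock rate */
        uint8_t  chip_select;  /* 0 or 1: two chip selects are provided */
        bool     use_dma;      /* false = polled mode with interrupts */
    } spi_master_config;

    extern void spi_configure(const spi_master_config *cfg); /* assumed driver call */

    void setup_image_sensor_links(void)
    {
        spi_master_config primary   = { 1000000u, 0, true  }; /* main sensor via DMA */
        spi_master_config secondary = { 1000000u, 1, false }; /* second sensor, polled */
        spi_configure(&primary);
        spi_configure(&secondary);
    }
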
  • The baseband flash memory 124 may be adapted to receive data from the mobile multimedia processor 102 via an 8/16 bit parallel host interface 129, for example. The host interface 129 may be adapted to provide two channels with independent address and data registers through which a host processor may read and/or write directly to the memory space of the mobile multimedia processor 102. The baseband processing block 126 may comprise suitable logic, circuitry and/or code that may be adapted to convert RF signals to baseband and communicate the baseband processed signals to the mobile multimedia processor 102 via the host interface 129, for example. The RF processing block 130 may comprise suitable logic, circuitry and/or code that may be adapted to receive signals via the antenna 132 and to communicate RF signals to the baseband processing block 126. The host interface 129 may comprise a dual software channel with a power efficient bypass mode.
  • The main LCD 134 may be adapted to receive data from the mobile multimedia processor 102 via a display controller 162 and/or from a second external memory interface 160, for example. The display controller 162 may comprise suitable logic, circuitry and/or code and may be adapted to drive an internal TV out function or be connected to a range of LCD's. The display controller 162 may be adapted to support a range of screen buffer formats and may utilize direct memory access (DMA) to access the buffer directly and increase video processing efficiency of the video processing core 103. Both NTSC and PAL raster formats may be generated by the display controller 162 for driving the TV out. Other formats, for example SECAM, may also be supported.
  • In one embodiment of the invention, the display controller 162 may be adapted to support a plurality of displays, such as an interlaced display, for example a TV, and/or a non-interlaced display, such as an LCD. The display controller 162 may also recognize and communicate a display type to the DMA controller 163. In this regard, the DMA controller 163 may fetch video data in an interlaced or non-interlaced fashion for communication to an interlaced or non-interlaced display coupled to the mobile multimedia processor 102 via the display controller 162.
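The display-aware fetch decision described above might be sketched in C as follows; the display_type query and the dma_fetch_* helpers are illustrative names rather than the controller's actual interface.

    typedef enum { DISPLAY_INTERLACED, DISPLAY_PROGRESSIVE } display_type;

    /* Assumed helpers standing in for the DMA controller's two fetch patterns. */
    extern void dma_fetch_fields(void *frame);       /* odd/even field order for TVs */
    extern void dma_fetch_progressive(void *frame);  /* line order for LCDs */

    /* The display controller reports the display type; video data is then
     * fetched in an interlaced or non-interlaced fashion to match. */
    void fetch_frame_for_display(display_type type, void *frame)
    {
        if (type == DISPLAY_INTERLACED)
            dma_fetch_fields(frame);
        else
            dma_fetch_progressive(frame);
    }
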
  • The substitute LCD 136 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a second external memory interface, for example. The mobile multimedia processor 102 may comprise an RGB external data bus. The mobile multimedia processor 102 may be adapted to scale image output with pixel level interpolation and a configurable refresh rate.
  • The optional flash memory 138 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via an external memory interface 158, for example. The optional SDRAM 140 may comprise suitable logic, circuitry and/or code that may be adapted to receive data from the mobile multimedia processor 102 via the external memory interface 158, for example. The external memory I/F 158 may be utilized by the mobile multimedia processor 102 to connect to external SDRAM 140, SRAM, Flash memory 138, and/or external peripherals, for example. Control and timing information for the SDRAM 140 and other asynchronous devices may be configurable by the mobile multimedia processor 102.
  • The mobile multimedia processor 102 may further comprise a secondary memory interface 160 to connect to memory-mapped LCD and external peripherals, for example. The secondary memory interface 160 may comprise suitable circuitry, logic, and/or code and may be utilized to connect the mobile multimedia processor 102 to slower devices without compromising the speed of external memory access. The secondary memory interface 160 may provide, for example, 16 data lines, 6 chip select/address lines, and programmable bus timing for setup, access and hold times. The mobile multimedia processor 102 may be adapted to provide support for NAND/NOR Flash including NAND boot and high speed direct memory access (DMA), for example.
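A small C sketch of the programmable setup/access/hold timing mentioned above; the smi_timing fields, the cycle counts and the smi_set_timing call are assumptions made only to illustrate the idea.

    #include <stdint.h>

    /* Hypothetical timing descriptor for the secondary memory interface. */
    typedef struct {
        uint8_t setup_cycles;   /* address/chip-select setup before the strobe */
        uint8_t access_cycles;  /* strobe width for the access itself */
        uint8_t hold_cycles;    /* data hold after the strobe is released */
    } smi_timing;

    extern void smi_set_timing(int chip_select, const smi_timing *t); /* assumed call */

    void configure_slow_lcd(void)
    {
        /* Generous timings for a slow memory-mapped LCD on one chip select,
         * so external memory accesses elsewhere are not slowed down. */
        smi_timing lcd_timing = { 4u, 12u, 2u };
        smi_set_timing(3, &lcd_timing);
    }
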
  • In operation, the mobile multimedia processor 102 may be integrated with a hardware image sensor pipeline (ISP) 103C. In this regard, a plurality of image processing steps may be performed on a unit of image data wherein a portion of the steps may be performed in various stages of hardware by the hardware ISP 103C and/or another portion of processing steps may be performed in software by one or more processing cores 103A and/or 103B for example. Image processing steps may comprise dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting for example. Output from one or more of the image processing steps may be stored for future or alternative use.
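One way to picture the split between hardware and software steps described above is a per-step table, sketched in C below; the particular partition shown (one software step inserted between two groups of hardware stages) and the type names are illustrative assumptions only.

    typedef enum { EXEC_HARDWARE_ISP, EXEC_SOFTWARE_CORE } executor;

    typedef struct {
        const char *name;   /* processing step from the list above */
        executor    where;  /* which side of the split performs it */
    } pipeline_step;

    /* An example partition: most steps stay in the hardware ISP, with one
     * software step inserted between demosaicing and sharpening. */
    static const pipeline_step example_plan[] = {
        { "lens shading correction", EXEC_HARDWARE_ISP },
        { "demosaicing",             EXEC_HARDWARE_ISP },
        { "custom denoising",        EXEC_SOFTWARE_CORE },  /* inserted software step */
        { "sharpening",              EXEC_HARDWARE_ISP },
        { "output formatting",       EXEC_HARDWARE_ISP },
    };
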
  • FIG. 2A is a block diagram of an exemplary mobile device configured to perform image processing via a hardware image sensor pipeline (ISP) and a software program executed by a processor, in accordance with an embodiment of the invention. Referring to FIG. 2A, there is shown an image processing system 200 comprising an image source 201, a random access memory (RAM) 203, a processing block 205, a display 207, a hardware image sensor pipeline (ISP) 209 and a non-volatile memory (NVM) 211.
  • The image source 201 may comprise suitable circuitry, logic and/or code to detect a visual image and convert light to an electrical signal representing the image. In this regard, the image source 201 may comprise, for example, a multi-megapixel charge-coupled device (CCD) array, a complementary metal oxide semiconductor (CMOS) array or another related technology. The image source 201 may be communicatively coupled with the RAM 203 and the processing block 205.
  • The processing block 205 may comprise suitable circuitry, logic and/or code that may be enabled to process image data via a software program and to manage and/or regulate image processing tasks among a plurality of functional units comprising the image source 201, hardware ISP 209, RAM 203, display 207 and NVM 211. The processing block 205 may be similar or substantially the same as the mobile multimedia processor (MMP) 101a described with respect to FIG. 1A and/or the MMP 102 described with respect to FIG. 1B. The processing block 205 may exchange image data with the image source 201, the hardware ISP 209, the RAM 203 and/or the NVM 211. The processing block 205 may be enabled to perform software image processing tasks or steps comprising dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting for example. In this regard, the processing block 205 may be enabled to receive image data output from any processing stage in the hardware ISP 209 and to perform software image processing steps on the received image data. The output from software image processing steps may be sent to the RAM 203. The processor 205 may issue a command to the hardware ISP 209 to fetch the software processed image data in the RAM 203 and to further process the fetched image data. Software image processing steps may be inserted before or after any stage of image processing within the hardware ISP. Moreover, image data output from any software or hardware processing step or stage may be stored in the NVM 211 for future use. The processing block 205 may direct processed image data to the display 207 and/or the NVM 211. Image data may be processed in variable size tiles.
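The command flow described above, in which the processor 205 directs the hardware ISP 209 to fetch software-processed data back out of the RAM 203, could look roughly like the C sketch below; the ram_buffer type and the two isp_* calls are hypothetical stand-ins for that command interface.

    #include <stddef.h>

    /* Hypothetical handle for a buffer held in the shared RAM. */
    typedef struct {
        void  *addr;
        size_t bytes;
    } ram_buffer;

    extern ram_buffer isp_stage_output(int stage_id);                 /* read a stage result */
    extern void       isp_fetch_and_continue(int next_stage,
                                              const ram_buffer *in);  /* assumed command */

    /* Run a software step on a stage's output held in RAM, then command the
     * hardware ISP to fetch the result and continue from the next stage. */
    void insert_software_step(int tapped_stage, int next_stage,
                              void (*sw_step)(ram_buffer *))
    {
        ram_buffer buf = isp_stage_output(tapped_stage);
        sw_step(&buf);
        isp_fetch_and_continue(next_stage, &buf);
    }
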
  • The display 207 may comprise suitable circuitry, logic and/or code for displaying an image received from the system 200 and/or a storage device. The display 207 may receive control information and/or commands from the processor 205 and may be communicatively coupled with the processor 205, hardware ISP 209, RAM 203 and/or the NVM 211.
  • The RAM 203 may comprise suitable circuitry, logic and/or code for storing data. The RAM 203 may be similar or substantially the same as the RAM 104 described in FIG. 1B. The RAM 203 may be utilized to store image data after various steps or stages of processing, for example, during an exchange of image data between the hardware ISP 209 and processor 205. In addition, the RAM 203 may store configuration data related to image processing. For example, characteristics of the image source 201 may be measured at the time of manufacture, and the distortion of the optics across a resulting image may be stored in the RAM 203.
  • The hardware ISP 209 may comprise suitable circuitry, logic and/or code that may enable processing of image data received from the image source 201. The hardware ISP 209 may comprise circuitry allocated for image processing tasks such as steps or stages comprising dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting for example. Processing steps or stages may be performed by hardware in the hardware ISP 209 and/or by software stored in the RAM 203 and executed by the processor 205. In this regard, image processing performed via software processes may be inserted before or after one or more of the hardware ISP image processing stages. The processor 205 may issue a command to the hardware ISP 209 to fetch the software processed image data in RAM 203 and to further process the fetched image data.
  • The NVM 211 may comprise suitable circuitry, logic and/or code for storing data. In various embodiments of the invention, the NVM 211 may be similar to or substantially the same as the memorystick block 110, the baseband flash memory 124, the optional flash memory 138 and/or the SDRAM 140 described in FIG. 1B for example. The NVM 211 may be communicatively coupled to the RAM 203, processing block 205 and/or the hardware ISP 209.
  • In operation, the processor 205 may receive image data from the image source 201. The processor 205 may provide clock and control signals for synchronizing transfer of image data from the image source 201. Image data may be in tiled format and processing may begin when a tile is received. The size of tiles may be determined by distortion in the image data that may be due to optical effects. Smaller sized tiles may be utilized in areas of the image where there may be higher distortion, such as around the edges, for example. The tile sizes may be determined by the distortion characteristics stored in the RAM 203. The image data may be passed to the hardware ISP for various processing steps, for example, dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and output formatting.
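The distortion-driven tile sizing described above might look like the C sketch below; the thresholds and tile edge lengths are illustrative values, and the distortion figure is assumed to come from the characteristics stored in the RAM 203.

    /* Choose a tile edge length from a per-region distortion estimate:
     * higher distortion (typically near the image edges) means smaller tiles,
     * which keeps per-tile memory low while preserving correction accuracy.
     * Thresholds and sizes are illustrative only. */
    int choose_tile_size(float distortion)
    {
        if (distortion > 0.10f)
            return 16;   /* heavily distorted region, e.g. image corners */
        if (distortion > 0.05f)
            return 32;
        return 64;       /* low distortion near the optical centre */
    }
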
  • The output of one or more hardware ISP image processing steps may be stored in the RAM 203. The processor 205 may fetch the image data from the RAM 203 and may perform one or more image processing steps via software. The output from the software processing steps may be returned to the RAM 203. The processor 205 may direct a subsequent hardware processing step within the hardware ISP to fetch the software processed image data from the RAM 203 and to continue image processing steps within the hardware ISP 209. Accordingly, the hardware ISP 209 may be tapped at any point and the resulting tapped data may be communicated to a software process for handling. The resulting software processed data may then be reinserted into the hardware image sensor pipeline 209 at any stage or point for continued processing. Data may be tapped from the hardware image sensor pipeline 209 at any point, communicated to a software process for processing, and reinserted into any point of the hardware ISP pipeline 209 as many times as may be necessary.
  • The data may be stored in the RAM 203 prior to being communicated to the display 207. The processor 205 may communicate address data to the RAM 203 to determine where to read or write processed image data in the RAM 203. Output from various intermediate steps or a final step of image processing may be stored for future use in the NVM 211.
  • FIG. 2B is a block diagram of an exemplary portion of a hardware ISP configured for insertion of software processing between hardware ISP stages, in accordance with an embodiment of the invention. Referring to FIG. 2B, there are shown three hardware ISP processing stages 217, 219 and 221, a random access memory (RAM) 203, and a processor 205. The processor 205 and RAM 203 may be similar or substantially the same as the processor 205 and RAM 203 described in FIG. 2A.
  • The hardware ISP processing stages 217, 219 and 221 may each perform an image processing task that may comprise, for example, dark pixel compensation, lens shading correction, white balance and gain control, defective pixel correction, resampling, crosstalk correction, bayer denoising, demosaicing, gamma correction, YCbCr denoising, false color suppression, sharpening, distortion correction, high resolution resize, color processing, color conversion, low resolution resize and/or output formatting for example. The hardware ISP stages 217, 219 and 221 may each be communicatively coupled with a previous hardware ISP processing stage and/or a subsequent hardware ISP processing stage as well as the RAM 203 and the processor 205.
  • In operation, the hardware ISP processing stages 217, 219 and 221 may represent a portion of the processing stages comprised within the hardware ISP 209 described in FIG. 2A. Accordingly, the hardware ISP processing stages 217, 219 and 221 may comprise suitable circuitry, logic and/or code to enable processing of image data received from the image source 201, to receive control signals from the processor 205 and to send and receive image data to and from the RAM 203. Image processing software may be stored in the RAM 203 and executed by the processor 205. In this regard, a unit of image data may be processed sequentially via the hardware ISP processing stages 217, 219 and 221 and/or may be passed to the processor 205 for software image processing before and/or after one or more of the hardware ISP processing stages 217, 219 and 221. In some embodiments of the invention, the processor 205 may issue commands to the hardware ISP 209 to process a unit of image data within stage 217 and to send output to the RAM 203. The processor 205 may retrieve the hardware ISP stage 217 output from the RAM 203, process it in software, and send the software processing output back to the RAM 203. The processor 205 may issue commands to the ISP stage 219 to retrieve and process the software processing output from the RAM 203 and to send its output to hardware ISP processing stage 221 for additional processing. The processor 205 may issue commands to the hardware ISP stage 221 to retrieve and process output from the hardware ISP stage 219. Moreover, multiple units of image data may be processed simultaneously within the hardware ISP stages 217, 219, 221 and one or more processing cores in the processor 205.
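A C sketch of the example sequence involving stages 217, 219 and 221 follows; the isp_stage_run and software_process helpers, and the idea of passing results by buffer reference, are illustrative assumptions rather than the disclosed command set.

    /* Hypothetical orchestration of the example in the text: stage 217 runs in
     * hardware and leaves its output in RAM, the processor software-processes
     * that output, stage 219 fetches the software result, and stage 221 then
     * consumes stage 219's output directly. */
    typedef struct { void *addr; } buffer_ref;

    extern buffer_ref isp_stage_run(int stage_id, buffer_ref input);  /* assumed */
    extern buffer_ref software_process(buffer_ref input);             /* assumed */

    buffer_ref process_unit_of_image_data(buffer_ref unit)
    {
        buffer_ref a = isp_stage_run(217, unit);   /* hardware stage 217 -> RAM 203 */
        buffer_ref b = software_process(a);        /* processor 205 works on the RAM copy */
        buffer_ref c = isp_stage_run(219, b);      /* stage 219 fetches the software output */
        return isp_stage_run(221, c);              /* stage 221 continues in hardware */
    }
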
  • FIG. 3 is a flow chart illustrating exemplary steps for processing image data via a hardware ISP with software processing inserted between hardware ISP processing stages, in accordance with an embodiment of the invention. Referring to FIG. 3, after start step 310, in step 312 the hardware ISP 209 may receive a unit of image data from the image source 201. In step 314, the hardware ISP 209 may process the unit of image data and may output processed image data to the RAM 203. In step 316, the processor 205 may read the processed image data output by the hardware ISP 209 from the RAM 203 and may process it utilizing software. Image data output from software processing may be stored in the RAM 203. In step 318, the hardware ISP 209 may retrieve the software processed image data from the RAM 203 and may perform additional processing steps on it. In step 320, processed image data may be sent to a video display or stored in memory. Step 322 is the end step.
  • In an embodiment of the invention, image data is processed via one or more stages of a hardware image sensor pipeline (ISP) 209, wherein one or more software processing steps may be inserted at any point within the hardware ISP 209. Output from any stage of the hardware ISP 209 may be stored in the RAM 203. Stored hardware ISP 209 output may be retrieved from the RAM 203 and processed via one or more software processes. Results from the one or more software processes may be stored in the RAM 203 and communicated to any stage of the hardware ISP 209 for additional processing. In this regard, the hardware ISP 209 and one or more processors within the processing block 205 may simultaneously process portions of image data. In addition, the hardware ISP 209 and the one or more processors within the processing block 205 may be integrated within a chip.
  • Certain embodiments of the invention may comprise a machine-readable storage having stored thereon, a computer program having at least one code section for inserting software processing in a hardware image sensor pipeline, the at least one code section being executable by a machine for causing the machine to perform one or more of the steps described herein.
  • Accordingly, aspects of the invention may be realized in hardware, software, firmware or a combination thereof. The invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware, software and firmware may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • One embodiment of the present invention may be implemented as a board level product, as a single chip, application specific integrated circuit (ASIC), or with varying levels integrated on a single chip with other portions of the system as separate components. The degree of integration of the system will primarily be determined by speed and cost considerations. Because of the sophisticated nature of modern processors, it is possible to utilize a commercially available processor, which may be implemented external to an ASIC implementation of the present system. Alternatively, if the processor is available as an ASIC core or logic block, then the commercially available processor may be implemented as part of an ASIC device with various functions implemented as firmware.
  • The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context may mean, for example, any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form. However, other meanings of computer program within the understanding of those skilled in the art are also contemplated by the present invention.
  • While the invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiments disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

Claims (24)

1. A method for processing images, the method comprising:
processing image data via one or more steps or stages handled by a hardware image sensor pipeline (ISP) wherein said hardware ISP enables insertion of one or more software processing steps or stages at any point within said hardware ISP.
2. The method according to claim 1, comprising storing output from any portion of said hardware ISP.
3. The method according to claim 2, comprising retrieving said stored output for handling via one or more software processes.
4. The method according to claim 3, comprising processing said retrieved stored output for processing via said one or more software processes.
5. The method according to claim 4, comprising storing results from said processing via said one or more software processes.
6. The method according to claim 5, comprising communicating said stored results from said processing via said one or more software processes to any portion of said hardware ISP for processing.
7. The method according to claim 1, comprising simultaneously processing a portion of said image data via said hardware ISP and a portion of said image data via one or more processors handling said software processes.
8. The method according to claim 1, wherein said hardware ISP and one or more processors enabled to handle said one or more software processing steps are integrated within a chip.
9. A system for processing images, the system comprising:
one or more circuits comprising a hardware image sensor pipeline (ISP), said one or more circuits enable processing of image data via one or more steps or stages handled by said hardware image sensor pipeline (ISP) and wherein said one or more circuits enable the insertion of one or more software processing steps at any point in said hardware ISP.
10. The system according to claim 9, wherein said one or more circuits enables storage of output from any portion of said hardware ISP.
11. The system according to claim 10, wherein said one or more circuits enables retrieval of said stored output for handling via one or more software processes.
12. The system according to claim 11, wherein said one or more circuits enables processing of said retrieved stored output for processing via said one or more software processes.
13. The system according to claim 12, wherein said one or more circuits enables storage of results from said processing via said one or more software processes.
14. The system according to claim 13, wherein said one or more circuits enables communication of said stored results from said processing via said one or more software processes to any portion of said hardware ISP for processing.
15. The system according to claim 9, wherein said one or more circuits enables simultaneously processing of a portion of said image data via said hardware ISP and a portion of said image data via one or more processors handling said software processes.
16. The system according to claim 9, wherein said hardware ISP and one or more processors enabled to handle said one or more software processing steps are integrated within a chip.
17. A machine-readable storage having stored thereon, a computer program having at least one code section for processing images, the at least one code section being executable by a machine for causing the machine to perform steps comprising:
processing image data via one or more steps or stages handled by a hardware image sensor pipeline (ISP) wherein said hardware ISP enables the insertion of one or more software processing steps at any point in said hardware ISP.
18. The machine-readable storage according to claim 17, wherein said at least one code section comprises code that enables storing of output from any portion of said hardware ISP.
19. The machine-readable storage according to claim 18, wherein said at least one code section comprises code that enables retrieving of said stored output for handling via one or more software processes.
20. The machine-readable storage according to claim 19, wherein said at least one code section comprises code that enables processing of said retrieved stored output for processing via said one or more software processes.
21. The machine-readable storage according to claim 20, wherein said at least one code section comprises code that enables storing of results from said processing via said one or more software processes.
22. The machine-readable storage according to claim 21, wherein said at least one code section comprises code that enables communicating of said stored results from said processing via said one or more software processes to any portion of said hardware ISP for processing.
23. The machine-readable storage according to claim 17, wherein said at least one code section comprises code that enables simultaneously processing of a portion of said image data via said hardware ISP and a portion of said image data via one or more processors handling said software processes.
24. The machine-readable storage according to claim 17, wherein said hardware ISP and one or more processors enabled to handle said one or more software processing steps are integrated within a chip.
US11/940,788 2007-05-24 2007-11-15 Method and system for inserting software processing in a hardware image sensor pipeline Active 2032-03-22 US9058668B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/940,788 US9058668B2 (en) 2007-05-24 2007-11-15 Method and system for inserting software processing in a hardware image sensor pipeline

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US93991407P 2007-05-24 2007-05-24
US11/940,788 US9058668B2 (en) 2007-05-24 2007-11-15 Method and system for inserting software processing in a hardware image sensor pipeline

Publications (3)

Publication Number Publication Date
US20080292132A1 true US20080292132A1 (en) 2008-11-27
US20090232347A9 US20090232347A9 (en) 2009-09-17
US9058668B2 US9058668B2 (en) 2015-06-16

Family

ID=40072421

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/940,788 Active 2032-03-22 US9058668B2 (en) 2007-05-24 2007-11-15 Method and system for inserting software processing in a hardware image sensor pipeline

Country Status (1)

Country Link
US (1) US9058668B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9100640B2 (en) * 2010-08-27 2015-08-04 Broadcom Corporation Method and system for utilizing image sensor pipeline (ISP) for enhancing color of the 3D image utilizing z-depth information
KR102325341B1 (en) * 2015-07-17 2021-11-11 삼성전자주식회사 Image display apparatus and method for the same

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5686960A (en) * 1992-01-14 1997-11-11 Michael Sussman Image input device having optical deflection elements for capturing multiple sub-images
US5886742A (en) * 1995-01-12 1999-03-23 Sharp Kabushiki Kaisha Video coding device and video decoding device with a motion compensated interframe prediction
US6275532B1 (en) * 1995-03-18 2001-08-14 Sharp Kabushiki Kaisha Video coding device and video decoding device with a motion compensated interframe prediction
US5973844A (en) * 1996-01-26 1999-10-26 Proxemics Lenslet array systems and methods
US6163621A (en) * 1997-02-27 2000-12-19 Samsung Electronics Co., Ltd Histogram equalization method and device in contrast enhancement apparatus for image processing system
US6285398B1 (en) * 1997-11-17 2001-09-04 Sony Corporation Charge-coupled device video camera with raw data format output and software implemented camera signal processing
US6933973B1 (en) * 1999-03-01 2005-08-23 Kawasaki Microelectronics, Inc. CMOS image sensor having block scanning capability
US20060090002A1 (en) * 2000-03-01 2006-04-27 Real Communications, Inc. (Subsidiary Of Realtek Semiconductor Corp.) Programmable task scheduler for use with multiport xDSL processing system
US20010021271A1 (en) * 2000-03-06 2001-09-13 Hideyasu Ishibashi Method and apparatus for compressing multispectral images
US20030002746A1 (en) * 2000-09-28 2003-01-02 Yosuke Kusaka Image creating device and image creating method
US7027665B1 (en) * 2000-09-29 2006-04-11 Microsoft Corporation Method and apparatus for reducing image acquisition time in a digital imaging device
US20030077064A1 (en) * 2001-09-27 2003-04-24 Fuji Photo Film Co., Ltd. Image data sending method, digital camera, image data storing method, image data storing apparatus, and programs therefor
US20040218059A1 (en) * 2001-12-21 2004-11-04 Pere Obrador Concurrent dual pipeline for acquisition, processing and transmission of digital video and high resolution digital still photographs
US20050078755A1 (en) * 2003-06-10 2005-04-14 Woods John W. Overlapped block motion compensation for variable size blocks in the context of MCTF scalable video coders
US20050025372A1 (en) * 2003-07-28 2005-02-03 Samsung Electronics Co., Ltd. Discrete wavelet transform unit and method for adaptively encoding still image based on energy of each block
US20070065043A1 (en) * 2003-10-29 2007-03-22 Hisashi Sano Image processing method, image processing device and program
US20050140787A1 (en) * 2003-11-21 2005-06-30 Michael Kaplinsky High resolution network video camera with massively parallel implementation of image processing, compression and network server
US7359563B1 (en) * 2004-04-05 2008-04-15 Louisiana Tech University Research Foundation Method to stabilize a moving image
US20100118935A1 (en) * 2004-04-23 2010-05-13 Sumitomo Electric Industries, Ltd. Coding method for motion-image data, decoding method, terminal equipment executing these, and two-way interactive system
US20060188014A1 (en) * 2005-02-23 2006-08-24 Civanlar M R Video coding and adaptation by semantics-driven resolution control for transport and storage
US20060274170A1 (en) * 2005-06-07 2006-12-07 Olympus Corporation Image pickup device
US20070133870A1 (en) * 2005-12-14 2007-06-14 Micron Technology, Inc. Method, apparatus, and system for improved color statistic pruning for automatic color balance

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110261061A1 (en) * 2010-04-22 2011-10-27 Adrian Lees Method and system for processing image data on a per tile basis in an image sensor pipeline
US8798386B2 (en) * 2010-04-22 2014-08-05 Broadcom Corporation Method and system for processing image data on a per tile basis in an image sensor pipeline

Also Published As

Publication number Publication date
US20090232347A9 (en) 2009-09-17
US9058668B2 (en) 2015-06-16

Similar Documents

Publication Publication Date Title
US20080292219A1 (en) Method And System For An Image Sensor Pipeline On A Mobile Imaging Device
US9232125B2 (en) Method of eliminating a shutter-lag, camera module, and mobile device having the same
US9538087B2 (en) Image processing device with multiple image signal processors and image processing method
JP4993856B2 (en) Image conversion device, direct memory access device for image conversion, and camera interface supporting image conversion
US20080292216A1 (en) Method and system for processing images using variable size tiles
US20100087147A1 (en) Method and System for Input/Output Pads in a Mobile Multimedia Processor
US20060181547A1 (en) Method and system for image editing in a mobile multimedia processor
US8798386B2 (en) Method and system for processing image data on a per tile basis in an image sensor pipeline
WO2013054486A1 (en) Electronic equipment and program
EP1691371A2 (en) Image editor used for editing images in a mobile communication device
US20100023654A1 (en) Method and system for input/output pads in a mobile multimedia processor
US8180398B2 (en) Multimedia data communication method and system
CN101753820A (en) Information processing apparatus, buffer control method, and computer program
EP1691368A2 (en) An image editor with plug-in capability for editing images in a mobile communication device
US20080293449A1 (en) Method and system for partitioning a device into domains to optimize power consumption
US9058668B2 (en) Method and system for inserting software processing in a hardware image sensor pipeline
US7793007B2 (en) Method and system for deglitching in a mobile multimedia processor
US9135036B2 (en) Method and system for reducing communication during video processing utilizing merge buffering
US8363158B2 (en) Imaging device employing a buffer unit having a terminating resistor
US8804009B2 (en) Multimedia information appliance
JP4266477B2 (en) Information processing apparatus and control method thereof
JP2017199392A (en) Electronic apparatus and program
KR20060054716A (en) Camera interface apparatus for mobile communication terminal
CN115883987A (en) Data processing method, sub-chip and electronic equipment
CN115878527A (en) Data transmission method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PLOWMAN, DAVID A.;KEALL, GARY;WALKER, CLIVE;REEL/FRAME:020177/0803

Effective date: 20071114

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE

Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047229/0408

Effective date: 20180509

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE PREVIOUSLY RECORDED ON REEL 047229 FRAME 0408. ASSIGNOR(S) HEREBY CONFIRMS THE THE EFFECTIVE DATE IS 09/05/2018;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047349/0001

Effective date: 20180905

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE PATENT NUMBER 9,385,856 TO 9,385,756 PREVIOUSLY RECORDED AT REEL: 47349 FRAME: 001. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:051144/0648

Effective date: 20180905

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8