WO1999063751A1 - Low-power parallel processor and imager integrated circuit - Google Patents

Info

Publication number
WO1999063751A1
Authority
WO
WIPO (PCT)
Prior art keywords
processors
processor
memory
image
integrated circuit
Prior art date
Application number
PCT/US1999/012172
Other languages
French (fr)
Inventor
Jeff Y. F. Hsieh
Teresa H. Y. Meng
Original Assignee
The Board Of Trustees Of The Leland Stanford Junior University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Board Of Trustees Of The Leland Stanford Junior University filed Critical The Board Of Trustees Of The Leland Stanford Junior University
Priority to AU43266/99A priority Critical patent/AU4326699A/en
Publication of WO1999063751A1 publication Critical patent/WO1999063751A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43Hardware specially adapted for motion estimation or compensation
    • H04N19/433Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof

Definitions

  • This invention relates to a low-power, single chip, parallel processor and imager system, and, more specifically, in one embodiment, a low power, large scale MPEG2 encoder and imager system for a single-chip digital CMOS video camera is disclosed.
  • Processing of digital data obtained from an image sensor requires complex calculations.
  • Video processing engines are designed to optimize processing of video data stored in a secondary storage medium, e.g., random access memory, hard drive, or DVD. This results in a need for an external chipset whose primary task is to provide the necessary bandwidth for data transfer between the video engine and the secondary storage medium. The requirement of such an external data transfer eliminates the possibility for a low-power, single-chip solution.
  • Another existing solution that uses less power is a single integrated circuit chip for both the image sensor and digital processor.
  • An example of such a single integrated circuit chip is the VLSI Vision Limited VV6405 NTSC Colour CMOS Image Sensor.
  • the digital processor disclosed operates upon consecutive rows of pixel data sequentially to perform simple pixel-level computations. While this solution uses less power than other alternatives, it does not have the ability to perform operations at the desired rates.
  • the present invention implements a parallel processing architecture in which a plurality of parallel processors concurrently operate upon a different block, preferably a column, of image data.
  • this single chip solution has characteristics that provide the throughput necessary to perform computationally complex operations, such as color correction, RGB to YUV conversion and DCT operations in either still or video applications, and motion estimation in digital video processing applications.
  • a parallel processor and imager system implements, in a preferred embodiment, a single-chip digital CMOS video camera with real-time MPEG2 encoding capability. Computationally intensive operations of the video compression algorithms can be performed on-chip, at a location right beside the output of the imager, resulting in low latency and low power consumption. In all embodiments, this architecture takes advantage of parallelism in image processing algorithms, which is exploited to obtain efficient processing.

BRIEF DESCRIPTION OF THE DRAWINGS
  • Fig. 1 illustrates a single monolithic integrated circuit containing an image sensor array and parallel processors according to the present invention
  • Figs 2A-C illustrate alternative manners in which instructions can be fed into each of the plurality of parallel processors according to the present invention
  • Fig. 3 illustrates a single integrated circuit containing an image sensor array, parallel processors, and embedded memory capable of encoding sequential images according to the present invention
  • Fig. 4 illustrates another layout of a single integrated circuit for the embodiment described in Fig. 3;
  • Fig. 5 illustrates a more detailed diagram of one of the parallel processors for the embodiment described in Fig. 3 according to the present invention
  • Fig. 6 illustrates a more detailed diagram of one embodiment of an arithmetic logic unit for the embodiment described in Fig. 5 according to the present invention
  • Figs 7A and 7B illustrate alternative addressing schemes that can be used with the parallel processors operating upon columns of pixel data according to the present invention
  • Fig. 8 provides a table of estimated cycle count per processor per frame needed for each encoding/decoding step.
  • This invention, in its most basic form, has the capacity to sense a single image, generate pixel data as a result of the sensed image, and concurrently process that image using a plurality of parallel processors, each of which operates simultaneously on a portion of the pixel data associated with the image.
  • the portion of the pixel image that each processor operates upon is a column of pixel data, although the pixel data that is concurrently operated upon can be divided in various other ways, such as into blocks.
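As a concrete illustration, the column-wise partitioning described above can be sketched in Python. The function name and the list-of-rows frame representation are illustrative assumptions, not the patent's implementation; the 640x480 frame and 40 processors match the preferred embodiment described later.

```python
def column_blocks(frame, num_processors=40):
    """Split a frame (a list of pixel rows) into one column block
    per parallel processor. 640 columns / 40 processors = 16 each."""
    width = len(frame[0])
    cols_per_proc = width // num_processors
    blocks = []
    for p in range(num_processors):
        lo = p * cols_per_proc
        hi = lo + cols_per_proc
        blocks.append([row[lo:hi] for row in frame])
    return blocks

# Each processor domain receives 16 adjacent columns of the sensed image.
frame = [[x for x in range(640)] for _ in range(480)]
blocks = column_blocks(frame)
```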
  • digital processor and imager system 10 includes a sensor array 12 that detects an image and generates detected signals corresponding thereto.
  • This sensor array 12 is preferably a CMOS photo sensor array, but could also be other types of arrays, such as charge coupled devices.
  • Also included in the system 10 are a plurality of parallel processors 14, each of which inputs certain predetermined ones of the detected signals by being coupled to and in close proximity with the sensor array 12, and also being coupled to an output buffer 16.
  • the image data, such as from a single image that is sensed in a digital camera, is detected by the sensor array 12, and the detected signals, also called pixel data, are transmitted columnwise into a plurality of parallel processors 14, forty in the embodiment illustrated.
  • Each of the forty processors operates upon the input detected signals to generate encoded signals, which are then output to the output buffer 16, the encoded signals being encoded based upon the algorithm that each of the processors is implementing.
  • the number of parallel processors, the size of each of the parallel processors, the search space within a processor domain, and the size of certain memories, for instance, are based upon an array having a predetermined resolution of 640x480 sensing elements. It should be noted, however, that, for each of the embodiments described, the specific numbers of processors, implementation of each processor, search space, memory requirements, and other specific implementation aspects as recited are not intended to be limiting, but instead to completely describe a presently preferred embodiment.
  • the relationship of specific implementation aspects is not arbitrary, but based upon considerations in which computationally intensive operations can be simultaneously repeated by multiple processors in order to obtain the fullest throughput.
  • This throughput is dependent in part upon the algorithms that need to be implemented, for example the fact that motion estimation requires knowledge of neighboring pixel data, whereas RGB to YUV conversion and DCT operations do not require such knowledge.
  • the size of the sensing array will assist in determining the proper search space: the larger the sensor array, the larger the search space can be without adverse effects on throughput or increased power usage.
  • the larger the number of pixels that each processor operates upon, the greater the resulting clock rate and the more complex the associated circuitry become. Accordingly, specific implementation aspects are dependent upon factors such as these.
  • Figs 2A-2C illustrate the manner in which the parallel processors 14 can be loaded with instructions that will then cause them to perform the intended operation.
  • each processor 14 can sequentially receive the same instruction
  • Figs 2B and 2C illustrate more complex instruction loading sequences.
  • These instruction loading sequences are maintained by a host processor that provides overall control of the parallel processors and uses the equivalent of the interprocessor communication unit to communicate with each of the parallel processors, in a manner that is known with respect to parallel processor implementations generally.
  • the host processor can be implemented on the same monolithic integrated circuit chip, or die, or off-chip.
  • the parallel processor and imager system 20 exploits the parallelism inherent in video processing algorithms, the small dynamic range used by existing video compression algorithms, the digital CMOS sensor technology, and the embedded DRAM technology to realize a lower power, single-chip solution for low-cost video capturing.
  • the invention enables capture and processing of video data on the same chip.
  • the acquired video data is stored directly in the on-chip embedded DRAM, also termed pixel memory 30, which serves as a high-bandwidth video frame buffer.
  • the bandwidth of embedded DRAM can be as high as 8 Gbyte/s, making it possible to support several (40 in this preferred embodiment described herein) parallel video processors.
  • each processor is limited to 16 bits. This description is not intended to be limiting, as many alternative configurations are possible, as will be apparent.
  • these parallel processors are designed to run at relatively low clock rates described further hereinafter, thereby allowing total computational throughput as high as 1.6 BOPS while consuming less than 40 mW of power.
  • Fig 3 also illustrates one layout of the CMOS photo sensors 22, the embedded DRAM 30, and the parallel DSP processors 40-1 to 40-40 on a single integrated circuit chip 20.
  • the CMOS photo sensor array 22 is disposed on a top layer of the integrated circuit chip in a location where it will be able to receive incident light, and includes, for instance, photo diodes, A/D converters, and A/D offset correction circuitry.
  • the embedded DRAM or pixel memory 30 resides under the photo diodes and provides storage for the current and two past frames of captured image, as well as intermediate variables such as motion vectors (MV's) and multi-resolution pixel values.
  • the parallel video processors 40 are located next to the imaging circuitry and each operates independently on a group of 16 columns of pixels.
  • processor system 20 has the advantage of supporting high computational throughput at low clock rates when executing highly repetitive operations. It is less efficient when operating on more complex algorithms that require access to data outside of the processor domain. The size of the processor domain is, therefore, an important design parameter, which requires careful examination of the types of video processing algorithms, as described hereinafter.
  • Processor system 20 is described herein with reference to its structure, and then described with reference to how this structure can implement three algorithms commonly used in video coding standards: RGB to YUV conversion, DCT, and motion estimation. RGB to YUV conversion is performed on the pixel level and requires no additional information from neighboring pixels. It is computationally intensive, requiring multiple multiplies and adds per pixel, but can be easily achieved with a parallel architecture.
  • DCT is performed on a block basis. It operates on a row or a column of pixels in each pass and requires bit reverse or base offset addressing to simplify the instruction set.
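The row-then-column pass structure of the block DCT can be illustrated with a reference separable 2-D DCT-II sketch. This is a direct floating-point form for clarity, an assumption for illustration only; the hardware described uses a fast fixed-point implementation with bit reverse or base offset addressing rather than this naive formulation.

```python
import math

def dct_1d(v):
    """Orthonormal 1-D DCT-II of a vector (reference form)."""
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        c = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(c * s)
    return out

def dct_2d(block):
    """Separable 2-D DCT: one 1-D pass over each row, then each column."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

For a constant 8x8 block, all energy lands in the DC coefficient, which is a quick sanity check on the pass structure.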
  • Implementing DCT with a pixel-level processor domain would be unnecessarily complicated. Similar to DCT, motion estimation works best with a block-level processor domain.
  • motion estimation requires access to adjacent blocks regardless of the size of the processor domain.
  • the extent of the locality of interprocessor communication depends on the search space. In this processor design, a search space between processor domains is assumed. No assumption is made with the size of the search space within a processor domain.
  • some motion estimation procedures do not require any multiplication other than simple shifts, as in the example below.
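The example referred to above is not reproduced in this extract. As a hedged illustration, not the patent's own code, a typical multiplication-free formulation combines a subtract-absolute-accumulate error metric with shift-based averaging for hierarchical resolution reduction:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two pixel blocks.
    Uses only subtraction, conditional negation, and accumulation."""
    total = 0
    for row_a, row_b in zip(block_a, block_b):
        for a, b in zip(row_a, row_b):
            d = a - b
            total += -d if d < 0 else d  # conditional signed negation
    return total

def subsample_2x(block):
    """2x2 averaging for multi-resolution matching: three adds and a
    right shift by 2 per output pixel -- no multiplies."""
    return [[(block[y][x] + block[y][x + 1] +
              block[y + 1][x] + block[y + 1][x + 1]) >> 2
             for x in range(0, len(block[0]), 2)]
            for y in range(0, len(block), 2)]
```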
  • Interprocessor communication circuitry is needed to access data between processor domains and to communicate domain- specific information such as MV's and reference blocks for block search.
  • each CMOS photo diode has a dimension of 10 μm x 10 μm. With 16 pixels per processor, each processor is preferably limited to a width of 160 μm. This limits the datapath to 36 bits for the arithmetic unit, assuming that the individual ones of the parallel processors are staggered so that certain processing units in the datapath can be made wider. With staggering, the width dimension can at most double, at the cost of more complicated layout and routing.
  • the embedded DRAM can sustain high memory throughput via large data buses (64 bits), the access time of the embedded DRAM with a 3.3V supply is twice as long as the cycle time (50 ns).
  • a DMA (direct memory access) unit is introduced to serve as an interface between the DRAM and the local memory units, as described hereinafter. In addition, the DMA unit may communicate with adjacent processors to access pixel data outside of the processor domain.
  • Program flow control such as branching can be performed outside of the parallel processors. This reduces unnecessary energy overhead to perform program decoding in the parallel processors, which, consequently, gets multiplied by the number of parallel processors to account for the total consumed power.
  • Most image transformation and filtering algorithms are data independent. DCT and color conversion are such examples.
  • a portion of the motion estimation algorithm is also data independent. It is, however, data dependent during MV refinement where local searches are required, as will be described hereinafter.
  • the single chip parallel processor and imager system of Fig. 3 achieves the following three goals simultaneously: realize the image/video processing algorithms; minimize DMA accesses to the pixel DRAM; and maximize computational throughput while keeping the power consumption at a minimal level. Minimizing DMA access to the pixel memory is crucial not only to reduce power consumption, but also to reduce instruction overhead incurred with access latencies.
  • Each processor 40 as illustrated in Fig. 5 described herein contains a DMA 50, a 288-byte block visible RAM 52, a 36-byte auxiliary RAM 54, a 32-word register file 56, an ALU 58, an inter-processor communication unit 60, an external I/O buffer 62, and the processor control unit 64.
  • the processor control unit 64 consists of the program RAM 66, the instruction decoder 68, and the address generation unit 70.
  • the proposed parallel processor and imager system 10 supports certain types of addressing modes and data flow between the memory units mentioned above. For color conversion and DCT, there is no need to access adjacent pixel memories. Transfer of data from the pixel memory 30 to local memories is implemented with a simple DMA. Local memory and addressing mode requirements are implemented as described hereinafter. Two-operand single-cycle instructions can be realized with two data paths 80 and 82 to the ALU 58, a path 80 from local pixel storage (block visible RAM 52) and a second path 82 from coefficient storage (auxiliary RAM 54 or the register file 56). Automatic post increment and offset addressing modes are available.
  • data flow involves adjacent pixel memories.
  • data flow may involve pixel memories that are two processor domains away.
  • the motion estimation algorithm can be partitioned into four main sections: subsampling, hierarchical and multiresolution block matching, MV candidate selection, and MV refinement.
  • the data flow for subsampling and hierarchical resolution reduction is restricted to the current processor domain.
  • Block matching requires access to adjacent pixel memories.
  • MV candidate selection may require access to data stored two processor domains away.
  • the proposed processor enables these types of data flow by employing special DMA, local memories, and addressing schemes, as will be described hereinafter.
  • the DMA 50 illustrated in Fig. 5 is the primary interface between the parallel processor's local memories (i.e. auxiliary RAM 54 and block visible RAM 52) and the embedded pixel DRAM 30. It is also the primary mechanism for inter-processor data transfer.
  • the DMA 50 separates the task of pixel memory access from the parallel processors such that DRAM access latencies do not stall program execution.
  • the DMA 50 also supports memory access requests from pixel DRAM's that lie within two processor domains. Access requests that involve two processor domains are not optimal and are meant only for retrieving small amounts of data.
  • the DMA 50 is implemented in the preferred embodiment described herein with four access registers and memory buffers as is conventional. Each memory access consists of a 64- bit (8 pixels) packet. Access requests are pipelined along with the instructions into the access registers and they are prioritized in a first come first serve fashion. Memory buffers provide the temporary storage needed for the DMA to work with both 64-bit (DRAM) and 8-bit (SRAM) data packets.
  • An access request contains information such as the source and destination addresses, the relative processor domain "read” ID, the relative processor domain "write” ID, and the read/write block size.
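The fields of an access request listed above can be sketched as a record. The field names, types, and the default status flag are illustrative assumptions; the hardware encodes these fields in a packed register format not specified in this extract.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """One DMA access request, per the fields named in the text."""
    src_addr: int          # source address
    dst_addr: int          # destination address
    read_domain_id: int    # relative processor domain "read" ID
    write_domain_id: int   # relative processor domain "write" ID
    block_size: int        # read/write block size, in 64-bit (8-pixel) packets
    done: bool = False     # status flag polled via the wait instruction

# Example: read 4 packets from one processor domain to the left.
req = AccessRequest(src_addr=0x100, dst_addr=0x0,
                    read_domain_id=-1, write_domain_id=0, block_size=4)
```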
  • a status flag is associated with each DMA access register to indicate access request completion. This flag is used in conjunction with a wait instruction to allow better program flow control. Program flow control is necessary during external pixel DRAM accesses, especially during data dependent processing.
  • the DMA 50 resolves access contention from the on-chip or off-chip host processors, as previously described, by placing the request in a FIFO queue. External access requests are treated by the DMA 50 with the same priority as internal access requests. However, each DMA 50 has a limited FIFO queue and, if it is full, new DMA access requests will be stalled, as will the processor 40 issuing the request. To keep track of accesses to pixel DRAM's 30 that are two processor domains away, a relative processor ID and a backward relative processor ID are appended to each access request.
  • the block visible RAM 52 is used to provide temporary storage for a block of up to 16x16 pixels of 9-bit wide data for motion estimation and 8x8 pixels of 18-bit wide data for IDCT to comply with the IEEE error specifications. These addressing schemes provide additional flexibility to facilitate local memory accesses and to reduce DMA overheads, as described hereinafter.
  • the first addressing scheme is called block visible addressing and is illustrated in Fig. 7A. It enables the block visible RAM 52 in one processor (such as 40-3) to be readable by adjacent processors (such as 40-2 and 40-4). This is especially useful in operations that involve access to a block of data stored in the block visible RAM 52 of adjacent processors. It is specifically used in data independent mode; otherwise, the data stored in adjacent block visible RAM's cannot be predetermined. Being able to address data from adjacent block visible RAM's 52 has the advantage of providing a second level of inter-processor data communication without the cost of performing external DMA accesses. The cost of utilizing this addressing scheme is an increased number of SRAM reads per cycle to avoid memory access contentions.
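A minimal sketch of block visible addressing follows, assuming a simple list-of-RAMs model (names and data layout are illustrative, not the hardware implementation): a processor may read its own block visible RAM or that of an immediately adjacent domain.

```python
def read_block_visible(block_rams, pid, rel, x, y):
    """Read pixel (x, y) from the block visible RAM of processor pid + rel.
    rel = 0 reads the local RAM; rel = -1 or +1 reads an adjacent domain."""
    assert rel in (-1, 0, 1), "only adjacent domains are visible"
    return block_rams[pid + rel][y][x]

# Three processor domains, each with a tiny 2x2 block visible RAM.
rams = [[[p * 10 + y * 2 + x for x in range(2)] for y in range(2)]
        for p in range(3)]
value = read_block_visible(rams, 1, -1, 1, 1)  # read from the left neighbor
```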
  • the second addressing scheme is called modulo offset addressing and is illustrated in Fig. 7B.
  • This addressing scheme may work in both data dependent and independent modes.
  • the block visible RAM 52 and the auxiliary RAM 54 are addressed by two address pointers, each pointer representing a coordinate in the Cartesian coordinate system, with the pointer addresses being generated by the processor 40, the DMA 50, and the address generation unit 70.
  • This data address representation is more suitable for image processing due to the 2- dimensional nature of images.
  • this representation supports more flexible addressing modes such as auto increments and modulo in both x and y directions.
  • the modulo offset addressing scheme augments the 2-D address representation by allowing all addresses to be offset by an amount preloaded into the two offset registers (one for each dimension).
  • There are two advantages to using this addressing scheme.
  • First, all address pointers are relative to the offset coordinates (i.e. the offset coordinates are treated as the origin). This allows a program to be reused for processing another set of pixels by simply modifying the offset values. In data dependent mode, this may result in a smaller code size to be stored in the local program RAM 66.
  • the second advantage lies with a reduction of DMA accesses to external pixel DRAM. During block search, blocks of 16x16 pixels belonging to the previous frame need to be read from the pixel memory and stored in the block visible RAM 52.
  • DMA updates may be interleaved into the search algorithm (since a 16x16 block search requires a minimum of 256 cycles to calculate the error metric) to reduce DMA access latencies.
  • modulo offset addressing not only modifies the address pointers, but also the ones generated by the DMA 50. Therefore, DMA access requests can remain the same in the program code.
  • modulo offset addressing is available for both data dependent and independent operations.
  • block visible addressing is available only during data independent mode. Visibility can be turned off to reduce the power consumption induced by multiple reads issued to the block visible RAM.
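Modulo offset addressing can be summarized in a short sketch. The wrap-around dimensions are an assumption based on the 16x16 block size mentioned above, and the function models only the address arithmetic, not the hardware offset registers themselves.

```python
def modulo_offset_address(x, y, off_x, off_y, dim_x=16, dim_y=16):
    """2-D modulo offset addressing: every pointer is taken relative to
    the preloaded offset registers and wraps within the block dimensions,
    so the same program can address a different region by changing only
    the offsets."""
    return ((x + off_x) % dim_x, (y + off_y) % dim_y)

# The offset coordinates act as the origin; pointers past the block
# boundary wrap around modulo the block dimensions.
addr = modulo_offset_address(12, 10, 5, 7)
```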
  • the auxiliary memory 54 in the preferred Fig. 5 embodiment being described herein is a 4x8 by 9-bit SRAM used to provide a second pixel buffer for operations that involve two blocks of pixels (i.e. block matching). It provides the second path 82 to the ALU 58 for optimal computational efficiency. It can also be used to store lookup coefficients that are 9-bit wide during non-block matching operations.
  • the auxiliary memory 54 does not support the two addressing schemes available to the block visible RAM 52 since it is used to store pixel values primarily from the current processor domain. Its role in block matching is to buffer the reference block, which remains constant throughout block search.
  • the auxiliary memory 54 and the block visible RAM 52 are the only two local memories accessible by the DMA.
  • the auxiliary memory 54 also serves as a gateway between the processor 40 and the external I/O buffer 62. Data from the processor 40 can be transferred to the external I/O buffer 62 which communicates with the I/O pins (not shown).
  • a 32 word, 18-bit register file 56 is available.
  • the register file 56 provides a fast, higher precision, low power workable memory space.
  • the register file 56 has two data paths 84 and 86 to the ALU 58 allowing most operations to be performed by the ALU 58 and the register file 56. It is large enough such that it can also store both lookup coefficients (e.g. DCT coefficients) and system variables.
  • the ALU 58 illustrated in Fig. 5 has a limited complexity due to the constraints on area and power.
  • the ALU 58 is implemented, as shown in Fig. 6, with a 36-bit carry select adder 90, a 9-bit subtractor 92, a conditional signed negation unit 94 (for calculating absolute values), a 16x17 multiplier 96, a bit manipulation logic unit 98, a shifter 100, a T register 100, and a 36-bit accumulator 102. Operations involving addition, shifting and bit manipulations can be executed in one cycle.
  • the calculation of the absolute error involves the 9-bit subtractor 92, the conditional signed negation unit 94, and the adder 90.
  • the T register 100 is used in conjunction with the SAA instruction, primarily for algorithmic power reduction.
  • the T register 100 can be preloaded with a pixel value from the auxiliary memory 54 and depending on the algorithm, it can be reused without incurring SRAM memory access energy overheads.
  • the hardware multiplier 96 is implemented to perform the DCT and IDCT efficiently.
  • the inter-processor communication unit 60 illustrated in Fig. 5 is responsible for instruction pipelining and processor status signaling. Instructions are pipelined from one processor 40 to the next and they may be executed immediately or stored in the program RAM 66, depending on whether the processor 40 is operating in data independent or data dependent mode, respectively. In data dependent mode, execution of the code stored in the program RAM 66 occurs immediately after the first instruction has been buffered. Execution of the code segment ends when an end-of-program instruction is reached. At this point, a status flag is set to indicate code completion and the processor 40 halts until a new instruction clears it and forces the processor 40 to operate in data independent mode. The central controller (not shown) reinitializes instruction pipelining when it determines that all processors 40 have completed execution.
  • the task of address generation may be handled by the central controller in order to reduce power consumption.
  • the individual parallel processors 40 consume less than 1 mW of power at a clock rate of 40 MHz, amounting to approximately 40 mW of total power consumption.
  • An estimated cycle count per processor per frame needed for each encoding/decoding step is provided in Fig. 8.
  • the processing rate necessary to perform IBBPBBPBB MPEG-2 encoding at 30 frames per second is estimated to be 35 MIPS for each processor 40.
  • the utilization of the functional units within the processor 40 is approximately 40% for the adder, 6% for the multiplier, 50% for the subtract-absolute-accumulate unit, and 4% for DRAM memory accesses.
  • the processor area is approximately 160 μm by 1800 μm.
  • Appendix A outlines the pseudo code for implementing the RGB to YUV conversion. This pseudo code is provided as one exemplary way in which the processors 40 can implement this and other algorithms.
  • the RGB-YUV conversion is a pixel level operation. It consists of a matrix multiplication of the color vector to produce the target color vector. This is depicted in the following equation:
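The equation itself is not reproduced in this extract. A standard RGB-to-YUV matrix form, shown here with ITU-R BT.601 coefficients as an assumed illustration rather than the patent's exact matrix, is:

```latex
\begin{bmatrix} Y \\ U \\ V \end{bmatrix}
=
\begin{bmatrix}
 0.299 &  0.587 &  0.114 \\
-0.169 & -0.331 &  0.500 \\
 0.500 & -0.419 & -0.081
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
```

Each output pixel thus requires a small fixed number of multiplies and adds, which is why the conversion parallelizes cleanly across processor domains.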
  • the color vectors have to be pre-loaded from pixel DRAM 30
  • the processor uses a 4-stage pipeline: fetch, decode/address generation, read, and execute.
  • the processor takes the pipelined instructions and decodes them directly.
  • as a result, the pipeline effectively looks like a 3-stage pipeline.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention implements a parallel processing architecture in which a plurality of parallel processors concurrently operate upon a different block, preferably a column, of image data. Implemented on a single monolithic integrated circuit chip, this single chip solution has characteristics that provide the throughput necessary to perform computationally complex operations, such as color correction, RGB to YUV conversion and DCT operations in either still or video applications, and motion estimation in digital video processing applications.

Description

LOW-POWER PARALLEL PROCESSOR AND IMAGER INTEGRATED CIRCUIT
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a low-power, single chip, parallel processor and imager system, and, more specifically, in one embodiment, a low power, large scale MPEG2 encoder and imager system for a single-chip digital CMOS video camera is disclosed.
2. Background of the Related Art
Processing of digital data obtained from an image sensor requires complex calculations.
Processing of video data, which requires motion estimation, is particularly computationally intensive. Accordingly, various techniques have been proposed to meet these processing requirements. Thus, processors capable of performing over one billion operations per second are becoming commonplace.
A conflicting requirement for certain applications, however, is that the overall power be minimized, especially for devices such as camcorders and the like that are required to be battery powered. Thus, although the same complex calculations are required, they must be performed with a system that uses minimal amounts of power, so that the devices can operate for a reasonable period of time before requiring recharging.
Existing video processing engines are designed to optimize processing of video data stored in a secondary storage medium, e.g., random access memory, hard drive, or DVD. This results in a need for an external chipset whose primary task is to provide the necessary bandwidth for data transfer between the video engine and the secondary storage medium. The requirement of such an external data transfer eliminates the possibility for a low-power, single-chip solution.
Another existing solution that uses less power is a single integrated circuit chip for both the image sensor and digital processor. An example of such a single integrated circuit chip is the VLSI Vision Limited VV6405 NTSC Colour CMOS Image Sensor. The digital processor disclosed operates upon consecutive rows of pixel data sequentially to perform simple pixel-level computations. While this solution uses less power than other alternatives, it does not have the ability to perform operations at rates that are desired.
SUMMARY OF THE INVENTION
It is an object of the invention, therefore, to provide an integrated image sensor and processor architecture which satisfies low power requirements.
It is a further object of the invention to provide an integrated image sensor and processor capable of performing complex operations.
In view of the above recited objects, among others, the present invention implements a parallel processing architecture in which a plurality of parallel processors concurrently operate upon a different block, preferably a column, of image data. Implemented on a single monolithic integrated circuit chip, this single chip solution has characteristics that provide the throughput necessary to perform computationally complex operations, such as color correction, RGB to YUV conversion and DCT operations in either still or video applications, and motion estimation in digital video processing applications.
In a specific embodiment, a parallel processor and imager system according to the present invention implements, in a preferred embodiment, a single-chip digital CMOS video camera with real-time MPEG2 encoding capability. Computationally intensive operations of the video compression algorithms can be performed on-chip, at a location right beside the output of the imager, resulting in low latency and low power consumption. In all embodiments, this architecture takes advantage of parallelism in image processing algorithms, which is exploited to obtain efficient processing.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other objects, features, and advantages of the present invention are better understood by reading the following detailed description of the preferred embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 illustrates a single monolithic integrated circuit containing an image sensor array and parallel processors according to the present invention;
Figs. 2A-2C illustrate alternative manners in which instructions can be fed into each of the plurality of parallel processors according to the present invention;
Fig. 3 illustrates a single integrated circuit containing an image sensor array, parallel processors, and embedded memory capable of encoding sequential images according to the present invention;
Fig. 4 illustrates another layout of a single integrated circuit for the embodiment described in Fig. 3;
Fig. 5 illustrates a more detailed diagram of one of the parallel processors for the embodiment described in Fig. 3 according to the present invention;
Fig. 6 illustrates a more detailed diagram of one embodiment of an arithmetic logic unit for the embodiment described in Fig. 5 according to the present invention;
Figs 7A and 7B illustrate alternative addressing schemes that can be used with the parallel processors operating upon columns of pixel data according to the present invention;
Fig. 8 provides a table of estimated cycle count per processor per frame needed for each encoding/decoding step.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
This invention, in its most basic form, has the capacity to sense a single image, generate pixel data as a result of the sensed image, and concurrently process that image using a plurality of parallel processors, each of which simultaneously operates on a portion of the pixel data associated with the image. In a preferred embodiment, as described hereinafter, the portion of the pixel image that each processor operates upon is a column of pixel data, although the pixel data that is concurrently operated upon can be divided in various other ways, such as into blocks.
As illustrated in Fig. 1, digital processor and imager system 10 includes a sensor array 12 that detects an image and generates detected signals corresponding thereto. This sensor array 12 is preferably a CMOS photo sensor array, but could also be another type of array, such as a charge coupled device. Also included in the system 10 are a plurality of parallel processors 14, each of which inputs certain predetermined ones of the detected signals by being coupled to and in close proximity with the sensor array 12, and also being coupled to an output buffer 16. The image data, such as from a single image that is sensed in a digital camera, is detected by the sensor array 12, and the detected signals, also called pixel data, are transmitted columnwise into a plurality of parallel processors 14, forty in the embodiment illustrated. Each of the forty processors operates upon the input detected signals to generate encoded signals, which are then output to the output buffer 16, the encoded signals being encoded based upon the algorithm that each of the processors is implementing. In the specific preferred embodiment disclosed hereinafter, the number of parallel processors, the size of each of the parallel processors, the search space within a processor domain, and the size of certain memories, for instance, are based upon an array having a predetermined resolution, a 640x480 array of sensing elements. It should be noted, however, that, for each of the embodiments described, the specific numbers of processors, implementation of each processor, and search space, memory requirements, and other specific implementation aspects as recited are not intended to be limiting, but instead to completely describe a presently preferred embodiment.
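The column-wise partition just described can be sketched in plain Python. This is an illustrative model only, not the patented circuitry; the function names are ours. With a 640x480 sensor and forty processors, each processor domain is a 16-pixel-wide column strip.

```python
# Illustrative model of the column-wise data partition: a 640x480 frame
# divided among 40 parallel processors, 16 pixel columns per processor.
# All identifiers here are hypothetical; the patent describes the
# partition, not this code.

FRAME_W, FRAME_H = 640, 480
NUM_PROCESSORS = 40
COLS_PER_PROC = FRAME_W // NUM_PROCESSORS  # 16 columns per processor domain

def processor_for_column(x: int) -> int:
    """Index of the processor whose domain contains pixel column x."""
    return x // COLS_PER_PROC

def domain_bounds(p: int) -> tuple:
    """Half-open [start, end) column range of processor p's domain."""
    return p * COLS_PER_PROC, (p + 1) * COLS_PER_PROC

print(processor_for_column(37))  # column 37 lies in processor 2's domain
print(domain_bounds(2))          # (32, 48)
```

Because every strip is the same width, all forty processors run the identical instruction stream on disjoint data, which is what makes the sequential instruction pipelining of Figs. 2A-2C possible.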
As described, the relationship of specific implementation aspects is not arbitrary, but based upon considerations in which computationally intensive operations can be simultaneously repeated by multiple processors in order to obtain the fullest throughput. This throughput is dependent in part upon the algorithms that need to be implemented, for example the fact that motion estimation requires knowledge of neighboring pixel data, whereas RGB to YUV conversion and DCT operations do not require such knowledge. Further, the size of the sensing array will assist in determining the proper search space: the larger the sensor array, the larger the search space can be without adverse effects on throughput and increased power usage. Similarly, the larger the number of pixels that each processor operates upon, the greater the resulting clock rate, and the more complex the associated circuitry becomes. Accordingly, specific implementation aspects are dependent upon factors such as these.
Figs. 2A-2C illustrate the manner in which the parallel processors 14 can be loaded with instructions that will then cause them to perform the intended operation. As illustrated in Fig. 2A, each processor 14 can sequentially receive the same instruction, whereas Figs. 2B and 2C illustrate more complex instruction loading sequences. These instruction loading sequences are maintained by a host processor that provides overall control of the parallel processors and uses the equivalent of the interprocessor communication unit to communicate with each of the parallel processors, in a manner that is known with respect to parallel processor implementations generally. The host processor can be implemented on the same monolithic integrated circuit chip, or die, or off-chip. There are also custodial tasks that need to be performed, such as variable length encoding, after the pixel data has been processed. The computation of these tasks can easily be integrated on the same chip, as their computation requirements are much more relaxed compared to that of the pixel level processing.
The descriptions provided hereinafter, which are of a specific preferred embodiment shown in block diagram form in Fig. 3, are also not intended to be interpreted as showing only a single particular embodiment. Rather, the descriptions provided with respect to this embodiment are intended to illustrate that the parallel processors, operating concurrently on various portions of pixel data, can be configured in a variety of ways, since the operations described, upon which these parallel processors operate, are the most computationally difficult. Accordingly, many modifications can be made and still be within the intended scope of the invention. With reference to this embodiment illustrated in Fig. 3, the parallel processor and imager system 20 according to this embodiment of the present invention exploits the parallelism inherent in video processing algorithms, the small dynamic range used by existing video compression algorithms, the digital CMOS sensor technology, and the embedded DRAM technology to realize a low-power, single-chip solution for low-cost video capturing. Thus, the invention enables capture and processing of video data on the same chip. The acquired video data is stored directly in the on-chip embedded DRAM, also termed pixel memory 30, which serves as a high-bandwidth video frame buffer. The bandwidth of embedded DRAM can be as high as 8 Gbyte/s, making it possible to support several (40 in this preferred embodiment described herein) parallel video processors. It should be noted that the preferred embodiment is described with respect to a particular implementation, including a configuration in which each processor is limited to 16 bits. This description is not intended to be limiting, as many alternative configurations are possible, as will be apparent.
For low power purposes, these parallel processors are designed to run at relatively low clock rates described further hereinafter, thereby allowing total computational throughput as high as 1.6 BOPS while consuming less than 40 mW of power.
Fig. 3 also illustrates one layout of the CMOS photo sensors 22, the embedded DRAM 30, and the parallel DSP processors 40-1 to 40-40 on a single integrated circuit chip 20. The CMOS photo sensor array 22 is disposed on a top layer of the integrated circuit chip, in a location where it will be able to receive incident light, and includes, for instance, photo diodes, A/D converters, and A/D offset correction circuitry. The embedded DRAM or pixel memory 30 resides under the photo diodes and provides storage for the current and two past frames of the captured image, as well as intermediate variables such as motion vectors (MV's) and multi-resolution pixel values. The parallel video processors 40 are located next to the imaging circuitry and each operates independently on a 16-pixel-wide column of pixels.
The specific embodiment of the processor system 20 described herein has the advantage of supporting high computational throughput at low clock rates when executing highly repetitive operations. It is less efficient when operating on more complex algorithms that require access to data outside of the processor domain. The size of the processor domain is, therefore, an important design parameter, which requires careful examination of the types of video processing algorithms, as described hereinafter. Processor system 20 is described herein with reference to its structure, and then described with reference to how this structure can implement three algorithms commonly used in video coding standards: RGB to YUV conversion, DCT, and motion estimation. RGB to YUV conversion is performed on the pixel level and requires no additional information from neighboring pixels. It is computationally intensive, requiring multiple multiplies and adds per pixel, but can be easily achieved with a parallel architecture. DCT, on the other hand, is performed on a block basis. It operates on a row or a column of pixels in each pass and requires bit reverse or base offset addressing to simplify the instruction set. Implementing DCT with a pixel-level processor domain would be unnecessarily complicated. Similar to DCT, motion estimation works best with a block-level processor domain.
Unlike DCT, in which processing variables are confined within a block, motion estimation requires access to adjacent blocks regardless of the size of the processor domain. The extent of the locality of interprocessor communication depends on the search space. In this processor design, a search space between processor domains is assumed. No assumption is made about the size of the search space within a processor domain. Furthermore, some motion estimation procedures do not require any multiplication other than simple shifts, as in the example below.
These algorithmic constraints place certain requirements on the design of the parallel processor. In short, the computational throughput required for motion estimation (less than 1.6 BOPS, based on the algorithm proposed in Junavit Chalidabhongse and C.-C. Jay Kuo, "Fast motion vector estimation using multiresolution-spatio-temporal correlations," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 7, No. 3, pp. 477-488, June 1997) results in the most effective size being 16 pixels for each processor with the given technology (preferably less than 0.2 μm) and the clock rate (preferably less than about 40 MHz). Special addressing modes such as bit reversal, base-offset, auto increment, and modulo operations are needed for DCT and motion estimation. Interprocessor communication circuitry is needed to access data between processor domains and to communicate domain-specific information such as MV's and reference blocks for block search.
In addition to constraints posed by the algorithms, physical and technological limitations are also considered. In the physical layout, each CMOS photo diode has a dimension of 10μ x 10μ. With 16 pixels per processor, each processor is preferably limited to a width of 160μ. This limits the datapath to 36 bits for the arithmetic unit assuming that the individual ones of the parallel processors are staggered so that certain processing units in the datapath can be made wider. With staggering, the width dimension can at most double at the cost of more complicated layout and routing. Although the embedded DRAM can sustain high memory throughput via large data buses (64 bits), the access time of the embedded DRAM with a 3.3V supply is twice as long as the cycle time (50 ns). A DMA (direct memory access) unit is introduced to serve as an interface between the DRAM and the local memory units, as described hereinafter. In addition, the DMA unit may communicate with adjacent processors to access pixel data outside of the processor domain.
Finally, an important algorithmic distinction is made with data dependency. As the local program memory space is severely limited, it is desirable to partition the program code such that individual code segments can be stored locally. It is also advantageous to partition the program code based on data dependency. A data independent algorithm enables codes to be executed in a predictable manner. A data dependent algorithm has an unpredictable program flow and, therefore, would require the attention of individual processors. By partitioning the code into data independent and dependent segments, it is possible to store data independent codes outside of the processor and only to store data dependent codes local to the processor. Data independent instructions can be stored on a much larger program space either on-chip or off-chip and instructions would be sequentially pipelined into the individual parallel processors. If instructions are not so pipelined, a large memory bandwidth to the central program store is required. Program flow control such as branching can be performed outside of the parallel processors. This reduces unnecessary energy overhead to perform program decoding in the parallel processors, which, consequently, gets multiplied by the number of parallel processors to account for the total consumed power. Most image transformation and filtering algorithms are data independent. DCT and color conversion are such examples. A portion of the motion estimation algorithm is also data independent. It is, however, data dependent during MV refinement where local searches are required, as will be described hereinafter.
The single chip parallel processor and image system of Fig. 3 according to the invention achieves the following three goals simultaneously: realize the image/video processing algorithms; minimize DMA accesses to the pixel DRAM; and maximize computational throughput while keeping the power consumption at a minimal level. Minimizing DMA access to the pixel memory is crucial not only to reduce power consumption, but also to reduce instruction overhead incurred with access latencies. Each processor 40 as illustrated in Fig. 5 described herein contains a DMA 50, a 288-byte block visible RAM 52, a 36-byte auxiliary RAM 54, a 32-word register file 56, an ALU 58, an inter-processor communication unit 60, an external I/O buffer 62, and the processor control unit 64. The processor control unit 64 consists of the program RAM 66, the instruction decoder 68, and the address generation unit 70.
To realize the image/video processing algorithms, the proposed parallel processor and imager system 20 supports certain types of addressing modes and data flow between the memory units mentioned above. For color conversion and DCT, there is no need to access adjacent pixel memories. Transfer of data from the pixel memory 30 to local memories is implemented with a simple DMA. Local memory and addressing mode requirements are implemented as described hereinafter. Two-operand, single-cycle instructions can be realized with two data paths 80 and 82 to the ALU 58, a path 80 from local pixel storage (block visible RAM 52) and a second path 82 from coefficient storage (auxiliary RAM 54 or the register file 56). Automatic post increment and offset addressing modes are available.
For motion estimation, data flow involves adjacent pixel memories. Depending on the motion estimation algorithm used, data flow may involve pixel memories that are two processor domains away. The motion estimation algorithm can be partitioned into four main sections: subsampling, hierarchical and multiresolution block matching, MV candidate selection, and MV refinement. The data flow for subsampling and hierarchical resolution reduction is restricted to the current processor domain. Block matching requires access to adjacent pixel memories. And MV candidate selection may require access to data stored two processor domains away. The proposed processor enables these types of data flow by employing special DMA, local memories, and addressing schemes, as will be described hereinafter.
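As a concrete, purely illustrative sketch of the block matching step above, the following Python models a full sum-of-absolute-differences (SAD) search for one 16x16 block. In the actual hardware, candidate blocks whose columns fall outside the current processor domain would be fetched from adjacent pixel memories through the DMA; the function names and the search-window size here are assumptions, not taken from the patent.

```python
# Illustrative model (plain Python, not processor microcode) of block
# matching: find the candidate in the previous frame with the smallest
# sum of absolute differences against a 16x16 reference block.

def sad(ref, frame, top, left):
    """SAD between a 16x16 reference block and the 16x16 block of
    `frame` whose top-left corner is (top, left)."""
    return sum(abs(ref[y][x] - frame[top + y][left + x])
               for y in range(16) for x in range(16))

def block_search(ref, prev_frame, block_top, block_left, search=8):
    """Full search over a +/-`search` pixel window around the block's
    position; returns (dy, dx, best SAD) for the best candidate."""
    h, w = len(prev_frame), len(prev_frame[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            top, left = block_top + dy, block_left + dx
            # Skip candidates that fall outside the frame.
            if 0 <= top <= h - 16 and 0 <= left <= w - 16:
                err = sad(ref, prev_frame, top, left)
                if best is None or err < best[2]:
                    best = (dy, dx, err)
    return best
```

Each candidate evaluation is 256 absolute-difference accumulations, which is exactly the per-block cost the subtract-absolute-accumulate datapath described later is built to sustain at one operation per cycle.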
The DMA 50 illustrated in Fig. 5 is the primary interface between the parallel processor's local memories (i.e. auxiliary RAM 54 and block visible RAM 52) and the embedded pixel DRAM 30. It is also the primary mechanism for inter-processor data transfer. The DMA 50 separates the task of pixel memory access from the parallel processors such that DRAM access latencies do not stall program execution. The DMA 50 also supports memory access requests from pixel DRAM's that lie within two processor domains. Access requests that involve two processor domains are not optimal and are meant only for retrieving small amounts of data.
The DMA 50 is implemented in the preferred embodiment described herein with four access registers and memory buffers, as is conventional. Each memory access consists of a 64-bit (8 pixels) packet. Access requests are pipelined along with the instructions into the access registers and they are prioritized in a first-come, first-served fashion. Memory buffers provide the temporary storage needed for the DMA to work with both 64-bit (DRAM) and 8-bit (SRAM) data packets. An access request contains information such as the source and destination addresses, the relative processor domain "read" ID, the relative processor domain "write" ID, and the read/write block size. A status flag is associated with each DMA access register to indicate access request completion. This flag is used in conjunction with a wait instruction to allow better program flow control. Program flow control is necessary during external pixel DRAM accesses, especially during data dependent processing.
The DMA 50 resolves access contention from the on-chip or off-chip host processors, as previously described, by placing the request in a FIFO queue. External access requests are treated by the DMA 50 with the same priority as internal access requests. However, each DMA 50 has a limited FIFO queue and, if it is full, new DMA access requests will be stalled, as will the processor 40 issuing the request. To keep track of accesses to pixel DRAM's 30 that are two processor domains away, a relative processor ID and a backward relative processor ID are appended to each access request.
Two special addressing schemes are available for the block visible RAM 52. The block visible RAM 52 is used to provide temporary storage for a block of up to 16x16 pixels of 9-bit wide data for motion estimation and 8x8 pixels of 18-bit wide data for IDCT to comply with the IEEE error specifications. These addressing schemes provide additional flexibility to facilitate local memory accesses and to reduce DMA overheads, as described hereinafter.
The first addressing scheme is called block visible addressing and is illustrated in Fig. 7A. It enables the block visible RAM 52 in one processor (such as 40-3) to be readable by adjacent processors (such as 40-2 and 40-4). This is especially useful in operations that involve access to a block of data stored in the block visible RAM 52 of adjacent processors. It is specifically used in data independent mode; otherwise, the data stored in adjacent block visible RAM's cannot be predetermined. Being able to address data from adjacent block visible RAM's 52 has the advantage of providing a second level of inter-processor data communication without the cost of performing external DMA accesses. The cost of utilizing this addressing scheme is an increased number of SRAM reads per cycle to avoid memory access contentions. However, it is justified due to a much larger energy and latency overhead associated with DMA accesses. Also, this addressing scheme reduces chip area, a result of reusing the block visible RAM 52.
The second addressing scheme is called modulo offset addressing and is illustrated in Fig. 7B. It involves an automatic modulo offsetting of the addresses issued to the block visible RAM. This addressing scheme may work in both data dependent and independent modes. The block visible RAM 52 and the auxiliary RAM 54 are addressed by two address pointers, each pointer representing a coordinate in the cartesian coordinate system, with the pointer address being generated from the processor 40, the DMA 50, as well as the address generation unit 70. This data address representation is more suitable for image processing due to the 2-dimensional nature of images. In addition, this representation supports more flexible addressing modes such as auto increments and modulo in both x and y directions.
The modulo offset addressing scheme augments the 2-D address representation by allowing all addresses to be offset by an amount preloaded into the two offset registers (one for each dimension). There are two advantages to using this addressing scheme. First, all address pointers are relative to the offset coordinates (i.e. the offset coordinates are treated as the origin). This allows a program to be reused for processing another set of pixels by simply modifying the offset values. In data dependent mode, this may result in a smaller code size needed to be stored in the local program RAM 66. The second advantage lies with a reduction of DMA accesses to external pixel DRAM. During block search, blocks of 16x16 pixels belonging to the previous frame need to be read from the pixel memory and stored in the block visible RAM 52. Almost all blocks used in block search require external pixel DRAM access. However, since consecutive blocks that are retrieved from the pixel DRAM 30 are displaced by only a few pixels, it is costly to re-read pixels in the overlapped region. DMA 50 accesses to external pixel memories 30 are inefficient since they contend with adjacent DMAs for memory bandwidth. The modulo offset addressing scheme offers a simple implementation to reuse pixel values in the block visible RAM 52. Offsets may be modified to reposition the origin to point to the coordinates of the new block. Only non-overlapped pixel regions between the previous block and the current block need to be updated with DMA accesses. These DMA updates may be interleaved into the search algorithm (since a 16x16 block search requires a minimum of 256 cycles to calculate the error metric) to reduce DMA access latencies. Note also that the modulo offset addressing modifies not only the address pointers generated by the processor, but also the ones generated by the DMA 50. Therefore, DMA access requests can remain the same in the program code.
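A minimal software model of the modulo offset addressing scheme shows why only the non-overlapped pixels need a DMA update: repositioning the origin leaves previously loaded pixels addressable at their new relative coordinates. The class and method names below are our assumptions for illustration, not the patent's.

```python
# Illustrative model of modulo offset addressing for the block visible
# RAM: every (x, y) access is offset by a preloaded origin and wrapped
# modulo the RAM dimensions, so resident pixels can be reused in place
# when the search block slides by a few pixels.

RAM_W = RAM_H = 16  # one 16x16 block, matching the text above

class BlockVisibleRAM:
    def __init__(self):
        self.cells = [[0] * RAM_W for _ in range(RAM_H)]
        self.off_x = 0
        self.off_y = 0

    def set_offset(self, ox, oy):
        """Reposition the origin; existing contents stay in place."""
        self.off_x, self.off_y = ox, oy

    def read(self, x, y):
        return self.cells[(y + self.off_y) % RAM_H][(x + self.off_x) % RAM_W]

    def write(self, x, y, v):
        self.cells[(y + self.off_y) % RAM_H][(x + self.off_x) % RAM_W] = v
```

Sliding the block right by two pixels thus amounts to `set_offset(2, 0)`: the overlapped fourteen columns are reread at their new relative addresses, and only the two newly exposed columns (x = 14, 15 relative to the new origin) need to be filled by DMA.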
The modulo offset addressing is available for both data dependent and independent operations. On the other hand, the block visible addressing is available only during data independent mode. Visibility can be turned off to reduce the power consumption induced by multiple reads issued to the block visible RAM.
The auxiliary memory 54 in the preferred Fig. 5 embodiment being described herein is a 4x8 by 9-bit SRAM used to provide a second pixel buffer for operations that involve two blocks of pixels (i.e. block matching). It provides the second path 82 to the ALU 58 for optimal computational efficiency. It can also be used to store lookup coefficients that are 9-bit wide during non-block matching operations. The auxiliary memory 54 does not support the two addressing schemes available to the block visible RAM 52 since it is used to store pixel values primarily from the current processor domain. Its role in block matching is to buffer the reference block, which remains constant throughout block search. The auxiliary memory 54 and the block visible RAM 52 are the only two local memories accessible by the DMA. The auxiliary memory 54 also serves as a gateway between the processor 40 and the external I/O buffer 62. Data from the processor 40 can be transferred to the external I/O buffer 62 which communicates with the I/O pins (not shown).
To complement the 9-bit local SRAM units that make up auxiliary memory 54, a 32 word, 18-bit register file 56 is available. The register file 56 provides a fast, higher precision, low power workable memory space. The register file 56 has two data paths 84 and 86 to the ALU 58 allowing most operations to be performed by the ALU 58 and the register file 56. It is large enough such that it can also store both lookup coefficients (e.g. DCT coefficients) and system variables.
The ALU 58 illustrated in Fig. 5 has a limited complexity due to the constraints on area and power. The ALU 58 is implemented, as shown in Fig. 6, with a 36-bit carry select adder 90, a 9-bit subtractor 92, a conditional signed negation unit 94 (for calculating absolute values), a 16x17 multiplier 96, a bit manipulation logic unit 98, a shifter 100, a T register 100, and a 36-bit accumulator 102. Operations involving addition, shifting and bit manipulations can be executed in one cycle. The calculation of the absolute error involves the 9-bit subtractor 92, the conditional signed negation unit 94, and the adder 90. Operations are pipelined in 2 stages such that one subtract-absolute-accumulate (SAA) instruction can be executed every cycle. The first stage consists of the 9-bit subtraction and conditional signed negation, and the second stage involves accumulating the absolute differences. The T register 100 is used in conjunction with the SAA instruction, primarily for algorithmic power reduction. The T register 100 can be preloaded with a pixel value from the auxiliary memory 54 and, depending on the algorithm, it can be reused without incurring SRAM memory access energy overheads. Finally, the hardware multiplier 96 is implemented to perform the DCT and IDCT efficiently.
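The two-stage SAA pipeline can be modeled in a few lines of Python. This is an illustrative sketch only, ignoring the 9-bit operand width: stage 1 computes the absolute difference, stage 2 accumulates the value produced one cycle earlier, so one SAA result retires per cycle once the pipeline is full.

```python
# Illustrative two-stage model of the subtract-absolute-accumulate (SAA)
# datapath: stage 1 = subtract and conditional signed negation (absolute
# value), stage 2 = accumulate the previous cycle's absolute difference.

def saa_pipeline(a_pixels, b_pixels):
    """Accumulate |a - b| over two pixel streams, one pair per cycle."""
    acc = 0
    stage2 = None  # absolute difference waiting to be accumulated
    # One extra "drain" cycle flushes the last value out of stage 2.
    for a, b in list(zip(a_pixels, b_pixels)) + [(None, None)]:
        if stage2 is not None:
            acc += stage2                         # stage 2: accumulate
        if a is not None:
            diff = a - b                          # stage 1: subtract...
            stage2 = -diff if diff < 0 else diff  # ...conditional negation
        else:
            stage2 = None
    return acc
```

For a 16x16 block this is 256 SAA cycles, which matches the minimum block search cost of 256 cycles per error metric quoted earlier.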
The inter-processor communication unit 60 illustrated in Fig. 5 is responsible for instruction pipelining and processor status signaling. Instructions are pipelined from one processor 40 to the next and they may be executed immediately or stored in the program RAM 66 depending on whether the processor 40 is operating in data independent or dependent modes, respectively. In a data dependent mode, execution of the code stored in the program RAM 66 occurs immediately after the first instruction has been buffered. Execution of the code segment ends when an end-of-program instruction is reached. At this point, a status flag is set to indicate code completion and the processor 40 halts until a new instruction clears it and forces the processor 40 to operate in data independent mode. The central controller (not shown) reinitializes instruction pipelining when it determines that all processors 40 have completed execution. In data independent mode, the task of address generation may be handled by the central controller in order to reduce power consumption.
With the construction described above, the individual parallel processors 40 according to the preferred embodiment of the present invention consume less than 1 mW of power at a clock rate of 40 MHz, amounting to approximately 40 mW of total power consumption. An estimated cycle count per processor per frame needed for each encoding/decoding step is provided in Fig. 8. The number of cycles necessary to perform IBBPBBPBB MPEG-2 encoding at 30 fps is estimated to be 35 MIPS for each processor 40. The utilization of the functional units within the processor 40 is approximately 40% for the adder, 6% for the multiplier, 50% for the subtract-absolute-accumulate unit, and 4% for DRAM memory accesses. The processor area is approximately 160 μm by 1800 μm.
Appendix A outlines the pseudo code for implementing the RGB to YUV conversion. This pseudo code is provided as one exemplary way in which the processors 40 can implement this and other algorithms.
While the invention has been described with reference to preferred embodiments, variations and modifications may be made without departing from the spirit and scope of the invention. For example, while the algorithms noted above are described in terms of visual video, an additional parallel processor can be used to implement an audio channel, the audio being sensed using an analog-to-digital converter. Also, the photo sensor array, as illustrated in Fig. 4, can be located adjacent to the pixel memory, rather than above it as illustrated in Fig. 3. Accordingly, the present invention is properly defined by the following claims.
APPENDIX A
The RGB-YUV conversion is a pixel level operation. It consists of a matrix multiplication of the color vector to produce the target color vector. This is depicted in the following equation:
    [Y]   [a11  a12  a13]   [R]
    [U] = [a21  a22  a23] x [G]
    [V]   [a31  a32  a33]   [B]
The implications are as follows:
1. The color vectors have to be pre-loaded from pixel DRAM 30
2. The coefficients a_ij have to be loaded into the local memory of each processor 40
3. The resulting color vector has to be stored back to the pixel DRAM 30
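As a rough illustration of the matrix multiplication above, the following sketch uses the well-known BT.601 analog coefficients; the patent leaves the aij values unspecified, so these particular numbers are an assumption for demonstration only.

```python
# RGB -> YUV as the 3x3 matrix multiply shown above. The BT.601
# coefficients below are one common choice; the aij values in the
# text are not specified.
A = [
    [ 0.299,    0.587,    0.114  ],   # Y row
    [-0.14713, -0.28886,  0.436  ],   # U row
    [ 0.615,   -0.51499, -0.10001],   # V row
]

def rgb_to_yuv(r, g, b):
    # Dot each row of A with the (R, G, B) column vector.
    return tuple(a1 * r + a2 * g + a3 * b for a1, a2, a3 in A)
```

For pure white (255, 255, 255) the Y row sums to 1.0, giving Y = 255 with U and V near zero, which is a quick sanity check for any coefficient set.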
Note that this algorithm is data independent (i.e., regardless of what values R, G, or B take on, the program flow is not affected). This means that instructions can be pipelined to each processor in a predictable manner, and no local buffering of the instructions is necessary. Each processor executes the instructions on a first-come-first-served basis. In effect, the processor array can be programmed as a single processing entity. Note that the pseudo code given below does not specify how the instructions are fed to each processor.
The processor uses a 4 stage pipeline: fetch, decode/address generation, read, and execute.
In data independent mode, the processor takes each pipelined instruction and decodes it directly. As a result, the pipeline behaves as a 3 stage pipeline.
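A toy timing model illustrates the effect of dropping the fetch stage, under the usual assumption that a full pipeline retires one instruction per cycle after its fill latency; the function below is illustrative, not from the patent.

```python
# Toy timing model: a k-stage pipeline retires one instruction per cycle
# after a k-cycle fill, so n instructions take k + (n - 1) cycles.
def cycles(n_instructions, data_independent):
    # The fetch stage is skipped when instructions arrive pre-decoded
    # over the inter-processor pipeline chain.
    stages = 3 if data_independent else 4
    return stages + (n_instructions - 1)
```

For a 10-instruction segment this gives 13 cycles in data dependent mode and 12 cycles in data independent mode, a saving that compounds over many short code segments.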
A sample pseudo code for implementing this algorithm follows:
[Sample pseudo code omitted; it is reproduced in the published application as Figures imgf000015_0001 through imgf000021_0001.]
Total cycle count for the RGB-YUV conversion is (152 cycles / 8 pixels) * 480 V pixels * 16 H pixels = 145,920 cycles.
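The arithmetic above can be verified directly: 152 cycles per 8-pixel group, applied over the 480 x 16 pixel region handled by one processor.

```python
# 152 cycles per 8-pixel group, over the 480 x 16 pixel region
# handled by one processor.
cycles_per_group, pixels_per_group = 152, 8
v_pixels, h_pixels = 480, 16

total = cycles_per_group // pixels_per_group * v_pixels * h_pixels
assert total == 145_920  # matches the figure quoted above
```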

Claims

We claim:
1. An apparatus for detecting an image at a predetermined resolution comprising: a monolithic integrated circuit chip including: an image sensor array, said image sensor array capable of detecting said image at a predetermined resolution and outputting detected signals corresponding to said sensed image at said predetermined resolution; and a plurality of processors each coupled to said image sensor array and capable of inputting a predetermined number of said detected signals, such that each of said detected signals is input to one of said plurality of processors, said processors each concurrently operating upon said input detected signals and generating encoded signals corresponding thereto, said encoded signals being concurrently output from each of said plurality of parallel processors.
2. An apparatus according to claim 1 further including: a dynamic random access memory disposed on said monolithic integrated circuit chip, and coupled between said image sensor array and said plurality of processors, said dynamic random access memory array being partitioned such that each of said predetermined number of detected signals is stored for a period of time in a partition of said dynamic random access memory associated with one of said processors.
3. An apparatus according to claim 2 wherein said image array is disposed above said dynamic random access memory array on said monolithic integrated circuit chip.
4. An apparatus according to claim 2 wherein said image array is disposed adjacent to said dynamic random access memory array on said monolithic integrated circuit chip.
5. An apparatus according to claim 2 wherein each of said processors includes a direct memory access unit, and each of said direct memory access units is coupled to adjacent direct memory access units, so that said detected signals stored in one of said partitions associated with one of said processors can be accessed by another adjacent processor through said direct memory access units.
6. An apparatus according to claim 5 wherein each of said processors further includes: a local memory coupled to said direct memory access unit; a register unit coupled to said local memory; an arithmetic logic unit coupled between said local memory and said register unit; and means for allowing one local memory in one of said processors to directly access data stored in another local memory of an adjacent one of said processors.
7. An apparatus according to claim 6 wherein said processor implements a motion estimation algorithm.
8. An apparatus according to claim 1 wherein each of said processors operates upon at least 16 of said detected signals.
9. An apparatus according to claim 1 wherein said plurality is at least 40.
10. An apparatus according to claim 1 wherein said plurality is at least one of a length and width pixel dimension of the image divided by 16.
11. An apparatus according to claim 1 wherein each of said processors can operate on pipelined instructions independently of other processors as well as dependently of a host processor.
12. An apparatus according to claim 1 further including a plurality of output buffers, and wherein each output buffer is capable of parsing said encoded signals from an associated one of said processors.
13. An apparatus according to claim 1 wherein each of said processors includes an address generator that can perform 2-D vector addressing of a local memory associated with the processor, said 2-D vector addressing using a 2-D base address offset and 2-D modulo addressing to enable motion estimation.
14. An apparatus according to claim 2 wherein said dynamic random access memory is capable of storing 9-bit and 18-bit data values.
PCT/US1999/012172 1998-05-30 1999-05-28 Low-power parallel processor and imager integrated circuit WO1999063751A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU43266/99A AU4326699A (en) 1998-05-30 1999-05-28 Low-power parallel processor and imager integrated circuit

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US8738798P 1998-05-30 1998-05-30
US60/087,387 1998-05-30

Publications (1)

Publication Number Publication Date
WO1999063751A1 true WO1999063751A1 (en) 1999-12-09

Family

Family ID: 22204887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/012172 WO1999063751A1 (en) 1998-05-30 1999-05-28 Low-power parallel processor and imager integrated circuit

Country Status (2)

Country Link
AU (1) AU4326699A (en)
WO (1) WO1999063751A1 (en)

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHEN K ET AL: "PASIC: A PROCESSOR-A/D CONVERTER-SENSOR INTEGRATED CIRCUIT", PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, NEW ORLEANS, MAY 1 - 3, 1990, vol. 3, no. CONF. 23, 1 May 1990 (1990-05-01), INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, pages 1705 - 1708, XP000163546 *
HSIEH J.Y.F. ; MENG T.H.Y.: "LOW-POWER MPEG 2 ENCODER ARCHITECTURE FOR DIGITAL CMOS CAMERA", PROCEEDINGS OF THE 1998 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, 31 May 1998 (1998-05-31) - 3 June 1998 (1998-06-03), pages 301 - 304, XP002113654 *
KAWAHITO S ET AL: "A CMOS IMAGE SENSOR WITH ANALOG TWO-DIMENSIONAL DCT-BASED COMPRESSION CIRCUITS FOR ONE-CHIP CAMERAS", IEEE JOURNAL OF SOLID-STATE CIRCUITS, vol. 32, no. 12, 1 December 1997 (1997-12-01), pages 2030 - 2041, XP000767452, ISSN: 0018-9200 *
MENG T H: "Wireless Video Systems", PROCEEDINGS IEEE COMPUTER SOCIETY WORKSHOP ON VLSI'98 SYSTEM LEVEL DESIGN, 16 April 1998 (1998-04-16) - 17 April 1998 (1998-04-17), Los Alamitos, CA, USA, pages 28 - 33, XP002113653 *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6757019B1 (en) * 1999-03-13 2004-06-29 The Board Of Trustees Of The Leland Stanford Junior University Low-power parallel processor and imager having peripheral control circuitry
CN1297929C (en) * 2000-05-11 2007-01-31 索尼公司 Data processing equipment and data processing method and the recording medium
AU2007200566B2 (en) * 2003-02-17 2007-05-31 Silverbrook Research Pty Ltd Synchronisation protocol
US7885334B2 (en) 2003-05-06 2011-02-08 Envivio France Image coding or decoding device and method involving multithreading of processing operations over a plurality of processors, and corresponding computer program and synchronisation signal
FR2854754A1 (en) * 2003-05-06 2004-11-12 Envivio France IMAGE ENCODING OR DECODING METHOD AND DEVICE WITH PARALLELIZATION OF PROCESSING ON A PLURALITY OF PROCESSORS, CORRESPONDING COMPUTER PROGRAM AND SYNCHRONIZATION SIGNAL
WO2004100557A2 (en) * 2003-05-06 2004-11-18 Envivio France Image coding or decoding device and method involving multithreading of processing operations over a plurality of processors, corresponding computer program and synchronisation signal
WO2004100557A3 (en) * 2003-05-06 2006-10-05 Envivio France Image coding or decoding device and method involving multithreading of processing operations over a plurality of processors, corresponding computer program and synchronisation signal
US8264496B2 (en) 2007-03-14 2012-09-11 Stmicroelectronics S.A. Data management for image processing
FR2913784A1 (en) * 2007-03-14 2008-09-19 St Microelectronics Sa DATA MANAGEMENT FOR IMAGE PROCESSING
EP1971152A2 (en) * 2007-03-14 2008-09-17 Stmicroelectronics Sa Data management for image processing
EP1971152A3 (en) * 2007-03-14 2010-07-14 Stmicroelectronics Sa Data management for image processing
US8200992B2 (en) 2007-09-24 2012-06-12 Cognitive Electronics, Inc. Parallel processing computer systems with reduced power consumption and methods for providing the same
US9281026B2 (en) 2007-09-24 2016-03-08 Cognitive Electronics, Inc. Parallel processing computer systems with reduced power consumption and methods for providing the same
US8516280B2 (en) 2007-09-24 2013-08-20 Cognitive Electronics, Inc. Parallel processing computer systems with reduced power consumption and methods for providing the same
US8713335B2 (en) 2007-09-24 2014-04-29 Cognitive Electronics, Inc. Parallel processing computer systems with reduced power consumption and methods for providing the same
US8209597B2 (en) 2009-03-23 2012-06-26 Cognitive Electronics, Inc. System and method for achieving improved accuracy from efficient computer architectures
US9141131B2 (en) 2011-08-26 2015-09-22 Cognitive Electronics, Inc. Methods and systems for performing exponentiation in a parallel processing environment
US9063754B2 (en) 2013-03-15 2015-06-23 Cognitive Electronics, Inc. Profiling and optimization of program code/application
CN103237225A (en) * 2013-05-10 2013-08-07 上海国茂数字技术有限公司 Method for correcting video encoding and decoding errors through utilizing luma and chroma (YUV) and red, green and blue (RGB) space union
CN103237225B (en) * 2013-05-10 2016-04-20 上海国茂数字技术有限公司 YUV is utilized to combine the method revising coding and decoding video error with rgb space
US9934043B2 (en) 2013-08-08 2018-04-03 Linear Algebra Technologies Limited Apparatus, systems, and methods for providing computational imaging pipeline
US10360040B2 (en) 2013-08-08 2019-07-23 Movidius, LTD. Apparatus, systems, and methods for providing computational imaging pipeline
US11042382B2 (en) 2013-08-08 2021-06-22 Movidius Limited Apparatus, systems, and methods for providing computational imaging pipeline
US11567780B2 (en) 2013-08-08 2023-01-31 Movidius Limited Apparatus, systems, and methods for providing computational imaging pipeline
US11768689B2 (en) 2013-08-08 2023-09-26 Movidius Limited Apparatus, systems, and methods for low power computational imaging

Also Published As

Publication number Publication date
AU4326699A (en) 1999-12-20

Similar Documents

Publication Publication Date Title
US6757019B1 (en) Low-power parallel processor and imager having peripheral control circuitry
CN107657581B (en) Convolutional neural network CNN hardware accelerator and acceleration method
KR100283161B1 (en) Motion evaluation coprocessor
US6728862B1 (en) Processor array and parallel data processing methods
US20050160406A1 (en) Programmable digital image processor
US6070003A (en) System and method of memory access in apparatus having plural processors and plural memories
US7098437B2 (en) Semiconductor integrated circuit device having a plurality of photo detectors and processing elements
US7580567B2 (en) Method and apparatus for two dimensional image processing
US5197140A (en) Sliced addressing multi-processor and method of operation
US6948050B1 (en) Single integrated circuit embodying a dual heterogenous processors with separate instruction handling hardware
US5696836A (en) Motion estimation processor architecture for full search block matching
US20060002472A1 (en) Various methods and apparatuses for motion estimation
WO1999063751A1 (en) Low-power parallel processor and imager integrated circuit
US20110173416A1 (en) Data processing device and parallel processing unit
US20080320273A1 (en) Interconnections in Simd Processor Architectures
US7073041B2 (en) Virtual memory translation unit for multimedia accelerators
Kim et al. An 81.6 GOPS object recognition processor based on NoC and visual image processing memory
US20030222877A1 (en) Processor system with coprocessor
Kyo et al. An integrated memory array processor for embedded image recognition systems
Hinrichs et al. A 1.3-GOPS parallel DSP for high-performance image-processing applications
Hsieh et al. Low-power MPEG2 encoder architecture for digital CMOS camera
Chen A cost-effective three-step hierarchical search block-matching chip for motion estimation
WO2010113340A1 (en) Single instruction multiple data (simd) processor having a plurality of processing elements interconnected by a ring bus
US20040047422A1 (en) Motion estimation using logarithmic search
Lai et al. An efficient array architecture with data-rings for 3-step hierarchical search block matching algorithm

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

NENP Non-entry into the national phase

Ref country code: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase