WO2013149132A1 - System and method for multi-core hardware video encoding and decoding - Google Patents


Info

Publication number
WO2013149132A1
Authority
WO
WIPO (PCT)
Prior art keywords
core
video data
memory
cores
associated memory
Application number
PCT/US2013/034581
Other languages
French (fr)
Inventor
Aki KUUSELA
Original Assignee
Google Inc.
Application filed by Google Inc. filed Critical Google Inc.
Publication of WO2013149132A1 publication Critical patent/WO2013149132A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43Hardware specially adapted for motion estimation or compensation
    • H04N19/433Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access

Definitions

  • a video encoder can get a new video frame along with reference video frame(s) as inputs, and output a compressed video bitstream.
  • a video decoder can get a compressed video bitstream as input, and output uncompressed (or decoded) video frames. When decoding inter-frames, previous (reference) frames are used for decoding.
  • One aspect of the disclosed embodiments is a method for performing a coding operation on video data using a computing device that includes primary memory, a plurality of cores each having an associated memory, and a bus coupling the primary memory to one or more of the plurality of cores.
  • the method includes storing the video data in the primary memory, loading, via the bus, at least a first portion of the video data from the primary memory into the associated memory of a first core of the plurality of cores, performing a coding operation, by the first core, on the first portion of the video data, loading at least part of a first reference portion from the first core into the associated memory of a second core of the plurality of cores, wherein the first reference portion is loaded directly without being stored in the primary memory, loading, via the bus, at least a second portion of the video data from the primary memory into the associated memory of the second core of the plurality of cores, and performing the coding operation, by the second core, on the second portion of the video data using the first reference portion as a reference.
  • the computing device includes a plurality of cores, each core of the plurality of cores having an associated memory, a primary memory coupled to the associated memory of two or more of the plurality of cores by respective input lines of an internal bus, wherein the first core of the plurality of cores is configured to perform a video data coding operation on a first portion of video data loaded into its associated memory from the primary memory that includes generating a first reference portion, and wherein the second core of the plurality of cores is configured to perform a video data coding operation on a second portion of video data loaded into its associated memory from the primary memory using the first reference portion that is loaded into the associated memory of the second core directly from the associated memory of the first core.
  • FIG. 1 depicts schematically a hardware implementation of a video decoder
  • FIG. 2A depicts a timing diagram for a traditional single core video processor
  • FIG. 2B depicts a timing diagram for a staged three-core video processor
  • FIG. 3 illustrates a synchronization technique in accordance with an implementation of this disclosure
  • FIG. 4 depicts a multi-core computing device in accordance with an implementation of this disclosure
  • FIG. 5 depicts a multi-core computing device in accordance with another implementation of this disclosure.
  • FIG. 6 depicts a process in accordance with an implementation of this disclosure
  • FIG. 7 depicts a process in accordance with the implementation of FIG. 6.
  • FIG. 8 depicts a schematic of a multi-core computing device in accordance with an implementation of this disclosure.
  • FIG. 1 depicts schematically a hardware implementation of a video decoder.
  • Video decoder 100 can get video data 110 as its input (e.g., a video bitstream), and can output decoded frames (e.g., decoded frame 120).
  • reference frame(s) 130 can be used for decoding.
  • a hardware implementation of a video encoder can get new image and reference video data as inputs, and can output compressed video data.
  • a multi-core computing device in accordance with an implementation of this disclosure has two or more processors (called cores) placed within the same integrated circuit.
  • Each of the cores can perform a coding operation (e.g., encoding, decoding or transcoding) on some portion of input video data.
  • a multi-core computing device can perform coding operations for a variety of video compression standards.
  • these standards can include, but are not limited to, Motion JPEG 2000, H.264/MPEG4-AVC, DV and VP8.
  • a hardware accelerator module (e.g., the cores) can be implemented in an application specific integrated circuit (ASIC).
  • the modules can be used to process different portions of a video bitstream (e.g., frames or macroblocks of video data) at the same time.
  • the multi-core solution can effectively multiply data throughput.
  • FIG. 2A depicts a timing diagram for a traditional single core video processor.
  • FIG. 2A shows frames n, n+1 and n+2 processed in a sequential fashion (e.g., a frame such as frame n+1 is processed only after the previous frame, such as frame n, has been processed).
  • FIG. 2B depicts a timing diagram for a staged three-core video processor.
  • frames n through n+5 are processed in a concurrent, but staged fashion.
  • Frames n and n+3 are processed by a first core
  • frames n+1 and n+4 are processed by a second core
  • frames n+2 and n+5 are processed by a third core.
  • Frames n, n+1 and n+2 can be processed concurrently, with the processing of each frame being started at a staged time.
  • Encoding and/or decoding of video data can involve accessing previously processed reference data (e.g., for motion search in an encoder and motion compensation in a decoder).
  • processing by the different cores can be staged and synchronized so that the cores can access the previously processed reference data while different frames are processed concurrently. This can occur by, for example, delaying the processing of a later frame until the reference data from a previous frame is available.
  • synchronization among the individual cores can be solved by reference to the motion search area size.
  • the encoder's motion search area is +/- 32 pixels around a current macroblock (MB).
  • each encoder instance can be started when the instance handling the previous frame is at least 32 pixel rows (two MB rows) ahead. This solution is particularly useful when the requirement for the available reference frame area is directly dependent on the motion search area size.
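The row-lead condition described above can be sketched as a simple check (an illustrative sketch, not taken from the patent text; the function name and the 16-pixel macroblock height are assumptions):

```python
MB_SIZE = 16  # assumed macroblock height in pixels

def can_start(prev_core_mb_rows_done: int, search_range_px: int = 32) -> bool:
    # A later encoder instance may start once the instance handling the
    # previous frame is at least search_range_px of reference pixel rows
    # ahead, i.e. search_range_px // MB_SIZE macroblock rows
    # (32 px -> 2 MB rows).
    required_mb_rows = search_range_px // MB_SIZE
    return prev_core_mb_rows_done >= required_mb_rows
```

With a +/-32 pixel search area, the next core may thus start only once the previous core has completed two macroblock rows.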
  • synchronization can be handled by checking the status of a previous encoder's progress (e.g., at the beginning of each MB row). An implementation of this aspect is explained in further detail with reference to FIG. 3.
  • the reference data generated by a first encoder is fed directly to the next encoder, and the synchronization is handled by managing the flow of data. An implementation of this aspect is explained in further detail with reference to FIGS. 4 and 5.
  • FIG. 3 illustrates a synchronization technique in accordance with an implementation of this disclosure.
  • the synchronization technique uses a reference frame buffer for MB row synchronization in an example where two frames are being simultaneously encoded.
  • an encoder can encode a current frame (e.g., frame N+1) using a reference frame (e.g., frame N) while the reference frame itself is still being encoded.
  • the current frame may be encoded using one core, while the reference frame may be encoded using another core.
  • control software can write a keyword (e.g., 0x007FAB10), such as keywords 302-314, to an address (e.g., a first address) of each macroblock row (e.g., row 300) in reference frame memory 340.
  • in FIG. 3, some blocks of current frame N+1 have been encoded, including block 320.
  • a current block 322 is the next block to be encoded from current frame N+1.
  • FIG. 3 also shows some blocks of reference frame N that have been encoded, such as block 324, and some blocks of reference frame N that have not been encoded, such as block 326.
  • the blocks from reference frame N and current frame N+1 are shown together for reference only. In practice, the blocks from a reference frame and a current frame can be represented and stored separately in memory, such as a primary memory and/or a memory associated with a core.
  • an encoder can read the keyword memory location at the beginning of each MB row within a motion estimation search area of a current block that is being encoded from the current frame and a MB row immediately below the motion estimation search area of the current block.
  • the motion estimation search area can extend, for example, two blocks above and below the current block. If the encoder does find the keyword in any of the locations described above, then the encoder can determine that the lowest MB row in the reference frame belonging to the motion search area has not been processed. If the keyword exists, then the encoder may enter a polling mode 330 where it can wait for the keyword to change from the keyword value.
  • Synchronization for a decoder can be done in an analogous fashion, for example, by using a keyword check during motion compensation.
  • a decoder keyword check may be done before a motion vector is used for decoding.
  • a keyword value can be written to each macroblock row in one or more reference frame buffers.
  • a determination can be made as to whether a motion vector references a reference block in a macroblock row that has not been previously used for decoding.
  • the motion vector can reference a reference block in a macroblock row in a reference frame that is lower than one or more previously referenced rows.
  • the decoder can read the memory location of the keyword in the macroblock row in which the reference block is located to determine if the reference block is available for use. If the reference block is available for use, the decoder can proceed with decoding using the reference block. If the reference block is not available for use, the decoder can enter a polling state until the keyword is overwritten.
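The keyword mechanism for both the encoder and decoder cases can be modeled as follows (a hedged sketch in Python; the function names are assumptions, and a plain list stands in for reference frame memory 340):

```python
KEYWORD = 0x007FAB10  # sentinel written to the first word of each MB row

def init_reference_memory(num_mb_rows: int, words_per_row: int) -> list:
    # Control software writes the keyword to the first address of every
    # macroblock row before the reference frame is produced.
    return [[KEYWORD] + [0] * (words_per_row - 1) for _ in range(num_mb_rows)]

def finish_row(ref_mem: list, row: int, reconstructed_first_word: int) -> None:
    # The core producing the reference frame overwrites the keyword with
    # reconstructed data as it completes each row.
    ref_mem[row][0] = reconstructed_first_word

def row_ready(ref_mem: list, row: int) -> bool:
    # A consuming core reads the keyword location; if the keyword is still
    # present, the row is not yet available and the core enters polling.
    return ref_mem[row][0] != KEYWORD
```

A consuming core would call `row_ready` for each macroblock row in (and immediately below) its motion search area, and poll until the keyword is overwritten.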
  • synchronization of the cores of a multi-core computing device is done using a memory-mapped register interface.
  • each of the cores can broadcast its progress (e.g., the current macroblock line number) in its memory-mapped registers, which can be read through the system bus as if they were addresses in an external memory.
  • This approach can, in some cases, save the overhead of writing the keywords in the reference frames.
  • the cores are configured such that each is able to read the other core's registers to maintain synchronization.
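The register-based alternative might be modeled like this (an illustrative sketch; the class and attribute names are assumptions):

```python
class Core:
    # Minimal model of a core broadcasting its progress (current macroblock
    # line number) in a memory-mapped register readable over the system bus.
    def __init__(self) -> None:
        self.progress = 0  # memory-mapped progress register (MB line number)

def may_proceed(prev_core: Core, needed_mb_line: int) -> bool:
    # A core reads the previous core's register as if it were an address in
    # external memory, and continues only once the needed line is complete.
    return prev_core.progress >= needed_mb_line
```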
  • FIG. 4 illustrates frame data transfer of a multi-core system without chaining, and FIG. 5 illustrates frame data transfer of a multi-core system with chaining.
  • FIG. 4 depicts a multi-core computing device in accordance with an implementation of this disclosure.
  • the cores of multi-core computing device 400 are not chained.
  • Computing device 400 includes control processor 410, primary memory 420, input/output port 430 and internal bus 440.
  • Internal bus 440 may be a standard bus interface such as an Advanced Microcontroller Bus Architecture (AMBA) Advanced eXtensible Interface (AXI), which can be used as an on-chip bus in SoC designs.
  • Control processor 410 can interconnect and communicate with the other components of computing device 400 via internal bus 440.
  • Computing device 400 may include primary memory 420, which can represent volatile memory devices and/or non-volatile memory devices. Although primary memory 420 is illustrated for simplicity as a single unit, it can include multiple physical units of memory that may be physically distributed. In an implementation, the volatile memory may be or include dynamic random access memory (DRAM). Computing device 400 may access a computer application program stored in non-volatile internal memory or stored in external memory. External memory may be coupled to computing device 400 via input/output (I/O) port 430. A DRAM controller (not shown) can connect I/O port 430 to internal bus 440. A portion of video data may be received via I/O port 430 and stored in primary memory 420.
  • video data (e.g., reference frames) can be stored in external (off-chip) memory.
  • decoding 1080p video may require around 9 Mbytes of RAM, the cost of which can be commercially undesirable if implemented as on-chip SRAM rather than off-chip DRAM.
  • Computing device 400 can also include two or more cores, here processors 450, 460, 470, ..., 480.
  • Each of the processors can have an associated memory.
  • each of the processors may have an associated on-chip cache memory.
  • some or all of the cores are associated with a shared on-chip cache memory or on-chip buffer memory.
  • the memory locations in the shared on-chip cache memory can be segmented such that each processor has exclusive access to a portion of the shared memory, memory locations can be accessible by more than one processor, or a combination thereof.
  • Each of the processors can have a read new input video data line 452, 462, 472, ..., 482.
  • processors 450, 460, 470, 480 can execute executable instructions that cause the processor to perform a coding operation (e.g., encoding, decoding or transcoding) on some portion of input video data received via read new input video data line 452, 462, 472, ..., 482 and stored within an associated memory of each of the processors.
  • each of processors 450, 460, 470, ..., 480 also includes an output video data line (not shown).
  • the output video data lines can be used to write video data output by the coding operation(s) performed by the processor(s) to, for example, the primary memory 420.
  • processors 450, 460, 470, 480 can write video data output to primary memory 420 via internal bus 440.
  • Each core of computing device 400 may read input data (e.g., a new frame) and reference data (e.g., a reference frame) from primary memory 420 via internal bus 440 coupled to its respective read new input video data line 452, 462, 472, ..., 482 and read reference data line 454, 464, 474, ..., 484.
  • each processor also may write reference data (e.g., a reference frame) to primary memory 420 via its respective write reference data line 456, 466, 476, ..., 486.
  • Read new input video data lines 452, 462, 472, ..., 482, write reference data lines 456, 466, 476, ..., 486, and read reference data lines 454, 464, 474, ..., 484 can represent data flow via the standard bus interface such as AXI.
  • each core 450, 460, 470, ..., 480 might have one read data channel and one write data channel through which all data may be transferred and by which the core connects to internal bus 440.
  • FIG. 5 depicts a multi-core computing device in accordance with another implementation of this disclosure.
  • the cores of multi-core computing device 500 are chained.
  • Computing device 500 can include control processor 510, primary memory 520, I/O port 530 and internal bus 540.
  • the structure of each of these components can correspond to the description above with regard to like components of computing device 400.
  • Computing device 500 can include two or more cores, here processors 550, 560, 570, ..., 580.
  • processors 550, 560, 570, 580 may execute executable instructions that cause the processor to perform a coding operation (e.g., encoding, decoding or transcoding) on some portion of input video data received via read new input video data lines 552, 562, 572, ..., 582.
  • Read new input video data lines 552, 562, 572, ..., 582 can be implemented as channels on the standard bus interface.
  • Video data received using read new input video data lines 552, 562, 572, ..., 582 can be stored within an associated memory of each of the processors.
  • each of processors 550, 560, 570, ..., 580 also includes an output video data line (not shown).
  • the output video data lines can be used to write video data output by the coding operation(s) performed by the processor(s) to, for example, primary memory 520.
  • processors 550, 560, 570, ..., 580 write video data output to primary memory 520 via internal bus 540.
  • computing device 500 synchronizes operation of processors 550, 560, 570, ..., 580 by connecting a write reference output of a first processor to the read reference input of a second processor via a write reference line, and by connecting the write reference output of the second processor to the read reference input of a third processor via a write reference line, and so on to the Nth processor.
  • computing device 500 can synchronize operation of processors 550, 560, 570, 580 by connecting a write reference output of processor 550 to the read reference input of processor 560 via write reference line 556, and by connecting the write reference output of processor 560 to the read reference input of processor 570 via write reference line 566, and so on to the Nth processor.
  • connections from one core to another may be actual physical connections that are additional to the standard data buses of the internal bus system.
  • the Nth processor can have a direct output reference 586 that provides its reference to primary memory 520 via the bus interface 540, which may be a standard bus interface such as AMBA AXI.
  • the configuration of computing device 500 may allow for a processor (e.g., an encoder core) processing earlier video data (e.g., an earlier frame, macroblock, macroblock row, slice, etc.) to feed its output directly to the processor processing the next portion of video data (e.g., to another encoder core processing the next frame, macroblock, etc.). Feeding its output directly to the next processor refers to providing the output to the processor without writing and reading data to/from primary memory 520.
  • a latter encoder core in the succession can begin its encoding task when it has collected sufficient data to fill its internal search area memory.
  • the first encoder core of the chain might not write out reference data unless the next encoder in the line is ready to receive it.
  • the cores can be synchronized by way of the reference data they submit and receive and additional control level synchronization logic can be avoided.
  • the slowest encoder core in the succession can determine the overall speed of the system.
  • three frames' worth of data may need to be transferred over a system bus to encode one frame: (1) a new input frame to be read by the encoder core; (2) at least one reference frame to be read by the encoder for use in typical inter frame coding, for example; and (3) at least one reference frame to be written by the encoder core for subsequent processing.
  • the first encoder core in the succession does not write its reference frame to primary memory 520
  • the second encoder core does not read its reference from primary memory 520. Therefore, instead of transferring six frames' worth of data to encode the two frames being processed by the two encoder cores, only four frames' worth are transferred.
  • N processors can operate in a similar manner. That is, N processors can read new input video data from a memory, e.g., primary memory 520.
  • Processor 1 can also read reference data from the memory, and processor N can write reference data to the memory.
  • the processors between processor 1 and processor N can avoid reading/writing reference data from/to the primary memory.
  • in this configuration, N new input video frames should be available for encoding before any of the N processors finishes encoding a frame, after which a burst of N compressed frames can be output in a very short period.
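The bus-traffic saving described above can be captured in a small calculation (a sketch under the stated assumptions: N input-frame reads, one reference read by the first core, and one reference write by the Nth core; function names are assumptions):

```python
def frames_transferred_unchained(n_cores: int) -> int:
    # Without chaining, each core reads a new input frame, reads a reference
    # frame, and writes a reference frame via primary memory:
    # three frames' worth of transfers per frame encoded.
    return 3 * n_cores

def frames_transferred_chained(n_cores: int) -> int:
    # With chaining, only the first core reads a reference from primary
    # memory and only the Nth core writes one back; the intermediate
    # references pass directly from core to core.
    return n_cores + 2
```

For two cores this gives six frames' worth of transfers without chaining versus four with chaining, matching the example above.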
  • FIG. 6 depicts a process in accordance with an implementation of this disclosure. Specifically, FIG. 6 depicts process 600 for performing a coding operation on video data using a computing device having a plurality of processors, each having an associated memory.
  • video data can be stored in a primary memory of the computing device.
  • At least a first portion of the video data can then be loaded into the associated memory of a first processor at step 610.
  • the first processor can perform a coding operation on this first portion of video data at step 615.
  • At least part of a first reference portion from the first processor can then be loaded directly into the associated memory of a second processor.
  • a second portion of video data can be loaded from the primary memory into the associated memory of the second processor.
  • the second processor can then perform the coding operation on the second portion of the video data using the first reference portion as a reference at step 630.
  • Process 600 optionally continues at bubble A to process 700 (FIG. 7).
  • FIG. 7 depicts process 700 for performing the coding operation on the computing device, where three or more processors may be implemented.
  • process 700 can load at least a part of a second reference from the second processor into a third processor's associated memory.
  • a third portion of the video data can be loaded into the associated memory of the third processor at step 710.
  • the third processor can perform the coding operation, at step 715, on the third portion of video data using the second reference portion as a reference.
  • the post-coding operation video data from the first, second, and third processors can be stored in the primary memory at step 720.
  • the post-coding operation video data can be stored as the individual processor completes the coding operation on its respective portion of video data.
  • process 600 and 700 are depicted and described as a series of steps. However, steps in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, steps in accordance with this disclosure may occur with other steps not presented and described herein. Furthermore, some of the described steps may not be required in some implementations.
  • encoding quality can be increased by using multiple reference frames.
  • for example, the encoder can perform a motion search using another reference frame further in the past, such as one encoded at a particularly good quality (e.g., a long-term reference frame in the H.264 coding scheme or a golden frame in the VPx coding scheme).
  • some or all encoding cores of a multi-core encoder can be configured to read this same additional reference frame.
  • the additional reference frame is read by only the first encoder core in the chain, and a delay buffer is inserted within each encoder core through which the additional reference frame propagates.
  • FIG. 8 depicts a schematic of a multi-core computing device in accordance with one implementation of this disclosure.
  • Computing device 800 includes N-cores in this example, namely processor 1 through processor N.
  • processor 1 is coupled to a memory (not shown) via line 852.
  • Processor 1 and processor 2 are coupled via lines 856 and 856';
  • processor 2 and processor 3 are coupled via lines 866, 866' and 866''; and so forth.
  • lines 852, 856, 856', 866, 866', 866'', etc. may be physical lines connecting the corresponding processors or may be representative, e.g., of channels, data flow or data transmissions between the processors. In the latter case, lines 856 and 856', for example, may represent two logically different data transmissions between processor 1 and processor 2, but the data transmissions may occur along the same physical line.
  • processor 1 receives reference video data (e.g., a reference frame) from memory (e.g., a DRAM) along line 852.
  • for convenience, this reference frame is referred to in this paragraph as RF0.
  • Processor 1, which in this example is the first core in the chain, uses the reference frame RF0 received from the memory to encode a video frame.
  • Processor 1 outputs data to processor 2 using line 856.
  • the data (e.g., a reconstruction of the frame encoded by processor 1) can be used by processor 2 as a reference frame.
  • this data is referred to in this paragraph as RF1.
  • Processor 1 also outputs the reference frame RF0 it received from the memory to processor 2 using line 856'.
  • Processor 2 can use reference frame RF0 as an additional reference frame to encode a video frame.
  • Processor 2 outputs data to processor 3 using line 866.
  • the data can be used by processor 3 as a reference frame. For convenience, this data is referred to in this paragraph as RF2.
  • Processor 2 also passes along the data, reference frames RF0 and RF1, it received from processor 1.
  • Processor 3 can use RF0, RF1 and RF2 to encode a video frame. This can continue for additional cores in the chain up to processor N.
  • the first core in the chain can use one reference frame; the second core can use the output of the previous core as well as the input to the previous processor; and the third core can use the output of the previous core as well as the inputs to the two previous processors, and so on.
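The growth in available references along the chain can be sketched as follows (an illustration; the RF naming follows the description above, and the function name is an assumption):

```python
def references_available(core_index: int) -> list:
    # Following the chain described above: core 1 uses RF0 (read from
    # memory); core 2 uses RF0 plus RF1 (core 1's reconstruction); core k
    # can use RF0 through RF(k-1), passed along the chain.
    return [f"RF{i}" for i in range(core_index)]
```

This is why encoders further in the succession may find better motion search matches: each has one more candidate reference frame than its predecessor.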
  • Encoders further in the succession may provide higher compression rates than the earlier encoders, as they may have the capability of finding better motion search matches due to the availability of more reference frames.
  • the performance of the multi-core accelerator can be expressed as a function of N, where N is the number of processors.
  • synchronization of the cores can introduce a latency component, which can be dependent on, for an encoder, the number of encoding cores in the encoding device, or, for a decoder, the maximum downwards pointing decoded motion vector.
  • the maximum can refer to a maximum positive/lower offset between a current block and a reference block referred to by a motion vector. For example, if a maximum downwards pointing decoded motion vector references a reference block in, for example, a substantially lower macroblock row, the latency component can be increased.
  • Some implementations of the disclosed techniques and devices can enable, for example, computing devices 400, 500, and/or 800 to encode and/or decode high video resolutions, such as those greater than 1080p.
  • the ability for a computing device to process video data is based at least in part on a number of clock cycles required to process a unit of video data (e.g., a macroblock) and a clock rate of the core(s) used to perform the processing (i.e., cycles per second).
  • the required processing rate for a particular video resolution can be determined based on a number of units per frame (8,160 macroblocks in the case of 1080p, for example) and a frame rate (e.g. 24 frames per second).
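The required processing rate can be worked out as follows (an illustrative sketch; the function names are assumptions, and the even division of work across cores is an idealization that ignores synchronization latency):

```python
def required_mb_rate(mbs_per_frame: int, fps: int) -> int:
    # Macroblocks that must be processed per second for a given
    # resolution and frame rate.
    return mbs_per_frame * fps

def min_core_clock_hz(mbs_per_frame: int, fps: int,
                      cycles_per_mb: int, n_cores: int = 1) -> float:
    # Rough minimum per-core clock rate, assuming the work divides evenly
    # across the cores.
    return required_mb_rate(mbs_per_frame, fps) * cycles_per_mb / n_cores
```

For 1080p at 24 frames per second, 8,160 macroblocks per frame gives 195,840 macroblocks per second; adding cores divides the per-core clock rate needed to sustain that rate.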
  • Some implementations capable of high resolution processing can include reducing a number of clock cycles required to process a unit of video data (e.g., a macroblock) in one or more cores, increasing a clock rate of one or more cores, splitting operations at a macroblock level, splitting operations at a macroblock row level, splitting operations at a slice level, or a combination thereof to enable a computing device to achieve the required processing rate for a given resolution and frame rate.
  • Splitting operations can include concurrent processing of portions of video data using separate processing cores.
  • slices can be processed in groups according to a number of available processing cores. For example, if four processing cores are available, the first four slices to be processed can each be processed using a different core. Subsequent groups of four slices can also each be processed using a different core.
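The round-robin slice grouping can be sketched as follows (an illustration; the function name is an assumption):

```python
def assign_slices(num_slices: int, num_cores: int) -> dict:
    # Round-robin assignment: with four cores, the first four slices each
    # go to a different core, as do subsequent groups of four.
    return {slice_index: slice_index % num_cores
            for slice_index in range(num_slices)}
```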
  • The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion.
  • the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations.
  • the processors described herein can be any type of device, or multiple devices, capable of manipulating or processing information now-existing or hereafter developed, including, for example, optical processors, quantum and/or molecular processors, general purpose processors, special purpose processors, IP cores, ASICs, programmable logic arrays, programmable logic controllers, microcode, firmware, microcontrollers, microprocessors, digital signal processors, memory, or any combination of the foregoing.
  • the terms "processor,” “core,” and “controller” should be understood as including any of the foregoing, either singly or in combination.
  • a processor of those described herein may be illustrated for simplicity as a single unit, it can include multiple processors or cores.
  • a computer program application stored in non-volatile memory or computer-readable medium may include code or executable instructions that when executed may instruct or cause a controller or processor to perform methods discussed herein such as a method for performing a coding operation on video data using a computing device containing a plurality of processors in accordance with an embodiment of the invention.
  • the computer-readable medium may be a non-transitory computer-readable media including all forms and types of memory and all computer-readable media except for a transitory, propagating signal.
  • the non- volatile memory or computer- readable medium may be external memory.

Abstract

Coding operations on video data using a computing device having a plurality of cores are disclosed. At least a first portion of the video data is loaded from a primary memory into an associated memory of a first core of the plurality of cores, and a coding operation is performed by the first core on the first portion of the video data. At least part of a first reference portion from the first core is loaded directly into the associated memory of a second core of the plurality of cores, and at least a second portion of the video data is loaded from the primary memory into the associated memory of the second core. The coding operation is then performed by the second core on the second portion of the video data using the first reference portion as a reference.

Description

SYSTEM AND METHOD FOR MULTI-CORE HARDWARE
VIDEO ENCODING AND DECODING
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No.
61/618,189, filed March 30, 2012, and to U.S. Patent Application No. 13/460,024, filed April 30, 2012, each of which is hereby incorporated in its entirety by reference.
BACKGROUND
[0002] A video encoder can get a new video frame along with reference video frame(s) as inputs, and output a compressed video bitstream. A video decoder can get a compressed video bitstream as input, and output uncompressed (or decoded) video frames. When decoding inter-frames, previous (reference) frames are used for decoding.
[0003] Video resolution and frame rate requirements continue to increase.
Beyond 1080p, it can be challenging to provide the required data throughput using fixed-function hardware accelerators, whose performance is limited by the maximum clock frequency at which the logic circuits can run.
SUMMARY
[0004] Disclosed herein are embodiments of systems, methods, and apparatuses for multi-core hardware video encoding and decoding.
[0005] One aspect of the disclosed embodiments is a method for performing a coding operation on video data using a computing device that includes primary memory, a plurality of cores each having an associated memory, and a bus coupling the primary memory to one or more of the plurality of cores. The method includes storing the video data in the primary memory, loading, via the bus, at least a first portion of the video data from the primary memory into the associated memory of a first core of the plurality of cores, performing a coding operation, by the first core, on the first portion of the video data, loading at least part of a first reference portion from the first core into the associated memory of a second core of the plurality of cores, wherein the first reference portion is loaded directly without being stored in the primary memory, loading, via the bus, at least a second portion of the video data from the primary memory into the associated memory of the second core of the plurality of cores, and performing the coding operation, by the second core, on the second portion of the video data using the first reference portion as a reference.
[0006] Another aspect of the disclosed embodiments is a computing device. The computing device includes a plurality of cores, each core of the plurality of cores having an associated memory, and a primary memory coupled to the associated memory of two or more of the plurality of cores by respective input lines of an internal bus, wherein a first core of the plurality of cores is configured to perform a video data coding operation on a first portion of video data loaded into its associated memory from the primary memory, the coding operation including generating a first reference portion, and wherein a second core of the plurality of cores is configured to perform a video data coding operation on a second portion of video data loaded into its associated memory from the primary memory using the first reference portion that is loaded into the associated memory of the second core directly from the associated memory of the first core.
[0007] These and other embodiments will be described in additional detail hereafter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The description herein makes reference to the accompanying drawings wherein like reference numerals refer to like parts throughout the several views, and wherein:
[0009] FIG. 1 depicts schematically a hardware implementation of a video decoder;
[0010] FIG. 2A depicts a timing diagram for a traditional single core video processor;
[0011] FIG. 2B depicts a timing diagram for a staged three-core video processor;
[0012] FIG. 3 illustrates a synchronization technique in accordance with an implementation of this disclosure;
[0013] FIG. 4 depicts a multi-core computing device in accordance with an implementation of this disclosure;
[0014] FIG. 5 depicts a multi-core computing device in accordance with another implementation of this disclosure;
[0015] FIG. 6 depicts a process in accordance with an implementation of this disclosure;
[0016] FIG. 7 depicts a process in accordance with the implementation of FIG. 6; and
[0017] FIG. 8 depicts a schematic of a multi-core computing device in accordance with an implementation of this disclosure.
DETAILED DESCRIPTION
[0018] FIG. 1 depicts schematically a hardware implementation of a video decoder.
Video decoder 100 can get video data 110 as its input (e.g., a video bitstream), and can output decoded frames (e.g., decoded frame 120). When decoding inter-frames, reference frame(s) 130 can be used for decoding. Analogously, a hardware implementation of a video encoder can get new image and reference video data as inputs, and can output compressed video data.
[0019] A multi-core computing device in accordance with an implementation of this disclosure has two or more processors (called cores) placed within the same integrated circuit. Each of the cores can perform a coding operation (e.g., encoding, decoding or transcoding) on some portion of input video data.
[0020] A multi-core computing device can perform coding operations for a variety of video compression standards. By way of example, these standards can include, but are not limited to, Motion JPEG 2000, H.264/MPEG4-AVC, DV and VP8.
[0021] In an implementation of a multi-core solution, several copies of a hardware accelerator module (e.g., cores) can be placed on the same application specific integrated circuit (ASIC), and the modules can be used to process different portions of a video bitstream (e.g., frames or macroblocks of video data) at the same time. In implementations where the bus and memory architecture is not a performance limiting factor, the multi-core solution can effectively multiply data throughput.
[0022] FIG. 2A depicts a timing diagram for a traditional single core video processor.
A traditional single core video processor processes video frames sequentially. For example, FIG. 2A shows frames n, n+1 and n+2 processed in a sequential fashion (e.g., a frame such as frame n+1 is processed only after the previous frame, such as frame n, has been processed).
[0023] FIG. 2B depicts a timing diagram for a staged three-core video processor. In the implementation shown in FIG. 2B, frames n through n+5 are processed in a concurrent, but staged fashion. Frames n and n+3 are processed by a first core, frames n+1 and n+4 are processed by a second core, and frames n+2 and n+5 are processed by a third core. Frames n, n+1 and n+2 can be processed concurrently, with the processing of each frame being started at a staged time. In this example, the processing of frame n is started first, the processing of frame n+1 is started after a portion of the processing of frame n is completed, and the processing of frame n+2 is started after a portion of the processing of frame n+1 is completed.
[0024] Encoding and/or decoding of video data can involve accessing previously processed reference data (e.g., for motion search in an encoder and motion compensation in a decoder). In a multi-core solution, processing by the different cores can be staged and synchronized so that the cores can access the previously processed reference data while different frames are processed concurrently. This can occur by, for example, delaying the processing of a later frame until the reference data from a previous frame is available.
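The staged timing of FIG. 2B can be illustrated with a toy scheduler. This is a hypothetical model, not the patented mechanism: the frame duration, the one-unit stagger, and the function name are all invented for illustration.

```python
def staged_schedule(num_frames, num_cores, frame_time=3, stagger=1):
    """Toy model of FIG. 2B-style staging (arbitrary time units).

    Frame i runs on core i % num_cores.  It may start once (a) that core has
    finished its previous frame and (b) the processing of the previous frame
    is at least `stagger` units under way, so reference data is available."""
    starts = {}
    for i in range(num_frames):
        core_free = starts[i - num_cores] + frame_time if i >= num_cores else 0
        ref_ready = starts[i - 1] + stagger if i >= 1 else 0
        starts[i] = max(core_free, ref_ready)
    return starts
```

With three cores, frames n, n+1 and n+2 start at staggered times while overlapping in flight, matching the concurrent-but-staged pattern of FIG. 2B.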
[0025] In one implementation of a multi-core computing device performing video encoding operations, synchronization among the individual cores can be solved by reference to the motion search area size. As an example, assume that the encoder's motion search area is +/- 32 pixels around a current macroblock (MB). In such an example, each encoder instance can be started when the instance handling the previous frame is at least 32 pixel rows (two MB rows) ahead. This solution is particularly useful when the requirement for the available reference frame area is directly dependent on the motion search area size.
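The two-MB-row rule in the example above follows directly from the search range. A minimal sketch of that relationship (the helper name and constant are assumptions for illustration):

```python
import math

MB_SIZE = 16  # macroblock height in pixels

def required_row_lead(search_range_px):
    """Minimum number of MB rows the instance handling the previous frame
    must be ahead before the next encoder instance can safely start."""
    return math.ceil(search_range_px / MB_SIZE)

# A +/-32-pixel motion search area implies a lead of 32 / 16 = 2 MB rows.
```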
[0026] In one aspect of this disclosure, synchronization can be handled by checking the status of a previous encoder's progress (e.g., at the beginning of each MB row). An implementation of this aspect is explained in further detail with reference to FIG. 3. In another aspect of this disclosure, the reference data generated by a first encoder is fed directly to the next encoder, and the synchronization is handled by managing the flow of data. An implementation of this aspect is explained in further detail with reference to FIGS. 4 and 5.
[0027] FIG. 3 illustrates a synchronization technique in accordance with an implementation of this disclosure. In FIG. 3, the synchronization technique uses a reference frame buffer for MB row synchronization in an example where two frames are being simultaneously encoded. In an implementation, an encoder can encode a current frame (e.g., frame N+1) using a reference frame (e.g., frame N) at the same time as the reference frame itself is being encoded. The current frame may be encoded using one core, while the reference frame may be encoded using another core.
[0028] In an implementation consistent with FIG. 3, before starting to encode a new frame in hardware, control software can write a keyword (e.g., 0x007FAB10), such as keywords 302-314, to an address (e.g., a first address) of each macroblock row (e.g., row 300) in reference frame memory 340. For example, if operating at 1080p resolution, there can be 1088/16 = 68 macroblock rows and associated write operations per frame.
[0029] As shown in FIG. 3, some blocks of current frame N+1 have been encoded, including block 320. A current block 322 is a next block to be encoded from current frame N+1. Also shown are some blocks of reference frame N that have been encoded, such as block 324, and some blocks of reference frame N that have not been encoded, such as block 326. The blocks from reference frame N and current frame N+1 are shown together for reference only. In practice, the blocks from a reference frame and a current frame can be represented and stored separately in memory, such as a primary memory and/or a memory associated with a core.
[0030] To maintain synchronization (e.g., to ensure that reference data needed to encode the current frame is available), an encoder can read the keyword memory location at the beginning of each MB row within a motion estimation search area of a current block that is being encoded from the current frame and a MB row immediately below the motion estimation search area of the current block.
[0031] The motion estimation search area can extend, for example, two blocks above and below the current block. If the encoder does find the keyword in any of the locations described above, then the encoder can determine that the lowest MB row in the reference frame belonging to the motion search area has not been processed. If the keyword exists, then the encoder may enter a polling mode 330 where it can wait for the keyword to change from the keyword value.
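The keyword mechanism of paragraphs [0028]-[0031] can be sketched in software as follows. The keyword value comes from the example above; the helper names and the list standing in for reference frame memory are inventions for illustration.

```python
KEYWORD = 0x007FAB10  # sentinel written to the first address of each MB row

def init_reference_rows(num_mb_rows):
    """Control software marks every MB row of the reference frame as
    'not yet encoded' before hardware encoding of a new frame starts."""
    return [KEYWORD] * num_mb_rows

def row_available(ref_rows, row):
    """A row is safe to use as reference once the encoder working on the
    reference frame has overwritten the keyword with real data; while the
    keyword remains, a reader would enter the polling mode and wait."""
    return ref_rows[row] != KEYWORD
```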
[0032] Synchronization for a decoder can be done in an analogous fashion, for example, by using a keyword check during motion compensation. In an implementation, a decoder keyword check may be done before a motion vector is used for decoding. As described with respect to the encoder, a keyword value can be written to each macroblock row in one or more reference frame buffers. During decoding, a determination can be made as to whether a motion vector references a reference block in a macroblock row that has not been previously used for decoding.
[0033] For example, the motion vector can reference a reference block in a macroblock row in a reference frame that is lower than one or more previously referenced rows. In this case, the decoder can read the memory location of the keyword in the macroblock row in which the reference block is located to determine if the reference block is available for use. If the reference block is available for use, the decoder can proceed with decoding using the reference block. If the reference block is not available for use, the decoder can enter a polling state until the keyword is overwritten.
[0034] In an implementation of this disclosure, synchronization of the cores of a multi-core computing device is done using a memory-mapped register interface. In such an implementation, each of the cores can broadcast its progress (e.g., the current macroblock line number) in its memory-mapped registers, which can be read through the system bus as if they were addresses in an external memory. This approach can, in some cases, save the overhead of writing the keywords in the reference frames. In a system-on-a-chip (SoC) implementation, the cores are configured such that each is able to read the other cores' registers to maintain synchronization.
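A register-based variant of the same check might look like the following sketch; the register layout and the two-row lead are assumptions carried over from the earlier motion-search example, not details from the disclosure.

```python
class CoreRegisters:
    """Stand-in for one core's memory-mapped progress register."""
    def __init__(self):
        self.current_mb_row = -1  # last fully processed macroblock row

def may_process_row(own_row, prev_core, lead_rows=2):
    """A core may process its row once the core handling the previous frame
    has progressed at least `lead_rows` rows further, i.e., the reference
    rows covering the motion search area already exist."""
    return prev_core.current_mb_row - own_row >= lead_rows
```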
[0035] Referring now to FIGS. 4 and 5, one technique for synchronizing staged cores
(e.g., encoder cores) is to have the core processing an earlier frame feed its output directly to the core processing the next frame, i.e., without writing reference frame data to, and reading it back from, primary memory (e.g., a DRAM). Generally, FIG. 4 illustrates frame data transfer of a multi-core system without chaining and FIG. 5 illustrates frame data transfer of a multi-core system with chaining.
[0036] More specifically, FIG. 4 depicts a multi-core computing device in accordance with an implementation of this disclosure. In FIG. 4, the cores of multi-core computing device 400 are not chained. Computing device 400 includes control processor 410, primary memory 420, input/output port 430 and internal bus 440. Internal bus 440 may be a standard bus interface such as an Advanced Microcontroller Bus Architecture (AMBA) Advanced eXtensible Interface (AXI), which can be used as an on-chip bus in SoC designs. Control processor 410 can interconnect and communicate with the other components of computing device 400 via internal bus 440.
[0037] Computing device 400 may include primary memory 420, which can represent volatile memory devices and/or non-volatile memory devices. Although primary memory 420 is illustrated for simplicity as a single unit, it can include multiple physical units of memory that may be physically distributed. In an implementation, the volatile memory may be or include dynamic random access memory (DRAM). Computing device 400 may access a computer application program stored in non-volatile internal memory or stored in external memory. External memory may be coupled to computing device 400 via input/output (I/O) port 430. A DRAM controller (not shown) can connect I/O port 430 to internal bus 440. A portion of video data may be received via I/O port 430 and stored in primary memory 420. In accordance with a SoC implementation of this disclosure, video data (e.g., reference frames) can be stored in external (off-chip) memory. For example, decoding 1080p video may require around 9 Mbytes of RAM, the cost of which can be commercially undesirable if implemented as on-chip SRAM rather than off-chip DRAM.
[0038] Computing device 400 can also include two or more cores, here processors
450, 460, 470, 480. Each of the processors (e.g., cores) can have an associated memory. For example, each of the processors may have an associated on-chip cache memory. In another example, some or all of the cores are associated with a shared on-chip cache memory or on-chip buffer memory. The memory locations in the shared on-chip cache memory can be segmented such that each processor has exclusive access to a portion of the shared memory, memory locations can be accessible by more than one processor, or a combination thereof.
[0039] Each of the processors can have a read new input video data line 452, 462,
472, ..., 482; a read reference data line 454, 464, 474, ..., 484; and a write reference data line 456, 466, 476, 486. Each of processors 450, 460, 470, 480 can execute executable instructions that cause the processor to perform a coding operation (e.g., encoding, decoding or transcoding) on some portion of input video data received via read new input video data line 452, 462, 472, ..., 482 and stored within an associated memory of each of the processors. In one implementation, each of processors 450, 460, 470, 480 also includes an output video data line (not shown). The output video data lines can be used to write video data output by the coding operation(s) performed by the processor(s) to, for example, the primary memory 420. In an alternative implementation, processors 450, 460, 470, 480 can write video data output to primary memory 420 via internal bus 440.
[0040] Each core of computing device 400 may read input data (e.g., a new frame) and reference data (e.g., a reference frame) from primary memory 420 via internal bus 440 coupled to its respective read new input video data line 452, 462, 472, ..., 482 and read reference data line 454, 464, 474, ..., 484. Similarly, each processor also may write reference data (e.g., a reference frame) to primary memory 420 via its respective write reference data line 456, 466, 476, ..., 486. Read new input video data lines 452, 462, 472, ..., 482, write reference data lines 456, 466, 476, ..., 486, and read reference data lines 454, 464, 474, ..., 484 can represent data flow via the standard bus interface such as AXI. That is, each core 450, 460, 470, ..., 480 generally has one read data channel and one write data channel through which all data may be transferred and by which the core connects to internal bus 440.
[0041] FIG. 5 depicts a multi-core computing device in accordance with another implementation of this disclosure. In FIG. 5, the cores of multi-core computing device 500 are chained. Computing device 500 can include control processor 510, primary memory 520, I/O port 530 and internal bus 540. The structure of each of these components can correspond to the description above with regard to like components of computing device 400.
[0042] Computing device 500 can include two or more cores, here processors 550,
560, 570, 580. Each of processors 550, 560, 570, 580 may execute executable instructions that cause the processor to perform a coding operation (e.g., encoding, decoding or transcoding) on some portion of input video data received via read new input video data lines 552, 562, 572, ..., 582. Read new input video data lines 552, 562, 572, ..., 582 can be implemented as channels on the standard bus interface. Video data received using read new input video data lines 552, 562, 572, ..., 582 can be stored within an associated memory of each of the processors.
[0043] In one implementation, each of processors 550, 560, 570, ..., 580 also includes an output video data line (not shown). The output video data lines can be used to write video data output by the coding operation(s) performed by the processor(s) to, for example, primary memory 520. In an alternative implementation, processors 550, 560, 570, ..., 580 write video data output to primary memory 520 via internal bus 540.
[0044] In an implementation, computing device 500 synchronizes operation of processors 550, 560, 570, ..., 580 by connecting a write reference output of a first processor to the read reference input of a second processor via a write reference line, and by connecting the write reference output of the second processor to the read reference input of a third processor via a write reference line, and so on to the Nth processor. For example, computing device 500 can synchronize operation of processors 550, 560, 570, 580 by connecting a write reference output of processor 550 to the read reference input of processor 560 via write reference line 556, and by connecting the write reference output of processor 560 to the read reference input of processor 570 via write reference line 566, and so on to the Nth processor. In some cases, the connections from one core to another (e.g., the chained write reference output / read reference input) may be actual physical connections that are additional to the standard data buses of the internal bus system. The Nth processor can have a direct output reference 586 that provides its reference to primary memory 520 via the bus interface 540, which may be a standard bus interface such as AMBA AXI.
[0045] The configuration of computing device 500 may allow for a processor (e.g., an encoder core) processing earlier video data (e.g., an earlier frame, macroblock, macroblock row, slice, etc.) to feed its output directly to the processor processing the next portion of video data (e.g., to another encoder core processing the next frame, macroblock, etc.). Feeding its output directly to the next processor refers to providing the output to the processor without writing and reading data to/from primary memory 520.
[0046] With this approach, a latter encoder core in the succession can begin its encoding task when it has collected sufficient data to fill its internal search area memory. The first encoder core of the chain might not write out reference data unless the next encoder in the line is ready to receive it. Using such a technique, the cores can be synchronized by way of the reference data they submit and receive and additional control level synchronization logic can be avoided. In such an implementation, the slowest encoder core in the succession can determine the overall speed of the system.
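The flow control in paragraph [0046] resembles a bounded producer/consumer link: the upstream core blocks instead of spilling reference data when the downstream core is not ready. A sketch under that interpretation (the buffer depth, MB-row granularity, and function names are assumptions):

```python
import threading
from queue import Queue

SEARCH_AREA_ROWS = 4  # assumed capacity of a core's internal search-area memory

def run_chain(num_rows):
    """Two chained cores: the first emits reconstructed MB rows; the second
    consumes them as motion-search reference data.  The bounded queue makes
    the producer block when the consumer's buffer is full, synchronizing the
    cores by the reference data itself."""
    link = Queue(maxsize=SEARCH_AREA_ROWS)
    received = []

    def first_core():
        for row in range(num_rows):
            link.put(row)   # blocks while the next core's buffer is full
        link.put(None)      # end-of-frame marker

    def second_core():
        while (row := link.get()) is not None:
            received.append(row)

    threads = [threading.Thread(target=first_core),
               threading.Thread(target=second_core)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received
```

The slowest side of the link paces the whole pipeline, mirroring the observation that the slowest encoder core determines overall system speed.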
[0047] By way of example, for the case of a single core encoder, three frames' worth of data may need to be transferred over a system bus to encode one frame: (1) a new input frame to be read by the encoder core; (2) at least one reference frame to be read by the encoder for use in typical inter frame coding, for example; and (3) at least one reference frame to be written by the encoder core for subsequent processing. In accordance with an implementation of this disclosure with two or more encoder cores chained together, as described with reference to computing device 500, the first encoder core in the succession does not write its reference frame to primary memory 520, and the second encoder core does not read its reference from primary memory 520. Therefore, instead of transferring six frames' worth of data to encode the two frames being processed by the two encoder cores, only four frames' worth are transferred.
[0048] A generalization for N processors can operate in a similar manner. That is, N processors can read new input video data from a memory, e.g., primary memory 520.
Processor 1 can also read reference data from the memory, and processor N can write reference data to the memory. The processors between processor 1 and processor N can avoid reading/writing reference data from/to the primary memory. Hence, when the data includes frames and the process is encoding, the number of frames to transfer FT for encoding N frames with N processors becomes:
[0049] FT = N+2 [Equation 1]
[0050] The more processors chained together in computing device 500, the more efficient memory usage can become, for example:
[0051] FT = 3, when N=l;
[0052] FT = 4, when N=2 (reducing memory bandwidth by 33% compared to single processor processing);
[0053] FT = 5, when N=3 (reducing memory bandwidth by 44% compared to single processor processing);
[0054] FT = 6, when N=4 (reducing memory bandwidth by 50% compared to single processor processing); and so forth.
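Equation 1 and the savings listed above can be checked numerically; the 3N baseline is the three frames' worth of traffic per encoded frame noted in paragraph [0047] for a single-core encoder, and the function names are illustrative only.

```python
def frames_transferred(n):
    """Equation 1: frames' worth of bus traffic to encode N frames with N
    chained cores (N new inputs + one reference read + one reference write)."""
    return n + 2

def bandwidth_reduction(n):
    """Relative saving versus N unchained cores, each moving 3 frames' worth
    of data per encoded frame."""
    return 1 - frames_transferred(n) / (3 * n)
```

This reproduces the figures above (33% for N=2, 44% for N=3, 50% for N=4) and shows the saving approaching 67% as the chain grows.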
[0055] In one implementation, N new input video frames of data are to be available for encoding before any of the N processors finish encoding a frame, after which a burst of N compressed frames can be output in a very short period.
[0056] FIG. 6 depicts a process in accordance with an implementation of this disclosure. Specifically, FIG. 6 depicts process 600 for performing a coding operation on video data using a computing device having a plurality of processors, each having an associated memory. At step 605, video data can be stored in a primary memory of the computing device. At least a first portion of the video data can then be loaded into the associated memory of a first processor at step 610. The first processor can perform a coding operation on this first portion of video data at step 615.
[0057] At least part of a first reference from the first processor can be loaded at step
620 into a second processor's associated memory. At step 625, a second portion of video data can be loaded from the primary memory into the associated memory of the second processor. The second processor can then perform the coding operation on the second portion of the video data using the first reference portion as a reference at step 630.
[0058] Process 600 optionally continues at bubble A to process 700 (FIG. 7). FIG. 7 depicts process 700 for performing the coding operation on the computing device, where three or more processors may be implemented. At step 705, process 700 can load at least a part of a second reference from the second processor into a third processor's associated memory. Next, a third portion of the video data can be loaded into the associated memory of the third processor at step 710. The third processor can perform the coding operation, at step 715, on the third portion of video data using the second reference portion as a reference. Finally, the post-coding operation video data from the first, second, and third processors can be stored in the primary memory at step 720. Alternatively, the post-coding operation video data can be stored as the individual processor completes the coding operation on its respective portion of video data.
[0059] For simplicity of explanation, process 600 and 700 are depicted and described as a series of steps. However, steps in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, steps in accordance with this disclosure may occur with other steps not presented and described herein. Furthermore, some of the described steps may not be required in some implementations.
[0060] In one aspect of this disclosure, encoding quality can be increased by using multiple reference frames. For example, in real-time video conferencing with a fixed camera position, it can be beneficial to do a motion search for another reference frame further in the past, such as one encoded at a particularly good quality (e.g., a long-term reference frame in the H.264 coding scheme or a golden frame in the VPx coding scheme). In one
implementation, some or all encoding cores of a multi-core encoder can be configured to read this same additional reference frame. In one implementation with chained encoder cores, the additional reference frame is read by only the first encoder core in the chain, and a delay buffer is inserted within each encoder core through which the additional reference frame propagates.
[0061] In one implementation using multiple reference frames, an increasing number of reference frames are employed by cores in the chain. This can be further appreciated with reference to FIG. 8.
[0062] FIG. 8 depicts a schematic of a multi-core computing device in accordance with one implementation of this disclosure. Computing device 800 includes N cores in this example, namely processor 1 through processor N. In FIG. 8, processor 1 is coupled to a memory (not shown) via line 852. Processor 1 and processor 2 are coupled via lines 856 and 856'; processor 2 and processor 3 are coupled via lines 866, 866' and 866"; and so forth. It shall be understood that lines 852, 856, 856', 866, 866', 866", etc., may be physical lines connecting the corresponding processors or may be representative, e.g., of channels, data flow or data transmissions between the processors. In the latter case, lines 856 and 856', for example, may represent two logically different data transmissions between processor 1 and processor 2, but the data transmissions may occur along the same physical line.
[0063] In use, processor 1 receives reference video data (e.g., a reference frame) from memory (e.g., a DRAM) along line 852. For convenience, this reference frame is referred to in this paragraph as RF0. Processor 1, which in this example is the first core in the chain, uses the reference frame RF0 received from the memory to encode a video frame. Processor 1 outputs data to processor 2 using line 856. The data (e.g., a reconstruction of the frame encoded by processor 1) can be used by processor 2 as a reference frame. For convenience, this data is referred to in this paragraph as RF1. Processor 1 also outputs the reference frame RF0 it received from the memory to processor 2 using line 856'. Processor 2 can use reference frame RF0 as an additional reference frame to encode a video frame. Processor 2 outputs data to processor 3 using line 866. The data can be used by processor 3 as a reference frame. For convenience, this data is referred to in this paragraph as RF2. Processor 2 also passes along the reference frames RF0 and RF1 it received from processor 1. Processor 3 can use RF0, RF1 and RF2 to encode a video frame. This can continue for additional cores in the chain up to processor N.
[0064] Accordingly, in the example above, the first core in the chain can use one reference frame, the second core can use the output of the previous core as well as the input to the previous processor, the third core can use the output of the previous core as well as the input to the two previous processors, and so on. Encoders further in the succession may provide higher compression rates than the earlier encoders, as they may have the capability of finding better motion search matches due to the availability of more reference frames.
Additionally, this increased encoding compression generally incurs no additional system bus bandwidth usage; however, each core further in the chain may employ more internal computational logic.
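The accumulation of references along the chain can be expressed compactly; the RF naming follows the FIG. 8 discussion, while the function itself is only an illustration.

```python
def references_available(core_index):
    """References visible to core k (1-based) in the FIG. 8 chain: RF0 from
    memory plus the reconstructed outputs RF1..RF(k-1) of earlier cores."""
    return ["RF%d" % i for i in range(core_index)]
```

Core 1 searches a single reference, while core 3 can already search three, which is why later cores in the succession may find better motion search matches.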
[0065] With regard to the performance of computing device 800, assuming each core performs at the same speed, the performance of the multi-core accelerator can be expressed as:
[0066] P_multi-core = P * N ; wherein [Equation 2]
P is the performance of a single processor; and
N is the number of processors.
[0067] In addition, synchronization of the cores can introduce a latency component, which can be dependent on, for an encoder, the number of encoding cores in the encoding device, or, for a decoder, the maximum downwards pointing decoded motion vector. In such cases, the maximum can refer to a maximum positive/lower offset between a current block and a reference block referred to by a motion vector. For example, if a maximum downwards pointing decoded motion vector references a reference block in, for example, a substantially lower macroblock row, the latency component can be increased.
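One way to model the decoder-side latency component described in [0067] is to translate the maximum downward-pointing motion vector into the number of extra macroblock rows a downstream core must wait for. The function below is an illustrative model under that assumption, not a specification of the disclosed hardware:

```python
def decoder_sync_latency_rows(motion_vectors_y, mb_size=16):
    """Worst-case extra macroblock rows a consuming core must wait for.

    motion_vectors_y: vertical motion-vector components in pixels,
    with positive values pointing downward in the reference frame.
    Only downward (positive) vectors delay the consumer, and the delay
    grows with how far below the current block the reference block sits.
    """
    max_down = max((mv for mv in motion_vectors_y if mv > 0), default=0)
    return -(-max_down // mb_size)  # ceiling division: pixels -> macroblock rows

# A vector pointing 40 px downward reaches into the third macroblock
# row below the current block, so three additional rows must be ready.
```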
[0068] Some implementations of the disclosed techniques and devices can enable, for example, computing devices 400, 500, and/or 800 to encode and/or decode high video resolutions, such as those greater than 1080p. The ability of a computing device to process video data is based at least in part on the number of clock cycles required to process a unit of video data (e.g., a macroblock) and the clock rate of the core(s) used to perform the processing (i.e., cycles per second). The required processing rate for a particular video resolution can be determined based on the number of units per frame (8,160 macroblocks in the case of 1080p, for example) and a frame rate (e.g., 24 frames per second).
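The arithmetic in [0068] can be worked through directly. The figure of 8,160 macroblocks for 1080p follows from dividing the 1920×1080 frame into 16×16 macroblocks (rounding partial rows up):

```python
def required_mb_rate(width, height, fps, mb=16):
    """Macroblocks per frame and per second needed for a resolution/frame rate."""
    mbs_per_frame = -(-width // mb) * -(-height // mb)  # ceiling division per axis
    return mbs_per_frame, mbs_per_frame * fps

mbs, rate = required_mb_rate(1920, 1080, 24)
assert mbs == 8160          # 120 columns x 68 rows, matching the figure above
print(rate)                 # 195,840 macroblocks per second at 24 fps
```

If, say, a core required some fixed number of cycles per macroblock, multiplying that by the per-second rate gives the minimum clock rate for a single core; splitting the work across N cores divides that requirement accordingly.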
[0069] Some implementations capable of high resolution processing can include reducing the number of clock cycles required to process a unit of video data (e.g., a macroblock) in one or more cores, increasing the clock rate of one or more cores, splitting operations at the macroblock level, splitting operations at the macroblock row level, splitting operations at the slice level, or a combination thereof, to enable a computing device to achieve the required processing rate for a given resolution and frame rate.
[0070] Splitting operations can include concurrent processing of portions of video data using separate processing cores. In one example, at the slice level, slices can be processed in groups according to a number of available processing cores. For example, if four processing cores are available, the first four slices to be processed can each be processed using a different core. Subsequent groups of four slices can also each be processed using a different core.
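The slice-level grouping in [0070] amounts to a round-robin assignment of slices to the available cores; a minimal sketch (the function name is invented here):

```python
def assign_slices_round_robin(num_slices, num_cores):
    """Map slice index -> core index, processing slices in groups of num_cores."""
    return {s: s % num_cores for s in range(num_slices)}

# With four cores, slices 0-3 are each processed on a different core,
# and the next group of four slices repeats the same pattern.
mapping = assign_slices_round_robin(8, 4)
assert mapping[0] == 0 and mapping[3] == 3
assert mapping[4] == 0 and mapping[7] == 3
```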
[0071] The words "example" or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, "X includes A or B" is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term "an embodiment" or "one embodiment" or "an implementation" or "one implementation" throughout is not intended to mean the same embodiment or implementation unless described as such.
[0072] The processors described herein can be any type of device, or multiple devices, capable of manipulating or processing information now-existing or hereafter developed, including, for example, optical processors, quantum and/or molecular processors, general purpose processors, special purpose processors, IP cores, ASICs, programmable logic arrays, programmable logic controllers, microcode, firmware, microcontrollers, microprocessors, digital signal processors, memory, or any combination of the foregoing. In the claims, the terms "processor," "core," and "controller" should be understood as including any of the foregoing, either singly or in combination. Although any of the processors described herein may be illustrated for simplicity as a single unit, each can include multiple processors or cores.
[0073] In accordance with an embodiment of the invention, a computer program application stored in non-volatile memory or computer-readable medium (e.g., register memory, processor cache, RAM, ROM, hard drive, flash memory, CD ROM, magnetic media, etc.) may include code or executable instructions that when executed may instruct or cause a controller or processor to perform methods discussed herein such as a method for performing a coding operation on video data using a computing device containing a plurality of processors in accordance with an embodiment of the invention.
[0074] The computer-readable medium may be a non-transitory computer-readable medium, including all forms and types of memory and all computer-readable media except for a transitory, propagating signal. In one implementation, the non-volatile memory or computer-readable medium may be external memory.
[0075] Although specific hardware and data configurations have been described herein, note that any number of other configurations may be provided in accordance with embodiments of the invention. Thus, while there have been shown, described and pointed out fundamental novel features of the invention as applied to several embodiments, it will be understood that various omissions, substitutions and changes in the form and details of the illustrated embodiments, and in their operation, may be made by those skilled in the art without departing from the scope of the invention. Substitutions of elements from one embodiment to another are also fully intended and contemplated. The invention is defined solely with regard to the claims appended hereto, and equivalents of the recitations therein.

Claims

What is claimed is:
1. A method for performing a coding operation on video data using a computing device that includes primary memory, a plurality of cores each having an associated memory, and a bus coupling the primary memory to one or more of the plurality of cores, the method comprising:
storing the video data in the primary memory;
loading, via the bus, at least a first portion of the video data from the primary memory into the associated memory of a first core of the plurality of cores;
performing a coding operation, by the first core, on the first portion of the video data;
loading a first reference portion from the first core into the associated memory of a second core of the plurality of cores, wherein the first reference portion is loaded directly without being stored in the primary memory;
loading, via the bus, at least a second portion of the video data from the primary memory into the associated memory of the second core of the plurality of cores; and
performing the coding operation, by the second core, on the second portion of the video data using the first reference portion as a reference.
2. The method of claim 1, further including:
loading at least part of a second reference portion from the second core into the associated memory of a third core of the plurality of cores;
loading, via the bus, at least a third portion of the video data from the primary memory into the associated memory of the third core of the plurality of cores; and
performing the coding operation, by the third core, on the third portion of the video data using the second reference portion.
3. The method of claim 2, further including storing in the primary memory output video data from the first, second, and third cores.
4. The method of claim 2 or claim 3, wherein the coding operation of the second core and the third core begins after the respective associated memory of the second and third cores has loaded an amount of video data from the primary memory that is greater than a threshold.
5. The method of claim 2 or claim 3, wherein respective first and second reference portions are loaded after the respective second and third cores each provide an indication of being ready to receive a reference portion.
6. The method of claim 2 or claim 3, further comprising: loading at least part of the first reference portion from the first core into the associated memory of the third core from the associated memory of the second core; and wherein performing the coding operation by the third core includes using the first reference portion.
7. The method of claim 1 or claim 2, wherein the coding operations of the first core and the second core are synchronized using a memory-mapped register interface.
8. The method of claim 1 or claim 2, wherein the coding operations of the first core and the second core are synchronized using keywords written to a reference frame buffer.
9. The method of claim 1 or claim 2, wherein performing the coding operation by the second core includes:
identifying a current block of the second portion of the video data to be encoded;
identifying a search area in the first reference portion that is associated with the current block, the search area associated with a plurality of macroblock rows of the first reference portion;
reading a keyword memory location associated with each of the plurality of macroblock rows;
determining that none of the read keyword memory locations includes a keyword value; and
encoding the current block using the search area.
10. The method of claim 1 or claim 2, wherein performing the coding operation by the second core includes:
identifying a current block of the second portion of the video data to be encoded;
identifying a search area in the first reference portion that is associated with the current block, the search area associated with a plurality of macroblock rows of the first reference portion;
reading a keyword memory location associated with each of the plurality of macroblock rows;
determining that at least one of the read keyword memory locations includes a keyword value;
polling the read keyword memory location that includes the keyword value until the location does not include the keyword value; and
encoding the current block using the search area after the polling is completed.
11. A computing device comprising:
a plurality of cores, each core of the plurality of cores having an associated memory;
a primary memory coupled to the associated memory of two or more of the plurality of cores by respective lines of an internal bus;
wherein the first core of the plurality of cores is configured to perform a video data coding operation on a first portion of video data loaded into its associated memory from the primary memory; and
wherein the second core of the plurality of cores is configured to perform a video data coding operation on a second portion of video data loaded into its associated memory from the primary memory using a first reference portion that is loaded into the associated memory of the second core directly from the associated memory of the first core.
12. The computing device of claim 11, further comprising a first video data reference line connecting the first core and the second core; wherein the computing device is configured to load the first reference portion into the associated memory of the second core directly from the associated memory of the first core using the first video data reference line.
13. The computing device of claim 12, further comprising a second video data reference line connecting the second core and a third core; wherein the second core is configured to generate a second reference portion; wherein the computing device is configured to load the second reference portion into the associated memory of the third core directly from the associated memory of the second core using the second video data reference line; and
wherein the third core of the plurality of cores is configured to perform a video data coding operation on a third portion of video data loaded into its associated memory from the primary memory using the second reference portion.
14. The computing device of claim 13, further comprising: a third video data reference line connecting the second core and the third core; wherein the computing device is configured to load the first reference portion into the associated memory of the third core directly from the associated memory of the second core using the third video data reference line; and
wherein the third core of the plurality of cores is further configured to perform the video data coding operation using the first reference portion.
15. The computing device of claim 11 or claim 12, further comprising a plurality of respective output lines coupling the primary memory and the plurality of cores;
wherein each of the plurality of cores is configured to write output video data to the respective output lines for storage in the primary memory.
16. The computing device of claim 11 or claim 12, wherein the associated memory of the second core includes a reference frame buffer having a plurality of macroblock rows and the second core is configured to use respective keyword memory locations of the plurality of macroblock rows to synchronize its video data coding operation with the video data coding operation of the first core.
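The keyword-based row synchronization recited in claims 8-10 and 16 can be modeled in software. In the sketch below, the sentinel value, class name, and method names are invented for illustration and are not part of the claimed hardware: the producing core clears a keyword location when it finishes writing a macroblock row, and the consuming core reads the keyword locations covering its search area, polling any location that still holds the keyword value before encoding.

```python
import threading
import time

KEYWORD = 0xDEADBEEF  # sentinel meaning "this macroblock row is not yet written"

class RowBuffer:
    def __init__(self, num_rows):
        # Every macroblock row starts marked as unavailable (claim 16).
        self.keywords = [KEYWORD] * num_rows
        self.rows = [None] * num_rows

    def producer_write(self, row_idx, data):
        self.rows[row_idx] = data
        self.keywords[row_idx] = 0  # clearing the keyword publishes the row

    def consumer_wait(self, search_rows):
        # Claims 9-10: read the keyword location of each row in the search
        # area; poll any location that still holds the keyword value, then
        # encode using the search area once all rows are available.
        for r in search_rows:
            while self.keywords[r] == KEYWORD:
                time.sleep(0.001)  # a hardware core would stall or poll here
        return [self.rows[r] for r in search_rows]

buf = RowBuffer(4)
producer = threading.Thread(
    target=lambda: [buf.producer_write(r, f"row{r}") for r in range(4)])
producer.start()
print(buf.consumer_wait([0, 1, 2]))  # blocks until rows 0-2 are published
producer.join()
```

Note that this is only a behavioral model; in the claimed device the keyword locations reside in the reference frame buffer of the consuming core's associated memory.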
PCT/US2013/034581 2012-03-30 2013-03-29 System and method for multi-core hardware video encoding and decoding WO2013149132A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201261618189P 2012-03-30 2012-03-30
US61/618,189 2012-03-30
US13/460,024 US20130259137A1 (en) 2012-03-30 2012-04-30 System and Method for Multi-Core Hardware Video Encoding And Decoding
US13/460,024 2012-04-30

Publications (1)

Publication Number Publication Date
WO2013149132A1 (en) 2013-10-03

Family

ID=49235012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/034581 WO2013149132A1 (en) 2012-03-30 2013-03-29 System and method for multi-core hardware video encoding and decoding

Country Status (2)

Country Link
US (1) US20130259137A1 (en)
WO (1) WO2013149132A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311111B2 (en) 2008-09-11 2012-11-13 Google Inc. System and method for decoding using parallel processing
US9100657B1 (en) 2011-12-07 2015-08-04 Google Inc. Encoding time management in parallel real-time video encoding
CA3125705C (en) * 2013-04-23 2022-02-15 Ab Initio Technology Llc Controlling tasks performed by a computing system
CN106921862A (en) * 2014-04-22 2017-07-04 联发科技股份有限公司 Multi-core decoder system and video encoding/decoding method
RU2595559C2 (en) * 2014-12-16 2016-08-27 Общество с ограниченной ответственностью "Аби Девелопмент" System and method of using previous frame data for optical character recognition of frames of video materials
US9794574B2 (en) 2016-01-11 2017-10-17 Google Inc. Adaptive tile data size coding for video and image compression
US10542258B2 (en) 2016-01-25 2020-01-21 Google Llc Tile copying for video compression
KR102347598B1 (en) * 2017-10-16 2022-01-05 삼성전자주식회사 Video encoding device and encoder

Citations (2)

Publication number Priority date Publication date Assignee Title
US5719642A (en) * 1996-05-07 1998-02-17 National Science Council Of R.O.C. Full-search block matching motion estimation processor
US20020039386A1 (en) * 2000-07-13 2002-04-04 Tae-Hee Han Block matching processor and method for block matching motion estimation in video compression

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US8416857B2 (en) * 2007-03-29 2013-04-09 James Au Parallel or pipelined macroblock processing
US8175161B1 (en) * 2008-09-12 2012-05-08 Arecont Vision, Llc. System and method for motion estimation
US8855191B2 (en) * 2008-11-24 2014-10-07 Broadcast International, Inc. Parallelization of high-performance video encoding on a single-chip multiprocessor
JP2011041037A (en) * 2009-08-12 2011-02-24 Sony Corp Image processing apparatus and method
EP2618580B1 (en) * 2010-09-16 2018-08-01 Panasonic Intellectual Property Management Co., Ltd. Image decoding device and image encoding device, methods therefor, programs thereof, integrated circuit, and transcoding device
US8331703B2 (en) * 2011-02-18 2012-12-11 Arm Limited Parallel image encoding


Non-Patent Citations (2)

Title
DE VOS L ET AL: "PARAMETERIZABLE VLSI ARCHITECTURES FOR THE FULL-SEARCH BLOCK-MATCHING ALGORITHM", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS, IEEE INC. NEW YORK, US, vol. 36, no. 10, 1 October 1989 (1989-10-01), pages 1309 - 1316, XP000085318, DOI: 10.1109/31.44347 *
TASDIZEN O ET AL: "A high performance reconfigurable Motion Estimation hardware architecture", DESIGN, AUTOMATION&TEST IN EUROPE CONFERENCE&EXHIBITION, 2009. DATE '09, IEEE, PISCATAWAY, NJ, USA, 20 April 2009 (2009-04-20), pages 882 - 885, XP032317611, ISBN: 978-1-4244-3781-8, DOI: 10.1109/DATE.2009.5090787 *

Also Published As

Publication number Publication date
US20130259137A1 (en) 2013-10-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13716666

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13716666

Country of ref document: EP

Kind code of ref document: A1