US20050105621A1 - Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof


Info

Publication number
US20050105621A1
US20050105621A1 (application US10/605,882)
Authority
US
United States
Prior art keywords
interpolation
global motion
macroblock
motion vector
motion compensation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/605,882
Inventor
Chi-cheng Ju
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MediaTek Inc
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2003-11-04
Filing date: 2003-11-04
Publication date
Application filed by MediaTek Inc
Priority to US10/605,882
Assigned to MediaTek Inc. Assignors: JU, CHI-CHENG (assignment of assignors interest; see document for details)
Priority to DE102004021854A
Priority to TW093130409A
Priority to CNB2004100859134A
Publication of US20050105621A1
Priority to US12/652,747
Status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/527 Global motion vector estimation
    • H04N 19/523 Motion estimation or motion compensation with sub-pixel accuracy

Abstract

An interpolation unit receives an incoming video bit stream comprising a plurality of frames including first macroblocks encoded using block-matching motion compensation and second macroblocks encoded using global motion compensation. A translation unit converts global motion parameters included in a current frame of the incoming video bit stream into a global motion vector. The interpolation unit performs luminance and chrominance interpolation operations on each macroblock contained in each frame of the incoming video bit stream. When processing a current macroblock, if the current macroblock is encoded using global motion compensation, the interpolation unit performs the luminance interpolation operations according to the global motion vector at half-pel resolution, and performs the chrominance interpolation operations at quarter-pel resolution. If the current macroblock is encoded using block-matching motion compensation, the interpolation unit performs the luminance and chrominance interpolation operations according to the macroblock motion vector contained in the current macroblock at half-pel resolution.

Description

    BACKGROUND OF INVENTION
  • 1. Field of the Invention
  • The invention relates to digital video, and more particularly, to decoding a coded video bit stream having both macroblocks encoded using block-matching motion compensation and macroblocks encoded using global motion compensation.
  • 2. Description of the Prior Art
  • Full-motion video displays using analog video signals have long been available in the form of television. With recent advances in computer processing capabilities and affordability, full-motion video displays using digital video signals are becoming more widely available. Digital video systems provide significant improvements over conventional analog video systems in creating, modifying, transmitting, storing, and playing full-motion video sequences.
  • However, the amounts of raw digital information included in video sequences are massive. Storage and transmission of these amounts of video information is infeasible with conventional personal computer equipment. Consider, for example, a digitized form of a relatively low resolution VHS image format having a 320×480 pixel resolution. A full-length motion picture of two hours in duration at this resolution corresponds to 100 gigabytes of digital video information. By comparison, conventional CD-ROM disks have capacities of about 0.7 gigabytes, and DVD disks have capacities of up to 8 gigabytes.
  • To address the limitations in storing and transmitting such massive amounts of digital video information, various video compression standards or processes have been established, including MPEG-1, MPEG-2, MPEG-4, and H.26X. These video compression techniques utilize still image compression techniques, referred to as intraframe correlation, of the individual image frames as well as similarities between successive image frames, referred to as interframe correlation, to encode the digital video information and provide a high compression ratio.
  • Block-matching (BM) motion compensation is a technique well known in the prior art for encoding digital video information. If an image sequence shows moving objects, then their motion within the sequence can be used to create a motion vector for a particular block containing the moving object, also referred to as a macroblock. This motion vector can be used to predict where the macroblock will be later in the sequence. Instead of transmitting a new image, the motion vectors for macroblocks containing the moving objects can be sent instead. Block-matching motion compensation greatly reduces the data that must be transmitted for image sequences containing moving objects. However, when the whole image is panning, expanding, contracting, or turning, the motion vectors of all of macroblocks must be transmitted, greatly decreasing the coding efficiency. To solve this problem, global motion compensation (GMC) techniques are well known in the prior art, such as the “sprite” coding techniques used in MPEG-4 (i.e. ISO/IEC 14496-2). These global motion compensation techniques take into account global image changes between a previous frame and the current frame. Global motion parameters associated with each frame are used to specify individual motion vectors for all pixels in each macroblocks encoded using global motion compensation. In this way, only one set of global motion parameters is required for each frame, increasing the encoding efficiency for video sequences having global image changes.
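  • To make the block-matching idea concrete, the following minimal C sketch (not taken from the patent; the function, frame layout, and parameter names are illustrative assumptions) fetches a 16x16 prediction block from a previously decoded frame at the position indicated by a macroblock motion vector:

```c
#include <stdint.h>

/* Fetch a 16x16 prediction block from a previously decoded reference frame
 * at the position pointed to by a macroblock motion vector (integer-pel).
 * Clipping/padding at frame borders is omitted for brevity. */
void fetch_prediction_block(const uint8_t *ref, int stride,
                            int mb_x, int mb_y,   /* macroblock origin in pixels */
                            int mv_x, int mv_y,   /* motion vector, integer-pel  */
                            uint8_t pred[16][16])
{
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            pred[y][x] = ref[(mb_y + mv_y + y) * stride + (mb_x + mv_x + x)];
}
```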
  • FIG. 1 shows a typical video decoder 100 according to the prior art as disclosed in U.S. Pat. No. 6,483,877. The video decoder 100 receives an incoming coded video bit stream 102 that is separated through a demultiplexer 104 into quantized discrete cosine transform (DCT) coefficients 106, macroblock motion vector and global motion parameters 108, and an intraframe/interframe distinction flag 110. The quantized DCT coefficients 106 are decoded into an error image 116 through an inverse quantizer 112 and an inverse DCT processor 114. An output image 118 of an interframe/intraframe switching unit 120 is added to the error image 116 through an adder 122 to form a reconstructed image 124.
  • The interframe/intraframe switching unit 120 switches its output 118 according to the interframe/intraframe coding distinction flag 110. A predicted image synthesizer 126 synthesizes a predicted image 128 that is used for executing the interframe coding. The predicted image synthesizer 126 performs motion compensation operations and fetches prediction blocks from at least one decoded image 130, which is a previously decoded frame stored in a frame memory 128. The predicted image synthesizer 126 performs either block-matching motion compensation or global motion compensation according to the encoding type used for a particular macroblock. In the case of intraframe coding, the interframe/intraframe switching unit 120 outputs the “0” signal 132 and the output of the predicted image synthesizer 126 is not used.
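  • The reconstruction path just described can be summarized by the following C sketch (illustrative only; names such as clip8 and is_intra are assumptions): the error image is added to the predicted image for interframe data, while for intraframe data the switching unit supplies the "0" signal.

```c
#include <stdint.h>

/* Clip a reconstructed sample to the 8-bit range. */
static uint8_t clip8(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

/* Adder 122 with the interframe/intraframe switch 120: for intraframe data the
 * switch supplies the "0" signal 132, so only the error image is kept; for
 * interframe data the predicted image is added to the error image. */
void reconstruct(const int16_t *error, const uint8_t *predicted,
                 int is_intra, uint8_t *out, int n_samples)
{
    for (int i = 0; i < n_samples; i++) {
        int pred = is_intra ? 0 : predicted[i];
        out[i] = clip8(pred + error[i]);
    }
}
```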
  • FIG. 2 shows a more detailed block diagram of the predicted image synthesizer 126 of FIG. 1 according to the prior art. The predicted image synthesizer 126 processes global motion compensation and block matching motion compensation in parallel. The macroblock motion vector and global motion parameters 108 are input to a demultiplexer 202, which provides global motion parameters 204, a macroblock motion vector 206, and a selection signal 208 specifying block matching/global motion compensation to a GMC image synthesizer 210, a BM image synthesizer 212, and a switch 214, respectively. The BM image synthesizer 212 synthesizes the predicted image for blocks that are encoded using block-matching motion compensation, and the GMC image synthesizer 210 synthesizes the predicted image for blocks that are encoded using global motion compensation. The respective predicted image data 216 and 218 are output to the switch 214, which selects one of these signals according to the selection signal 208 received from the demultiplexer 202. The predicted image 128 is then output to the switching unit 120, as shown in FIG. 1.
  • As can be seen from the above description, video decoders supporting both block-matching motion compensation and global motion compensation require the use of two different image synthesizers. A first image synthesizer 212 is used for block-matching motion compensation, and a second image synthesizer 210 is used for global motion compensation. When processing blocks encoded using block-matching motion compensation, the GMC image synthesizer 210 is idle. Likewise, when processing blocks encoded using global motion compensation, the BM image synthesizer 212 is idle. This non-optimal solution of requiring two image synthesizers increases the hardware complexity of the video decoder and results in a higher cost. It would be beneficial to combine the functionality of the GMC synthesizer 210 and the BM synthesizer 212 into an integrated unit.
  • SUMMARY OF INVENTION
  • It is therefore an objective of the invention to provide an apparatus capable of performing motion compensation for both macroblocks encoded with block-matching and macroblocks encoded with global motion compensation, to solve the above-mentioned problems.
  • According to one embodiment of the invention, an apparatus is disclosed for performing motion compensation. The apparatus is capable of decoding an incoming coded video bit stream including a plurality of frames. Each frame may include macroblocks encoded using block-matching motion compensation and/or macroblocks encoded using global motion compensation. The apparatus further includes an interpolation unit. The interpolation unit performs interpolation operations on each macroblock encoded with block-matching or global motion compensation in each frame of the incoming coded video bit stream. When processing a current macroblock, if the current macroblock is encoded using global motion compensation, the interpolation unit performs the interpolation operations according to a global motion vector translated from the global motion parameters.
  • Also according to the present invention, a predicted image synthesizer in a video decoder is disclosed for decoding a video bit stream and generating a predicted image. The video bit stream includes a plurality of frames having first macroblocks encoded using block-matching compensation and second macroblocks encoded using global motion compensation. The video bit stream includes macroblock motion vectors indicating motion vectors of the first macroblocks and global motion parameters associated with the plurality of frames indicating a motion vector of each pixel in the second macroblocks. The predicted image synthesizer comprises a translation unit receiving the global motion parameters, and translating the global motion parameters into a global motion vector which is in a form substantially identical to that of the macroblock motion vector, and an interpolation unit for receiving at least one prediction block in a decoded image 130, which is a previously decoded frame, receiving the global motion vector, performing interpolation operations, and generating the predicted image.
  • These and other objectives of the claimed invention will become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a typical video decoder according to the prior art.
  • FIG. 2 is a block diagram of the predicted image synthesizer of FIG. 1 according to the prior art.
  • FIG. 3 is a block diagram of a predicted image synthesizer according to the present invention.
  • FIG. 4 is a diagram showing half-pel (half pixel) interpolation for luminance and chrominance block-matching compensation performed by the predicted image synthesizer of FIG. 3.
  • FIG. 5 is a matrix showing half-pel (half pixel) interpolation for the diagram of FIG. 4.
  • FIG. 6 is a diagram showing global motion compensation according to the prior art.
  • FIG. 7 is a matrix showing half-pel (half pixel) interpolation for luminance global motion compensation performed by the predicted image synthesizer of FIG. 3.
  • FIG. 8 is a matrix showing quarter-pel (quarter pixel) interpolation for chrominance global motion compensation performed by the predicted image synthesizer of FIG. 3.
  • FIG. 9 is a flowchart describing a method of processing an incoming coded video bit stream including a plurality of frames according to the present invention.
  • DETAILED DESCRIPTION
  • FIG. 3 is a block diagram of a predicted image synthesizer 300 according to the present invention. The predicted image synthesizer 300 can be used in a video decoder such as the video decoder 100 shown in FIG. 1. In FIG. 3, all signals that contain the same information as signals in FIG. 1 are labeled using the same numerical labels as in FIG. 1. The predicted image synthesizer 300 includes a demultiplexer 302, a translation unit 304, a macroblock MV (motion vector) storage unit 306, a global MV (motion vector) storage unit 308, a switching unit 310, and an interpolation unit 312. The demultiplexer 302 receives the macroblock motion vector and the global motion parameters 108 derived from the incoming coded video bit stream. At the beginning of each frame, the global motion parameters 307 are passed to the translation unit 304. The translation unit 304 converts the global motion parameters 307 into a global motion vector for luminance (luminance global motion vector) and a global motion vector for chrominance (chrominance global motion vector). The luminance global motion vector and the chrominance global motion vector can be used during interpolation operations at the macroblock level for all macroblocks encoded using global motion compensation throughout the frame. The luminance global motion vector and the chrominance global motion vector are stored in the global MV storage unit 308. For all macroblocks that are encoded using block-matching motion compensation, at least one macroblock motion vector is included within the macroblock. The macroblock motion vector 305 is contained within the macroblock motion vectors and global motion parameters 108, and is passed to the predicted image synthesizer 300. The macroblock motion vector 305 is then stored in the macroblock MV storage unit 306. According to the MPEG-4 specification, the macroblock motion vector 305 is actually a macroblock motion vector for luminance (luminance macroblock motion vector). A macroblock motion vector for chrominance (chrominance macroblock motion vector) can be obtained by performing calculations on the luminance macroblock motion vector. To save space in the macroblock MV storage unit 306, only the macroblock motion vector 305 (that is, the luminance macroblock motion vector) is stored in the macroblock MV storage unit 306. The chrominance macroblock motion vector is calculated in the interpolation unit 312 in this embodiment. It should be noted that the macroblock MV storage unit 306 may also be replaced by a straight connection to the switching unit 310 in another embodiment of the present invention. The switching unit 310 passes either the macroblock motion vector from the macroblock MV storage unit 306 or the luminance/chrominance global motion vector from the global MV storage unit 308 to the interpolation unit 312, according to the motion compensation type 316.
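  • The data path just described can be sketched in C as follows (an illustrative sketch, not the patent's hardware; the types and names are assumptions): per macroblock, the switching unit forwards either the stored macroblock motion vector or the stored global motion vector to the interpolation unit.

```c
/* Motion-vector selection performed by the switching unit 310: per macroblock,
 * forward either the stored macroblock motion vector (storage 306) or the
 * stored global motion vector (storage 308) to the interpolation unit 312,
 * depending on the motion compensation type 316. */
typedef struct { int x, y; } MotionVector;
typedef enum { MC_BLOCK_MATCHING, MC_GLOBAL } McType;

typedef struct {
    MotionVector macroblock_mv;    /* luminance MV from macroblock MV storage 306   */
    MotionVector global_mv_luma;   /* from global MV storage 308, half-pel units    */
    MotionVector global_mv_chroma; /* from global MV storage 308, quarter-pel units */
} MvStorage;

MotionVector select_luma_mv(const MvStorage *s, McType type)
{
    return (type == MC_GLOBAL) ? s->global_mv_luma : s->macroblock_mv;
}
```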
  • The interpolation unit 312 reads at least one prediction block in a decoded image 130, and the location of the prediction block in the decoded image is determined by the motion vector 314 received from the switching unit 310. The interpolation unit 312 then performs both luminance and chrominance interpolation operations on the corresponding prediction block. The received motion vector 314 may be the macroblock motion vector retrieved from the macroblock MV storage unit 306 or the luminance/chrominance global motion vector retrieved from the global MV storage unit 308. The interpolation unit 312 receives the decoded image 130 and the motion vector 314, performs interpolation operations, and outputs a predicted image 128 that is used for executing the inter-frame coding in a video decoder such as the video decoder 100 shown in FIG. 1. The interpolation unit 312 may further include a buffer 313 for temporarily storing the interpolation results corresponding to the different motion vectors of a macroblock so as to properly perform the interpolation operations for that macroblock. For example, when performing bi-directional interpolation for a block-matching macroblock, the interpolation unit 312 may temporarily store the forward prediction interpolation result in the buffer 313, combine it with the backward prediction interpolation result obtained later, and then obtain the final bi-directional interpolation result.
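  • The bi-directional use of the buffer 313 can be sketched as follows (illustrative; the rounded-average combination shown is the usual MPEG-style formula and is an assumption, since the patent does not quote the combining equation):

```c
#include <stdint.h>

/* Bi-directional prediction with buffer 313: the forward interpolation result is
 * buffered first, then combined with the backward interpolation result. */
void combine_bidirectional(const uint8_t *forward_buf,  /* contents of buffer 313 */
                           const uint8_t *backward,
                           uint8_t *out, int n_samples)
{
    for (int i = 0; i < n_samples; i++)
        out[i] = (uint8_t)((forward_buf[i] + backward[i] + 1) >> 1);
}
```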
  • To further explain the luminance and chrominance interpolation operations performed by the interpolation unit 312, please refer to FIG. 4 and FIG. 5. FIG. 4 is a diagram showing half-pel (half pixel) interpolation for luminance and chrominance block-matching compensation performed by the predicted image synthesizer 300 of FIG. 3. FIG. 5 is a matrix showing the resulting half-pel (half pixel) bilinear interpolation for the diagram of FIG. 4. In FIG. 4 there are four integer pixel positions (A, B, C, D), labeled in FIG. 5 as (IA, IB, IC, and ID), as well as five half-pixel positions, labeled in FIG. 5 as (H1, H2, H3, H4, H5). When performing motion compensation for macroblocks encoded using block-matching motion compensation, the present invention uses the same half-pel (half pixel) bilinear interpolation process of the prior art. To briefly describe the bilinear interpolation calculations, the following formulas show the interpolation calculations for the pixel positions: IA, H1, H2, and H3. (The remaining pixel positions are calculated in the same manner, as is well known in the prior art.)
    IA = IA
    H1 = (IA + IB + 1 - rounding_control)/2
    H2 = (IA + IC + 1 - rounding_control)/2
    H3 = (IA + IB + IC + ID + 2 - rounding_control)/4
    . . . , where the rounding_control parameter is a value of 0 or 1, and is derived from the incoming coded video bit stream.
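  • The half-pel bilinear equations above translate directly into the following C sketch (the function name and the position flags are illustrative; the remaining positions follow the same pattern):

```c
#include <stdint.h>

/* Half-pel bilinear interpolation for one sample, following the equations above:
 * IA is the integer position, H1/H2 the horizontal/vertical half positions, and
 * H3 the diagonal half position. half_x and half_y are 0 or 1. */
uint8_t halfpel_interp(int IA, int IB, int IC, int ID,
                       int half_x, int half_y, int rounding_control)
{
    if (!half_x && !half_y) return (uint8_t)IA;                                     /* IA */
    if ( half_x && !half_y) return (uint8_t)((IA + IB + 1 - rounding_control) / 2); /* H1 */
    if (!half_x &&  half_y) return (uint8_t)((IA + IC + 1 - rounding_control) / 2); /* H2 */
    return (uint8_t)((IA + IB + IC + ID + 2 - rounding_control) / 4);               /* H3 */
}
```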
  • FIG. 6 is a diagram showing global motion compensation according to the prior art. Global motion compensation according to the prior art involves converting the global motion parameters into an individual motion vector for each pixel in each macroblock encoded using global motion compensation. In FIG. 6 there are again four integer pixel positions (Y00, Y01, Y10, and Y11) as well as a non-integer pixel position Y. The non-integer pixel position Y has a vertical distance of (rj/s) and a horizontal distance of (ri/s), where s is specified by sprite_warping_accuracy, as defined in the MPEG-4 (ISO/IEC 14496-2) specification. The present invention takes advantage of the fact that when the no_of_sprite_warping_point parameter in MPEG-4 is set to a value of 0 or a value of 1, the global motion parameters can be converted into a global motion vector having the same value for all pixels in the frame. This means that instead of performing global motion compensation on a per-pixel basis, as is done in the prior art, the present invention can perform global motion compensation on a per-macroblock basis using a single motion vector for each macroblock. In this way, almost the same hardware as is used in the prior art BM image synthesizer 212 can be used to perform both block-matching motion compensation and global motion compensation in the present invention.
  • FIG. 7 shows a diagram illustrating half-pel (half pixel) bilinear interpolation for luminance global motion compensation performed by the interpolation unit 312 of FIG. 3. When the no_of_sprite_warping_point parameter in MPEG-4 is set to a value of 0 or a value of 1, the luminance interpolation operations of FIG. 6 can be reduced to the matrix shown in FIG. 7. The matrix shown in FIG. 7 is equivalent to the matrix shown in FIG. 5, which implies that the luminance and chrominance interpolation operations for block-matching compensation and the luminance interpolation operations for global motion compensation can be performed using the same interpolation unit 312 at a half-pel resolution.
  • FIG. 8 is a diagram showing quarter-pel (quarter pixel) bilinear interpolation for chrominance global motion compensation performed by the interpolation unit 312 of FIG. 3. When the no_of_sprite_warping_point parameter in MPEG-4 is set to a value of 0 or a value of 1, the chrominance interpolation operations of FIG. 6 can be reduced to the matrix shown in FIG. 8. FIG. 8 is simply a quarter-pel matrix which is solved using the same bilinear interpolation process as the half-pel matrix but at twice the resolution. To briefly describe the interpolation calculations at quarter-pel resolution, the following formulas show the interpolation calculations for the pixel positions: IA, Q1, H2, Q4, Q5, Q6, H9, Q10, and H11. (The remaining pixel positions are calculated in the same manner.)
    IA = IA
    Q1 = (3 IA + IB + 2 - rounding_control)/4
    H2 = (IA + IB + 1 - rounding_control)/2
    Q4 = (3 IA + IC + 2 - rounding_control)/4
    Q5 = (9 IA + 3 IB + 3 IC + ID + 8 - rounding_control)/16
    Q6 = (3 IA + 3 IB + IC + ID + 4 - rounding_control)/8
    H9 = (IA + IC + 1 - rounding_control)/2
    Q10 = (3 IA + IB + 3 IC + ID + 4 - rounding_control)/8
    H11 = (IA + IB + IC + ID + 1 - rounding_control)/4
    . . . , where the rounding_control parameter is a value of 0 or 1, and is derived from the incoming coded video bit stream.
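  • The quarter-pel equations above can likewise be written as a small C routine (a sketch; the position labels follow FIG. 8, and the remaining grid positions, which are omitted here, follow the same pattern):

```c
/* Quarter-pel bilinear interpolation for chrominance global motion compensation,
 * implementing the positions listed above exactly as written. 'pos' selects the
 * quarter-pel position label of FIG. 8; rc is the rounding_control bit. */
typedef enum { POS_IA, POS_Q1, POS_H2, POS_Q4, POS_Q5, POS_Q6, POS_H9, POS_Q10, POS_H11 } QPos;

int quarterpel_chroma(QPos pos, int A, int B, int C, int D, int rc)
{
    switch (pos) {
    case POS_IA:  return A;
    case POS_Q1:  return (3*A + B + 2 - rc) / 4;
    case POS_H2:  return (A + B + 1 - rc) / 2;
    case POS_Q4:  return (3*A + C + 2 - rc) / 4;
    case POS_Q5:  return (9*A + 3*B + 3*C + D + 8 - rc) / 16;
    case POS_Q6:  return (3*A + 3*B + C + D + 4 - rc) / 8;
    case POS_H9:  return (A + C + 1 - rc) / 2;
    case POS_Q10: return (3*A + B + 3*C + D + 4 - rc) / 8;
    case POS_H11: return (A + B + C + D + 1 - rc) / 4;
    }
    return A; /* not reached for valid positions */
}
```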
  • It should be noted that the translation unit 304 converts the global motion parameters 307 received for each frame into a luminance global motion vector and a chrominance global motion vector for the frame. The following formulas describe the conversion process implemented by the translation unit 304 to convert the global motion parameters into the luminance/chrominance global motion vectors used during interpolation operations for macroblocks encoded using global motion compensation with no_of_sprite_warping_point equal to 0 and no_of_sprite_warping_point equal to 1, respectively. The luminance global motion vector is a half-pel precision motion vector and the chrominance global motion vector is a quarter-pel precision motion vector. Hence, these macroblocks encoded using global motion compensation can be treated as if they were block-matching macroblocks with a prediction mode of "frame_prediction" and with the following luminance/chrominance global motion vectors for the luminance/chrominance components, respectively.
  • For GMC macroblocks with (no_of_sprite_warping_point==0), (sprite_enable==GMC), and (video_object_layer_shape==rectangle):
    MV_GMC_Y = (MVx_GMC_Y, MVy_GMC_Y) = (0, 0)
    MV_GMC_CbCr = (MVx_GMC_CbCr, MVy_GMC_CbCr) = (0, 0)
  • For GMC macroblocks with (no_of_sprite_warping_point==1), (sprite_enable==GMC), and (video_object_layer_shape==rectangle):
    MV_GMC_Y = (MVx_GMC_Y, MVy_GMC_Y) = (i0', j0') = ((s/2)du[0], (s/2)dv[0])
    MV_GMC_CbCr = (MVx_GMC_CbCr, MVy_GMC_CbCr) = ((i0' >> 1) | (i0' & 1), (j0' >> 1) | (j0' & 1))
    where (i0', j0') = ((s/2)du[0], (s/2)dv[0]), du[0] and dv[0] are the global motion parameters derived from the incoming coded video bit stream, and s is specified by sprite_warping_accuracy, as defined in the MPEG-4 (ISO/IEC 14496-2) specification.
  • It is noted that the above equations apply when (no_of_sprite_warping_point==0 or 1), (sprite_enable==GMC), and (video_object_layer_shape==rectangle). However, the present invention can also be used when sprite_enable and video_object_layer_shape take other values. Those skilled in the art will appreciate that, as long as no_of_sprite_warping_point equals 0 or 1, the invention applies even though the corresponding equations differ from those above.
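  • The conversion performed by the translation unit 304 for these two cases can be sketched in C as follows (the function and struct names are assumptions; the arithmetic simply restates the equations quoted above):

```c
/* Translation unit 304 for the two cases quoted above (no_of_sprite_warping_point
 * equal to 0 or 1, sprite_enable == GMC, video_object_layer_shape == rectangle).
 * du0/dv0 are the global motion parameters du[0]/dv[0]; s is given by
 * sprite_warping_accuracy. */
typedef struct { int x, y; } GlobalMV;

void translate_global_mv(int no_of_sprite_warping_point, int du0, int dv0, int s,
                         GlobalMV *gmv_luma,    /* half-pel precision    */
                         GlobalMV *gmv_chroma)  /* quarter-pel precision */
{
    if (no_of_sprite_warping_point == 0) {
        gmv_luma->x = gmv_luma->y = 0;
        gmv_chroma->x = gmv_chroma->y = 0;
        return;
    }
    /* no_of_sprite_warping_point == 1 */
    int i0 = (s / 2) * du0;   /* i0' */
    int j0 = (s / 2) * dv0;   /* j0' */
    gmv_luma->x = i0;
    gmv_luma->y = j0;
    gmv_chroma->x = (i0 >> 1) | (i0 & 1);   /* as written in the description */
    gmv_chroma->y = (j0 >> 1) | (j0 & 1);
}
```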
  • FIG. 9 shows a flowchart describing a method of processing an incoming coded video bit stream comprising a plurality of frames according to the present invention. Each frame may include a plurality of macroblocks encoded using block-matching motion compensation and/or a plurality of macroblocks encoded using global motion compensation. The flowchart contains the following steps:
  • Step 900: For each frame received in the incoming video stream, convert the global motion parameters associated with the frame into a luminance global motion vector and a chrominance global motion vector. Store the luminance global motion vector and the chrominance global motion vector for later global motion compensation luminance/chrominance interpolation operations. Proceed to step 902.
  • Step 902: When decoding a current macroblock, determine whether the current macroblock is encoded using block-matching motion compensation or global motion compensation. If block-matching motion compensation, proceed to step 904, otherwise if global motion compensation, proceed to step 908.
  • Step 904: Extract the macroblock motion vectors stored in the current macroblock.
  • Step 906: Perform the luminance and chrominance bilinear interpolation operations according to the macroblock motion vectors extracted in step 904. Use a half-pel resolution for both luminance and chrominance. Processing is complete.
  • Step 908: Perform the luminance bilinear interpolation operation according to the luminance global motion vector stored in step 900. Use half-pel resolution and proceed to step 910 when finished.
  • Step 910: Perform the chrominance bilinear interpolation operation according to the chrominance global motion vector stored in step 900. Use quarter-pel resolution and when finished, processing is complete.
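  • The flowchart of FIG. 9 can be summarized by the following C sketch (illustrative; the types, names, and the returned plan structure are assumptions rather than the patent's interfaces):

```c
/* Per-macroblock dispatch of FIG. 9 (steps 902-910): select the motion vector and
 * interpolation resolution for the luminance and chrominance interpolation
 * operations of the current macroblock. */
typedef struct { int x, y; } MV;
typedef enum { MC_BLOCK_MATCHING, MC_GLOBAL } CompensationType;
typedef enum { RES_HALF_PEL, RES_QUARTER_PEL } Resolution;

typedef struct {
    MV luma_mv;   Resolution luma_res;
    MV chroma_mv; Resolution chroma_res;
} InterpPlan;

InterpPlan plan_macroblock(CompensationType type, MV macroblock_mv,
                           MV gmv_luma, MV gmv_chroma)
{
    InterpPlan p;
    if (type == MC_BLOCK_MATCHING) {
        /* Steps 904-906: macroblock MV, half-pel for luminance and chrominance.
         * The chrominance MV is derived from this luminance MV inside the
         * interpolation unit; the derivation is omitted here. */
        p.luma_mv = macroblock_mv;   p.luma_res = RES_HALF_PEL;
        p.chroma_mv = macroblock_mv; p.chroma_res = RES_HALF_PEL;
    } else {
        /* Steps 908-910: stored global MVs, half-pel luminance, quarter-pel chrominance. */
        p.luma_mv = gmv_luma;     p.luma_res = RES_HALF_PEL;
        p.chroma_mv = gmv_chroma; p.chroma_res = RES_QUARTER_PEL;
    }
    return p;
}
```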
  • In contrast to the prior art, the present invention performs bilinear interpolation operations on macroblocks encoded using global motion compensation according to a luminance global motion vector and a chrominance global motion vector so that block-matching motion compensation and global motion compensation can be integrated into a single unit. The luminance/chrominance global motion vectors are converted from a set of global motion parameters transmitted with each frame in the incoming coded video bit stream. For an MPEG-4 compliant coded video bit stream having GMC macroblocks with the no_of_sprite_warping_point parameter set to either 0 or 1, the global motion compensation calculations can be simplified to resemble the interpolation operations normally performed for block-matching. The difference is that for the chrominance interpolation operations for macroblocks encoded using global motion compensation, a quarter-pel resolution is used. For luminance and chrominance interpolation operations for macroblocks encoded using block-matching motion compensation, and for luminance interpolation operations for macroblocks encoded using global motion compensation, a half-pel resolution is used.
  • It should be noted that when a frame includes only macroblocks encoded using block-matching motion compensation, there may be no global motion parameters associated with the frame. In such a case, the present invention does not perform the conversion of the global motion parameters into a luminance global motion vector and a chrominance global motion vector, because there are no global motion parameters associated with the frame.
  • Those skilled in the art will readily observe that numerous modifications and alterations of the device may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims (31)

1. An apparatus for performing motion compensation when decoding an incoming video bit stream including a plurality of frames having first macroblocks encoded using block-matching motion compensation and second macroblocks encoded using global motion compensation, the apparatus comprising:
an interpolation unit for performing interpolation operations on each macroblock contained in each frame of the incoming video stream;
wherein when processing a current macroblock, if the current macroblock is encoded using global motion compensation, the interpolation unit performs the interpolation operations according to a global motion vector on a per-macroblock basis.
2. The apparatus of claim 1, further comprising a translation unit for converting global motion parameters associated with a current frame of the incoming video stream into the global motion vector for use by the interpolation unit.
3. The apparatus of claim 2, wherein when processing the current macroblock, if the current macroblock is encoded using block-matching motion compensation, the interpolation unit performs the interpolation operations according to at least one macroblock motion vector contained in the current macroblock.
4. The apparatus of claim 3, further comprising:
a block-matching motion vector storage unit for storing the macroblock motion vector extracted from each macroblock encoded using block-matching motion compensation;
a global motion vector storage unit for storing the global motion vector output by the translation unit; and
a multiplexer for selecting whether the interpolation unit uses the macroblock motion vector or the global motion vector;
wherein when performing the interpolation operations on macroblocks encoded using block-matching motion compensation, the multiplexer outputs the macroblock motion vector stored in the macroblock motion vector storage unit to the interpolation unit, and when performing the interpolation operations on macroblocks encoded using global motion compensation, the multiplexer outputs the global motion vector stored in the global motion vector storage unit to the interpolation unit.
5. The apparatus of claim 1, wherein the interpolation operations comprise luminance and chrominance interpolation operations.
6. The apparatus of claim 5, wherein when performing the luminance interpolation operations on macroblocks encoded using block-matching motion compensation, the interpolation unit uses half-pel precision.
7. The apparatus of claim 5, wherein when performing the chrominance interpolation operations on macroblocks encoded using block-matching motion compensation, the interpolation unit uses half-pel precision.
8. The apparatus of claim 5, wherein when performing the luminance interpolation operations on macroblocks encoded using global motion compensation, the interpolation unit uses half-pel precision.
9. The apparatus of claim 5, wherein when performing the chrominance interpolation operations on macroblocks encoded using global motion compensation, the interpolation unit uses quarter-pel precision.
10. The apparatus of claim 1, wherein the video decoder is capable of processing an incoming MPEG-4 video stream.
11. The apparatus of claim 10, wherein the video decoder is capable of processing an incoming MPEG-4 video stream having a no_of_sprite_warping_point parameter set to either 0 or 1.
12. The apparatus of claim 1, wherein when performing the interpolation operations, the interpolation unit uses a bilinear interpolation process.
13. A method of processing an incoming video bit stream comprising a plurality of frames, the plurality of frames including first macroblocks encoded using block-matching motion compensation and second macroblocks encoded using global motion compensation, the video bit stream including macroblock motion vectors indicating motion vectors of the first macroblocks and global motion parameters associated with the plurality of frames indicating a motion vector of each pixel in the second macroblocks, the method comprising:
if a current macroblock is encoded using global motion compensation, performing the interpolation operations according to a global motion vector which is derived from the video bit stream and is in a form substantially identical to that of the macroblock motion vector.
14. The method of claim 13, further comprising converting global motion parameters associated with a current frame of the incoming video stream into the global motion vector.
15. The method of claim 13, further comprising if the current macroblock is encoded using block-matching motion compensation, performing the interpolation operations according to a macroblock motion vector contained in the current macroblock.
16. The method of claim 13, wherein the interpolation operations comprise luminance and chrominance interpolation operations.
17. The method of claim 16, wherein when performing the luminance interpolation operations on macroblocks encoded using block-matching motion compensation, using half-pel precision.
18. The method of claim 16, wherein when performing the chrominance interpolation operations on macroblocks encoded using block-matching motion compensation, using half-pel precision.
19. The method of claim 16, wherein when performing the luminance interpolation operations on macroblocks encoded using global motion compensation, using half-pel precision.
20. The method of claim 16, wherein when performing the chrominance interpolation operations on macroblocks encoded using global motion compensation, using quarter-pel precision.
21. The method of claim 13, wherein the method is capable of processing an incoming MPEG-4 video stream.
22. The method of claim 13, wherein the method is capable of processing an incoming MPEG-4 video stream having a no_of_sprite_warping_point parameter set to either 0 or 1.
23. The method of claim 13, wherein when performing the interpolation operations, using a bilinear interpolation process.
24. A predicted image synthesizer in a video decoder for decoding a video bit stream and generating a predicted image, the video bit stream including a plurality of frames having first macroblocks encoded using block-matching compensation and second macroblocks encoded using global motion compensation, the video bit stream including macroblock motion vectors indicating motion vectors of the first macroblocks and global motion parameters associated with the plurality of frames indicating a motion vector of each pixel in the second macroblocks, the predicted image synthesizer comprising:
a translation unit receiving the global motion parameters and translating the global motion parameters into a global motion vector which is in a form substantially identical to that of the macroblock motion vector; and an interpolation unit for receiving a decoded image which is a previously decoded frame, receiving the global motion vector, performing interpolation operations, and generating the predicted image.
25. The predicted image synthesizer of claim 24, further comprising a demultiplexer receiving the macroblock motion vectors and the global motion parameters and respectively outputting the macroblock motion vectors and the global motion parameters, wherein the global motion parameters are sent to the translation unit and translated into the global motion vector, which is in a form substantially identical to that of the macroblock motion vector, and the interpolation unit selectively receives the macroblock motion vector or the global motion vector to perform the interpolation operations.
26. The predicted image synthesizer of claim 25, wherein the interpolation unit receives the global motion vector when a current macroblock is encoded using global motion compensation.
27. The predicted image synthesizer of claim 25, wherein the interpolation operations include a luminance interpolation operation and a chrominance interpolation operation, and the interpolation unit uses a first resolution to perform the luminance interpolation operation and a second resolution to perform the chrominance interpolation operation.
28. The predicted image synthesizer of claim 27, wherein the first resolution is a half-pel resolution, and the second resolution is a quarter-pel resolution.
29. The predicted image synthesizer of claim 25, wherein the interpolation unit receives the macroblock motion vector when a current macroblock is encoded using block-matching motion compensation.
30. The predicted image synthesizer of claim 29, wherein the interpolation operations include a luminance interpolation operation and a chrominance interpolation operation, and the interpolation unit uses a half-pel resolution to perform both the luminance interpolation operation and the chrominance interpolation operation.
31. The predicted image synthesizer of claim 30, wherein when performing the interpolation operations, the interpolation unit uses a bilinear interpolation process.
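The claims above are purely structural; the sketches that follow are editorial illustrations only and do not reproduce the specification. Claims 13-14 and 24-25 hinge on translating the frame-level global motion parameters into a global motion vector that has the same form as a macroblock motion vector. For an MPEG-4 stream whose no_of_sprite_warping_point parameter is 0 or 1 (claims 11 and 22), the global motion is purely translational, so the translation unit could plausibly reduce to rounding the single warping-point displacement to half-pel units. A minimal C sketch, in which the type name, function name, input units, and rounding are all assumptions rather than the patent's implementation:

```c
/* Motion vector in half-pel units -- assumed to be the common form
 * shared by decoded macroblock motion vectors and the translated
 * global motion vector.  The type name is hypothetical. */
typedef struct { int x, y; } mv_t;

/* Hypothetical translation-unit behaviour for the purely translational
 * case (no_of_sprite_warping_point <= 1).  du and dv are assumed to be
 * the decoded warping-point displacements in 1/s-pel units, with s
 * derived from sprite_warping_accuracy; the exact scaling and rounding
 * rules in MPEG-4 and in the patent may differ from this sketch. */
mv_t translate_global_motion(int du, int dv, int s)
{
    mv_t gmv;
    /* Round the 1/s-pel displacement to the nearest half-pel position. */
    gmv.x = (2 * du + (du >= 0 ? s / 2 : -(s / 2))) / s;
    gmv.y = (2 * dv + (dv >= 0 ? s / 2 : -(s / 2))) / s;
    return gmv;
}
```

Because the result is expressed in the same half-pel form as a decoded macroblock motion vector, the downstream interpolation unit need not know which compensation mode produced it.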
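Claims 13-22 and 26-30 then describe the per-macroblock selection between the decoded macroblock motion vector and the translated global motion vector, together with the precision split: half-pel luminance interpolation in both modes, half-pel chrominance interpolation for block-matched macroblocks, and quarter-pel chrominance interpolation for globally compensated macroblocks. A control-flow sketch of one possible shared datapath follows; the types and the helpers translate_global_motion and interpolate_block are hypothetical, and the derivation of the chrominance displacement from the luminance vector is left to the helper:

```c
typedef struct { int x, y; } mv_t;                             /* half-pel units */
typedef struct { unsigned char *y, *cb, *cr; int stride; } frame_t;
typedef struct { int uses_gmc; mv_t mb_mv; int mb_x, mb_y; } mb_t;

/* Assumed helpers, not identifiers from the specification. */
mv_t translate_global_motion(int du, int dv, int s);
void interpolate_block(const unsigned char *ref, unsigned char *pred,
                       int stride, int mb_x, int mb_y, int size,
                       mv_t mv, int denom);

/* Sketch of a shared datapath: one interpolation unit serves both
 * block-matching and global motion compensation. */
void compensate_macroblock(const mb_t *mb, const frame_t *ref, frame_t *pred,
                           int gmc_du, int gmc_dv, int gmc_s)
{
    /* Select the vector: the macroblock motion vector from the bit stream,
       or the global motion vector translated into the same half-pel form. */
    mv_t mv = mb->uses_gmc ? translate_global_motion(gmc_du, gmc_dv, gmc_s)
                           : mb->mb_mv;

    /* Luminance: half-pel precision in both compensation modes. */
    interpolate_block(ref->y, pred->y, ref->stride,
                      mb->mb_x, mb->mb_y, 16, mv, 2);

    /* Chrominance: half-pel for block matching, quarter-pel for GMC;
       scaling of the luminance vector to the chrominance grid is
       assumed to happen inside interpolate_block. */
    int cdenom = mb->uses_gmc ? 4 : 2;
    interpolate_block(ref->cb, pred->cb, ref->stride / 2,
                      mb->mb_x / 2, mb->mb_y / 2, 8, mv, cdenom);
    interpolate_block(ref->cr, pred->cr, ref->stride / 2,
                      mb->mb_x / 2, mb->mb_y / 2, 8, mv, cdenom);
}
```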
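Finally, claims 12, 23, and 31 specify a bilinear interpolation process. A generic bilinear kernel parameterised by the sub-pel denominator (2 for half-pel, 4 for quarter-pel) might look like the sketch below; the rounding term is a common choice and is not necessarily the rounding (for example, MPEG-4 rounding control) used by the patent:

```c
/* Bilinear interpolation of one sample from a reference plane.
 * (x, y) is the integer sample position; frac_x and frac_y are the
 * fractional parts of the motion vector in 1/denom-pel units
 * (0 <= frac < denom).  The caller is assumed to keep (x+1, y+1)
 * inside the padded reference plane. */
unsigned char bilinear_sample(const unsigned char *ref, int stride,
                              int x, int y, int frac_x, int frac_y,
                              int denom)
{
    int a = ref[y * stride + x];
    int b = ref[y * stride + x + 1];
    int c = ref[(y + 1) * stride + x];
    int d = ref[(y + 1) * stride + x + 1];
    int wx = denom - frac_x, wy = denom - frac_y;

    int acc = a * wx * wy + b * frac_x * wy
            + c * wx * frac_y + d * frac_x * frac_y;

    /* Round to nearest; MPEG-4's rounding_control handling is omitted. */
    return (unsigned char)((acc + denom * denom / 2) / (denom * denom));
}
```

Sharing one such kernel across both compensation modes, and across luminance and chrominance, is what allows a single interpolation unit to serve the whole predicted image synthesizer.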
US10/605,882 2003-11-04 2003-11-04 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof Abandoned US20050105621A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US10/605,882 US20050105621A1 (en) 2003-11-04 2003-11-04 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof
DE102004021854A DE102004021854A1 (en) 2003-11-04 2004-05-04 Device for both block-matching motion compensation and global motion compensation, and method therefor
TW093130409A TWI248313B (en) 2003-11-04 2004-10-07 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof
CNB2004100859134A CN1290342C (en) 2003-11-04 2004-10-25 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof
US12/652,747 US9332270B2 (en) 2003-11-04 2010-01-06 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/605,882 US20050105621A1 (en) 2003-11-04 2003-11-04 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/652,747 Continuation US9332270B2 (en) 2003-11-04 2010-01-06 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof

Publications (1)

Publication Number Publication Date
US20050105621A1 true US20050105621A1 (en) 2005-05-19

Family

ID=34573095

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/605,882 Abandoned US20050105621A1 (en) 2003-11-04 2003-11-04 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof
US12/652,747 Active 2027-12-10 US9332270B2 (en) 2003-11-04 2010-01-06 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/652,747 Active 2027-12-10 US9332270B2 (en) 2003-11-04 2010-01-06 Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof

Country Status (4)

Country Link
US (2) US20050105621A1 (en)
CN (1) CN1290342C (en)
DE (1) DE102004021854A1 (en)
TW (1) TWI248313B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI444047B (en) * 2006-06-16 2014-07-01 Via Tech Inc Deblockings filter for video decoding, video decoders and graphic processing units
TWI399094B (en) * 2009-06-30 2013-06-11 Silicon Integrated Sys Corp Device and method for adaptive blending motion compensation interpolation in frame rate up-conversion
US20110200108A1 (en) * 2010-02-18 2011-08-18 Qualcomm Incorporated Chrominance high precision motion filtering for motion interpolation
US9237355B2 (en) * 2010-02-19 2016-01-12 Qualcomm Incorporated Adaptive motion resolution for video coding
US9055305B2 (en) * 2011-01-09 2015-06-09 Mediatek Inc. Apparatus and method of sample adaptive offset for video coding
US10327008B2 (en) 2010-10-13 2019-06-18 Qualcomm Incorporated Adaptive motion vector resolution signaling for video coding
CN105684409B (en) 2013-10-25 2019-08-13 微软技术许可有限责任公司 Each piece is indicated using hashed value in video and image coding and decoding
CN105684441B (en) 2013-10-25 2018-09-21 微软技术许可有限责任公司 The Block- matching based on hash in video and image coding
KR102185245B1 (en) 2014-03-04 2020-12-01 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Hash table construction and availability checking for hash-based block matching
US10368092B2 (en) 2014-03-04 2019-07-30 Microsoft Technology Licensing, Llc Encoder-side decisions for block flipping and skip mode in intra block copy prediction
WO2015196322A1 (en) 2014-06-23 2015-12-30 Microsoft Technology Licensing, Llc Encoder decisions based on results of hash-based block matching
CN115665423A (en) 2014-09-30 2023-01-31 微软技术许可有限责任公司 Hash-based encoder decisions for video encoding
US10136155B2 (en) 2016-07-27 2018-11-20 Cisco Technology, Inc. Motion compensation using a patchwork motion field
US10390039B2 (en) 2016-08-31 2019-08-20 Microsoft Technology Licensing, Llc Motion estimation for screen remoting scenarios
US11095877B2 (en) 2016-11-30 2021-08-17 Microsoft Technology Licensing, Llc Local hash-based motion estimation for screen remoting scenarios
US11202085B1 (en) 2020-06-12 2021-12-14 Microsoft Technology Licensing, Llc Low-cost hash table construction and hash-based block matching for variable-size blocks
CN114915791B (en) * 2021-02-08 2023-10-20 荣耀终端有限公司 Point cloud sequence encoding and decoding method and device based on two-dimensional regularized plane projection

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2231225B (en) * 1989-04-27 1993-10-20 Sony Corp Motion dependent video signal processing
KR950014862B1 (en) 1992-02-08 1995-12-16 삼성전자주식회사 Motion estimation method and apparatus
DE69817460T2 (en) 1997-06-09 2004-06-09 Hitachi, Ltd. Image sequence decoding method
US7206346B2 (en) * 1997-06-25 2007-04-17 Nippon Telegraph And Telephone Corporation Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs
US7050500B2 (en) 2001-08-23 2006-05-23 Sharp Laboratories Of America, Inc. Method and apparatus for motion vector coding with global motion parameters
US20030123738A1 (en) 2001-11-30 2003-07-03 Per Frojdh Global motion compensation for video pictures
US7602848B2 (en) * 2002-03-26 2009-10-13 General Instrument Corporation Methods and apparatus for efficient global motion compensation encoding and associated decoding

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5473379A (en) * 1993-11-04 1995-12-05 At&T Corp. Method and apparatus for improving motion compensation in digital video coding
US6483877B2 (en) * 1996-03-18 2002-11-19 Hitachi, Ltd. Method of coding and decoding image
US6008852A (en) * 1996-03-18 1999-12-28 Hitachi, Ltd. Video coder with global motion compensation
US6256343B1 (en) * 1996-10-30 2001-07-03 Hitachi, Ltd. Method and apparatus for image coding
US20030202595A1 (en) * 1996-10-30 2003-10-30 Yoshinori Suzuki Method and apparatus for image coding
US6775326B2 (en) * 1997-02-13 2004-08-10 Mitsubishi Denki Kabushiki Kaisha Moving image estimating system
US7006571B1 (en) * 1997-06-03 2006-02-28 Hitachi, Ltd. Method of synthesizing interframe predicted image, and image coding and decoding method and device therefore
US20010050957A1 (en) * 1997-06-09 2001-12-13 Yuichiro Nakaya Image decoding method
US6385245B1 (en) * 1997-09-23 2002-05-07 Us Philips Corporation Motion estimation and motion-compensated interpolation
US20040223550A1 (en) * 2001-06-06 2004-11-11 Norihisa Hagiwara Decoding apparatus, decoding method, lookup table, and decoding program
US20030202594A1 (en) * 2002-03-15 2003-10-30 Nokia Corporation Method for coding motion in a video sequence
US20030202607A1 (en) * 2002-04-10 2003-10-30 Microsoft Corporation Sub-pixel interpolation in motion estimation and compensation
US20050013497A1 (en) * 2003-07-18 2005-01-20 Microsoft Corporation Intraframe and interframe interlace coding and decoding

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080267289A1 (en) * 2006-01-11 2008-10-30 Huawei Technologies Co., Ltd. Method And Device For Performing Interpolation In Scalable Video Coding
US20090024666A1 (en) * 2006-02-10 2009-01-22 Koninklijke Philips Electronics N.V. Method and apparatus for generating metadata
WO2009044356A2 (en) * 2007-10-05 2009-04-09 Nokia Corporation Video coding with pixel-aligned directional adaptive interpolation filters
WO2009044356A3 (en) * 2007-10-05 2009-06-04 Nokia Corp Video coding with pixel-aligned directional adaptive interpolation filters
US20100296587A1 (en) * 2007-10-05 2010-11-25 Nokia Corporation Video coding with pixel-aligned directional adaptive interpolation filters
US20090097547A1 (en) * 2007-10-14 2009-04-16 Nokia Corporation Fixed-Point Implementation of an Adaptive Image Filter with High Coding Efficiency
US8416861B2 (en) 2007-10-14 2013-04-09 Nokia Corporation Fixed-point implementation of an adaptive image filter with high coding efficiency
US20100208814A1 (en) * 2007-10-15 2010-08-19 Huawei Technologies Co., Ltd. Inter-frame prediction coding method and device
US20110211642A1 (en) * 2008-11-11 2011-09-01 Samsung Electronics Co., Ltd. Moving picture encoding/decoding apparatus and method for processing of moving picture divided in units of slices
US9432687B2 (en) 2008-11-11 2016-08-30 Samsung Electronics Co., Ltd. Moving picture encoding/decoding apparatus and method for processing of moving picture divided in units of slices
US9042456B2 (en) * 2008-11-11 2015-05-26 Samsung Electronics Co., Ltd. Moving picture encoding/decoding apparatus and method for processing of moving picture divided in units of slices
US20110142135A1 (en) * 2009-12-14 2011-06-16 Madhukar Budagavi Adaptive Use of Quarter-Pel Motion Compensation
US20150071352A1 (en) * 2010-04-09 2015-03-12 Lg Electronics Inc. Method and apparatus for processing video data
US9426472B2 (en) * 2010-04-09 2016-08-23 Lg Electronics Inc. Method and apparatus for processing video data
US10841612B2 (en) 2010-04-09 2020-11-17 Lg Electronics Inc. Method and apparatus for processing video data
US11695954B2 (en) * 2010-04-09 2023-07-04 Lg Electronics Inc. Method and apparatus for processing video data
US9918106B2 (en) 2010-04-09 2018-03-13 Lg Electronics Inc. Method and apparatus for processing video data
US10321156B2 (en) 2010-04-09 2019-06-11 Lg Electronics Inc. Method and apparatus for processing video data
US20220060749A1 (en) * 2010-04-09 2022-02-24 Lg Electronics Inc. Method and apparatus for processing video data
US11197026B2 (en) 2010-04-09 2021-12-07 Lg Electronics Inc. Method and apparatus for processing video data
US9083974B2 (en) 2010-05-17 2015-07-14 Lg Electronics Inc. Intra prediction modes
US10404996B1 (en) * 2015-10-13 2019-09-03 Marvell International Ltd. Systems and methods for using multiple frames to adjust local and global motion in an image
US11082713B2 (en) 2015-11-20 2021-08-03 Mediatek Inc. Method and apparatus for global motion compensation in video coding system
WO2017087751A1 (en) * 2015-11-20 2017-05-26 Mediatek Inc. Method and apparatus for global motion compensation in video coding system
US20190335197A1 (en) * 2016-11-22 2019-10-31 Electronics And Telecommunications Research Institute Image encoding/decoding method and device, and recording medium having bitstream stored thereon

Also Published As

Publication number Publication date
CN1615025A (en) 2005-05-11
US20100104020A1 (en) 2010-04-29
US9332270B2 (en) 2016-05-03
TW200516993A (en) 2005-05-16
DE102004021854A1 (en) 2005-06-09
CN1290342C (en) 2006-12-13
TWI248313B (en) 2006-01-21

Similar Documents

Publication Publication Date Title
US9332270B2 (en) Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof
US9083985B2 (en) Motion image encoding apparatus, motion image decoding apparatus, motion image encoding method, motion image decoding method, motion image encoding program, and motion image decoding program
US6625215B1 (en) Methods and apparatus for context-based inter/intra coding mode selection
WO2017036417A1 (en) Method and apparatus of adaptive inter prediction in video coding
US20040252901A1 (en) Spatial scalable compression
JPH11275592A (en) Moving image code stream converter and its method
US9729869B2 (en) Adaptive partition subset selection module and method for use therewith
JP2006279573A (en) Encoder and encoding method, and decoder and decoding method
EP1449384A2 (en) Reduced-complexity video decoding using larger pixel-grid motion compensation
JP2006518568A (en) Video encoding
JP2007524309A (en) Video decoding method
US20050265444A1 (en) Moving image encoding/decoding apparatus and method
JP2001508632A (en) Motion compensated prediction image coding and decoding
US5761423A (en) Scalable picture storage access architecture for video decoding
US9420308B2 (en) Scaled motion search section with parallel processing and method for use therewith
US20100246682A1 (en) Scaled motion search section with downscaling and method for use therewith
KR19980033415A (en) Apparatus and method for coding / encoding moving images and storage media for storing moving images
JP5313223B2 (en) Moving picture decoding apparatus and moving picture encoding apparatus
KR100900058B1 (en) Method and Circuit Operation for Motion Estimation of Deverse Multimedia Codec
KR100757832B1 (en) Method for compressing moving picture using 1/4 pixel motion vector
KR100757830B1 (en) Method for compressing moving picture using 1/4 pixel motion vector
EP1790166A2 (en) A method and apparatus for motion estimation
US20120002720A1 (en) Video encoder with video decoder reuse and method for use therewith
US20120002719A1 (en) Video encoder with non-syntax reuse and method for use therewith
JPH07203460A (en) Locus display device

Legal Events

Date Code Title Description
AS Assignment

Owner name: MEDIATEK INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JU, CHI-CHENG;REEL/FRAME:014098/0223

Effective date: 20031022

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION