US20040264572A1 - Motion prediction compensating device and its method - Google Patents

Motion prediction compensating device and its method

Info

Publication number
US20040264572A1
US20040264572A1 (application US10/832,085)
Authority
US
United States
Prior art keywords
search range
motion
prediction
motion vector
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/832,085
Other versions
US7746930B2 (en
Inventor
Kazushi Sato
Toshiharu Tsuchiya
Yoichi Yagasaki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Assigned to SONY CORPORATION reassignment SONY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SATO, KAZUSHI, TSUCHIYA, TOSHIHARU, YAGASAKI, YOICHI
Publication of US20040264572A1 publication Critical patent/US20040264572A1/en
Application granted granted Critical
Publication of US7746930B2 publication Critical patent/US7746930B2/en
Expired - Fee Related
Adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/523: Motion estimation or motion compensation with sub-pixel accuracy
    • H04N 19/533: Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
    • H04N 19/56: Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search

Definitions

  • the present invention relates to a motion prediction compensating device and a motion prediction compensating method, and is suitably applied to a coding device conforming to a predetermined coding method.
  • an image coding device applies a coding process conforming to a predetermined image coding method to dynamic image data supplied from the outside, thereby generating coded data in which the data amount of the dynamic image data is reduced.
  • such image coding methods include the image coding method called MPEG, standardized for coding general-purpose images by the ISO/IEC Moving Picture Experts Group (MPEG), and the image coding methods called H.26x (H.261, H.263, . . . ), standardized for coding television conference images by the ITU-T, which are well known.
  • more recently, an image coding method called JVT (Joint Model of Enhanced-Compression Video Coding), standardized jointly by these two groups, has also been proposed.
  • in the JVT coding method, a process for predicting the motion amount of an object frame in the consecutive frames (dynamic images), employing a reference frame in the future or past of the object frame (hereinafter referred to as a motion prediction compensation process), is performed by searching for the motion vector of the pixel block of prediction object (hereinafter referred to as a motion compensation block) at each of four block sizes, including 16×16 pixels, 8×16 pixels, 16×8 pixels and 8×8 pixels, as shown in FIG. 1.
  • the motion vectors are provided independently for the motion compensation blocks of the four block sizes.
  • for the 8×8 pixel block size, the motion vector is further searched at each of four block sizes, including 8×8 pixels, 4×8 pixels, 8×4 pixels and 4×4 pixels, whereby the motion vectors are provided independently for the motion compensation blocks of these four block sizes.
  • in this way, the motion vector is searched while sequentially changing the motion compensation block over the plural block sizes, whereby a maximum of 16 motion vectors are provided (e.g., refer to Non-Patent Document 1: DRAFT ISO/IEC 14496-10: 2002(E)).
  • the prediction compensation process is performed for the motion compensation block within the object frame OF employing a plurality of reference frames SF2 and SF5, or for the motion compensation blocks at different positions within the object frame OF employing separate reference frames SF2 and SF4, in which the plurality of reference frames SF1 to SF5 are called a multi reference frame, as shown in FIG. 2 (e.g., refer to Non-Patent Document 1: DRAFT ISO/IEC 14496-10: 2002(E))
  • the motion vector is searched employing the plurality of reference frames while sequentially changing the motion compensation block at plural block sizes, whereby the processing load in the motion prediction compensation process is increased over the already standardized coding method.
  • the coding device conforming to an already standardized coding method typically has its greatest processing load in the motion prediction compensation process of the coding process.
  • in view of the foregoing, an object of this invention is to provide a motion prediction compensating device and method in which the processing efficiency of the overall coding process is enhanced.
  • a motion prediction compensating device comprising address detecting means for detecting the address of an object pixel block serving as a pixel block of prediction object among plural pixel blocks, search range deciding means for deciding a first search range or a second search range narrower than the first search range as the search range of a motion vector for the object pixel block in the reference frame in accordance with the address detected by the address detecting means, and motion vector searching means for searching for the motion vector from the search range decided by the search range deciding means around a predictor of the motion vector generated based on the surrounding pixel blocks adjacent to the object pixel block.
  • this invention provides a motion prediction compensating method comprising a first step of detecting the address of an object pixel block serving as a pixel block of prediction object among the plural pixel blocks, a second step of deciding a first search range or a second search range narrower than the first search range as the search range of a motion vector for the object pixel block in the reference frame in accordance with the address detected at the first step, and a third step of searching for the motion vector within the search range decided at the second step, around a predictor of the motion vector generated based on the surrounding pixel blocks adjacent to the object pixel block.
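The claimed three-step method can be illustrated with a minimal sketch. All identifiers below (`predict_and_search`, `decide_range`, `cost`) are hypothetical stand-ins for the claimed means, not names from the patent:

```python
# Minimal sketch of the claimed method; an illustrative assumption,
# not the patent's actual implementation.

def predict_and_search(mb_addr, predictor, decide_range, cost):
    """First step: the address `mb_addr` of the object pixel block is given.
    Second step: `decide_range` maps that address to a search range.
    Third step: search around `predictor` within the decided range."""
    sr = decide_range(*mb_addr)          # first or second search range
    px, py = predictor                   # search center from neighbouring blocks
    best_cost, best_mv = None, predictor
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            c = cost(px + dx, py + dy)   # block-matching cost at this candidate
            if best_cost is None or c < best_cost:
                best_cost, best_mv = c, (px + dx, py + dy)
    return best_mv
```

For example, `predict_and_search((0, 1), (0, 0), lambda x, y: 2, lambda x, y: abs(x - 1) + abs(y))` returns `(1, 0)`, the candidate with the lowest cost inside the decided range.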
  • FIG. 1 is a diagrammatic view for explaining a motion compensation block
  • FIG. 2 is a diagrammatic view for explaining a multi reference frame
  • FIG. 3 is a diagrammatic view for explaining the motion compensation at 1/4 pixel precision for a brightness signal component
  • FIG. 4 is a diagrammatic view for explaining the motion compensation at 1/4 pixel precision for a color difference signal component
  • FIG. 5 is a diagrammatic view for explaining the generation of a predictor for the motion vector
  • FIG. 6 is a diagrammatic view for explaining a search sequence of the motion vector
  • FIG. 7 is a diagrammatic view for explaining a motion search at fractional pixel precision
  • FIG. 8 is a block diagram showing the configuration of an image coding device
  • FIG. 9 is a block diagram showing the configuration of a motion prediction compensation processing portion
  • FIG. 10 is a diagrammatic view for explaining the detection of macro block address
  • FIG. 11 is a diagrammatic view for explaining the decision of motion search range.
  • FIG. 12 is a diagrammatic view for explaining the decision of motion search range according to another embodiment of the invention.
  • in the JVT coding method, a motion prediction compensation process is performed while sequentially changing the motion compensation block among block sizes less than or equal to the size of the macro block, as described above and shown in FIG. 1.
  • this motion prediction compensation process is largely classified into a process for detecting the motion vector for the motion compensation block (hereinafter referred to as a motion prediction process) and a process for shifting the pixels of the motion compensation block in accordance with the detected motion vector (hereinafter referred to as a motion compensation process).
  • this motion compensation process is performed at 1/4 pixel precision or 1/8 pixel precision.
  • first, the motion compensation process at 1/4 pixel precision for a brightness signal component will be described below.
  • in FIG. 3, the hatched blocks represent integer pixels
  • the unhatched blocks represent 1/2 pixels.
  • an FIR filter is employed to generate the pixel values of the 1/2 pixels, the six tap filter coefficients (in the JVT draft, {1, -5, 20, 20, -5, 1}) being defined according to the following Formula.
  • the pixel values b and h of the 1/2 pixels are calculated by summing up the neighbouring integer pixels weighted with the filter coefficients of Formula (1), according to the following Formula,
  • where Clip1 denotes clipping to the range [0, 255].
  • the pixel value j is calculated by generating the intermediate values aa, bb, cc, dd, ff, gg and hh in the same way as b and h, and summing them up according to the following formula
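The half-pixel filtering described above can be sketched as follows. The six-tap coefficients {1, -5, 20, 20, -5, 1} and the rounding term are those of the JVT draft, but the function names are illustrative:

```python
def clip1(x):
    """Clip1: constrain an interpolated value to the range [0, 255]."""
    return max(0, min(255, x))

def half_pel(p0, p1, p2, p3, p4, p5):
    """1/2-pixel value from six consecutive integer pixels: apply the
    six-tap FIR filter {1, -5, 20, 20, -5, 1}, add the rounding term,
    shift down by 5 and clip (as b and h are formed from E..J)."""
    acc = p0 - 5 * p1 + 20 * p2 + 20 * p3 - 5 * p4 + p5
    return clip1((acc + 16) >> 5)
```

The value j is obtained by running the same filter over intermediate half-pel values (aa, bb, ..., hh) with a correspondingly larger rounding term and shift.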
  • the pixel values a, c, d, n, f, i, k and q are calculated through linear interpolation between the pixel values of the integer pixels and those of the 1/2 pixels, according to the following Formula.
  • the pixel values e, g, p and r are calculated through linear interpolation employing the pixel values of the 1/2 pixels, according to the following Formula
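The quarter-pixel values are then simple rounded averages; a one-line sketch (the name `quarter_pel` is illustrative):

```python
def quarter_pel(u, v):
    """1/4-pixel value as the rounded average of two neighbouring pixel
    values (integer and/or 1/2 pixel), i.e. (u + v + 1) >> 1."""
    return (u + v + 1) >> 1
```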
  • the motion vector information generated through this motion prediction process is represented as the difference between the motion vector detected for the motion compensation block of prediction object and a predictor based on the motion vectors already detected for the surrounding motion compensation blocks adjacent to that block.
  • referring to FIG. 5, a method for generating the predictor of the motion vector for the motion compensation block E will be described below.
  • note that the letters denoted in FIG. 5 have no relevance to the letters denoted in FIG. 3.
  • the predictor is generated based on the value of motion vector regarding the motion compensation block D and the reference frame index.
  • when an adjacent motion compensation block is intra coded or does not exist within the picture or slice, the value of its motion vector is taken as zero, and it is regarded as referring to a reference frame different from that of the motion compensation block E.
  • when only one of the motion compensation blocks A, B and C refers to the same reference frame as the motion compensation block E, the motion vector of that motion compensation block is generated as the predictor.
  • otherwise, the median of the motion vectors for the motion compensation blocks A, B and C is generated as the predictor.
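The median rule above can be sketched as a component-wise median over the three neighbouring motion vectors (the function name is illustrative):

```python
def median_predictor(mv_a, mv_b, mv_c):
    """Predictor as the component-wise median of the motion vectors of
    the surrounding motion compensation blocks A, B and C."""
    def med(x, y, z):
        # median of three scalars
        return sorted((x, y, z))[1]
    return (med(mv_a[0], mv_b[0], mv_c[0]),
            med(mv_a[1], mv_b[1], mv_c[1]))
```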
  • referring to FIG. 6, the motion search at integer pixel precision will first be described for the motion vector searching method defined in the JVT coding method.
  • in FIG. 6, "0" to "24" represent integer pixels, and their numbers indicate the searching sequence of the motion search.
  • pixel 0 indicates the center of the motion vector search.
  • the predictor generated by the method described above and shown in FIG. 5 is located at the center of the search, and the motion vector is searched for spirally around this center over a range of ±Search_Range.
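The spiral ordering of FIG. 6 can be approximated by visiting candidates ring by ring outward from the center. The exact within-ring order of the figure is not reproduced here, only the inner-before-outer property:

```python
def spiral_offsets(search_range):
    """Candidate offsets around the search center, ordered so that the
    center (ring 0) comes first, then ring 1, ring 2, ... out to
    +/-search_range, mirroring the '0' to '24' sequence for a range of 2."""
    offsets = [(dx, dy)
               for dy in range(-search_range, search_range + 1)
               for dx in range(-search_range, search_range + 1)]
    # sort by Chebyshev ring, then deterministically within each ring
    return sorted(offsets, key=lambda o: (max(abs(o[0]), abs(o[1])), o))
```

For a range of 2 this yields 25 candidates, the center first and the eight ring-1 neighbours immediately after it.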
  • the motion search is also performed at fractional pixel precision.
  • this motion search of fractional pixel precision will be described below.
  • “A” to “I” are represented as the integer pixels
  • “1” to “8” are represented as 1 ⁇ 2 pixels
  • “a” to “h” are represented as 1 ⁇ 4 pixels.
  • the alphabets and numerals as denoted in FIG. 7 have no relevance with the alphabets and numerals as denoted in other figures.
  • as described above, in the JVT coding method the motion vector is searched for employing a plurality of reference frames while sequentially changing the motion compensation block among the plural block sizes, thereby increasing the processing load of the motion prediction compensation process. Therefore, in this invention, the processing load of the motion prediction compensation process is reduced without deteriorating the image quality as much as possible.
  • in FIG. 8, reference numeral 1 designates an image coding device conforming to the JVT coding method as a whole, in which data of a moving picture (hereinafter referred to as dynamic image data) D1 formed of a plurality of frames is input from the outside, and the data amount of the dynamic image data D1 is reduced efficiently.
  • the image coding device 1 first stores the dynamic image data D1 supplied from the outside in an image rearrangement buffer 2, in which the frames of the dynamic image data D1 are rearranged into coding order in accordance with the Group of Pictures (GOP) structure and then read out sequentially as frame data D2.
  • the image coding device 1 sends the frame data D2 to an intra prediction portion 4 when the image type of the frame data D2 read from the image rearrangement buffer 2 is an I (Intra) frame.
  • the intra prediction portion 4 divides the frame based on the frame data D2 into pixel blocks serving as basis units (hereinafter referred to as macro blocks), performs a process of sequentially predicting the pixel values of the macro blocks of prediction object employing past macro blocks, and sends the pixel values obtained sequentially as a result of this process to an adder 3 as the prediction data D3.
  • the image coding device 1 sends the frame data D2 to a motion prediction compensation processing portion 5 when the image type of the frame data D2 read from the image rearrangement buffer 2 is other than an I (Intra) frame.
  • the motion prediction compensation processing portion 5 divides the frame based on the frame data D2 into macro blocks, and performs a motion prediction compensation process for each of the motion compensation blocks of the plural block sizes described above and shown in FIG. 1, employing the plural reference frames (FIG. 2) in the future or the past of the current frame.
  • specifically, the motion prediction compensation processing portion 5 detects, as the motion vector, the motion amount between the motion compensation block of prediction object and the motion compensation block within a predetermined reference frame most approximate to its pixel values, performs a motion prediction process for calculating the difference value between the detected motion vector and the predictor (e.g., median value) based on the motion compensation blocks adjacent to the block of prediction object, in the way described above and shown in FIG. 5, and sequentially sends the difference value as the motion vector information MVD to a reversible coding processing portion 9.
  • in parallel, the motion prediction compensation processing portion 5 performs a motion compensation process for shifting the pixels of the motion compensation block in accordance with the motion vector, and sequentially sends the pixel values of the shifted pixels to the adder 3 as the prediction data D4.
  • the adder 3 calculates the prediction residual by subtracting, from the corresponding frame data D2, the prediction data D3 or D4 sequentially supplied from the intra prediction portion 4 or the motion prediction compensation processing portion 5 as the result of prediction by the method according to the image type, and sends this prediction residual as the difference data D5 to an orthogonal transformation portion 6.
  • the orthogonal transformation portion 6 performs an orthogonal transformation process such as a discrete cosine transform on the difference data D5 to generate the orthogonal transformation coefficient data D6, which is then sent to a quantization portion 7.
  • the quantization portion 7 performs a quantization process on the orthogonal transformation coefficient data D6 in accordance with a quantization parameter D8 given through a predetermined feedback control process of a rate control portion 8 to generate the quantization data D7, which is then sent to the reversible coding processing portion 9 and an inverse quantization portion 11.
  • the reversible coding processing portion 9 performs a reversible coding process such as variable length coding or arithmetic coding on the quantization data D7 and the corresponding motion vector information MVD to generate the coded data D9, which is stored in a storage buffer 10.
  • this coded data D9 is read from the storage buffer 10 at a predetermined timing and sent to the rate control portion 8 or to the outside.
  • in this manner, the image coding device 1 generates the coded data D9 with a smaller data amount than the dynamic image data D1 by performing the various processes using the high correlation between spatially or temporally adjacent pixels.
  • meanwhile, the inverse quantization portion 11 performs an inverse quantization process on the quantization data D7 given from the quantization portion 7 to restore the orthogonal transformation coefficient data D11 corresponding to the orthogonal transformation coefficient data D6, which is then sent to an inverse orthogonal transformation portion 12.
  • the inverse orthogonal transformation portion 12 performs an inverse orthogonal transformation process on the orthogonal transformation coefficient data D11 to restore the difference data D12 corresponding to the difference data D5, which is then sent to an adder 13.
  • the adder 13 sequentially adds the corresponding prediction data D3 or D4 to the difference data D12 to locally regenerate the frame data D13 corresponding to the frame data D2, which is then sent to a deblock filter 14.
  • when there is distortion between adjacent blocks divided by the motion prediction compensation processing portion 5, the deblock filter 14 filters the distorted part in the frame based on the frame data D13 so as to smooth it, and stores the result in a frame memory 15, as needed, as the data of a reference frame (hereinafter referred to as reference frame data) D14 forming part of the multi reference frame described above and shown in FIG. 2.
  • This reference frame data D 14 is read by the motion prediction compensation portion 5 and employed to sequentially predict the pixel value of each motion compensation block in the frame data D 2 .
  • the pixel values of motion compensation block belonging to the frame of processing object are predicted employing the plurality of reference frames that are temporally different.
  • even when the reference frame immediately before the frame of processing object is unsuitable for prediction, for example due to a camera flash, the prediction can be made employing other frames as the reference frames. Consequently, the coding efficiency is improved by avoiding a wasteful amount of operation in the prediction.
  • the image coding device 1 makes the prediction employing the reference frame smoothed by removing beforehand the distortion, avoiding the lower prediction precision due to distortion. Consequently, the coding efficiency is improved by avoiding a wasteful amount of operation in the prediction.
  • the processing contents of the motion prediction process are functionally classified into a motion predicting portion 20 for generating the motion vector information MVD of the motion compensation block of prediction object (hereinafter referred to as a prediction block), a macro block address detecting portion 21 for detecting the address of macro block containing the prediction block, and a search range deciding portion 22 for deciding a range for searching for the motion vector of the prediction block (hereinafter referred to as a motion search range), as shown in FIG. 9.
  • the motion predicting portion 20 stores the frame data D 2 given from the image rearrangement buffer 2 (FIG. 8) and one or more reference frame data D 14 stored in the frame memory 15 at this time in an internal memory, and divides the frame based on the frame data D 2 and the reference frame based on the reference frame data D 14 into macro blocks.
  • the motion predicting portion 20 sequentially generates the motion vector information MVD for each motion compensation block of each block size shown in FIG. 1, employing the plurality of reference frames (FIG. 2), for each macro block within the frame.
  • in generating the motion vector information MVD for a certain prediction block having, for example, the same block size as the macro block, the motion predicting portion 20 first generates the predictor of the motion vector for the prediction block based on the motion vectors already detected for the surrounding motion compensation blocks of the block size corresponding to the prediction block, in the way described above and shown in FIG. 5. Accordingly, this predictor reflects the tendency of motion in the dynamic image around the prediction block.
  • the motion predicting portion 20 decides the predictor directly as the search center when the position of the image block on the reference frame corresponding to the position of the prediction block (i.e., the position where the motion vector is (0, 0), hereinafter referred to as a motion zero position) is included within the motion search range.
  • otherwise, the motion predicting portion 20 performs a process for shifting the predictor (hereinafter referred to as a predictor correcting process) until the motion zero position is included within the motion search range, and decides the corrected predictor (hereinafter referred to as a corrected predictor) as the search center.
  • the motion predicting portion 20 then detects, as the optimal motion vector, the motion vector for the motion compensation block most approximate to the pixel values of the prediction block within the motion search range around the search center decided in this way, such as the motion vector for the motion compensation block with the least sum of absolute differences from the pixel values of the prediction block, and generates the difference value between the detected motion vector and the predictor of the prediction block as the motion vector information MVD, which is then sent to the reversible coding processing portion 9 (FIG. 8).
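The matching criterion mentioned above (least sum of absolute differences) can be sketched as an exhaustive search. All names are illustrative and the blocks are plain nested lists:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_motion_vector(ref, block, center, search_range):
    """Displacement (dx, dy) relative to `center` whose reference-frame
    block has the least SAD against the prediction block."""
    n = len(block)
    cx, cy = center
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = cx + dx, cy + dy
            # skip candidates falling outside the reference frame
            if 0 <= x and x + n <= len(ref[0]) and 0 <= y and y + n <= len(ref):
                cand = [row[x:x + n] for row in ref[y:y + n]]
                cost = sad(block, cand)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv
```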
  • in this manner, the motion predicting portion 20 searches for the motion vector in a state where the motion zero position is always included within the search range. Thereby, even when the actual motion around the prediction block is dispersed in multiple directions, the motion vector for the prediction block can be detected with high precision.
  • meanwhile, the macro block address detecting portion 21 constantly monitors the frame data D2 stored in the internal memory of the motion predicting portion 20, detects the horizontal (x direction) and vertical (y direction) address (MB_x, MB_y) of the macro block containing the prediction block currently being processed in the motion predicting portion 20, with reference to the upper-left corner of the frame based on the frame data D2, and sends the detected result as the address detection data D21 to the search range deciding portion 22, as shown in FIG. 10.
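With 16×16 macro blocks, the address (MB_x, MB_y) of the macro block containing a given pixel follows by integer division from the upper-left corner; a minimal sketch (names and the 16-pixel block size are illustrative assumptions):

```python
def macro_block_address(pixel_x, pixel_y, mb_size=16):
    """Horizontal and vertical address (MB_x, MB_y) of the macro block
    containing the pixel, counted from the upper-left corner of the frame."""
    return pixel_x // mb_size, pixel_y // mb_size
```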
  • the search range deciding portion 22 decides either a preset first motion search range or a second motion search range narrower than the first motion search range, based on the address (MB_x, MB_y) represented in the address detection data D21, employing a function f of the address (MB_x, MB_y) as given in the following formula.
  • specifically, the search range deciding portion 22 has the function f of Formula (9) defined in accordance with the following formulas
  • the search range deciding portion 22 decides the first search range SR1 when the remainders (%2) of dividing the MB address in the x direction (MB_x) and the MB address in the y direction (MB_y) by 2 are both "0" or both "1", as a result of calculation from Formula (10) and Formula (11), and otherwise decides the second search range SR2; the decision is sent as the search range decision data D22 to the motion predicting portion 20.
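The parity rule of Formulas (9) to (11) reduces to comparing MB_x % 2 with MB_y % 2, which yields a checkerboard of wide and narrow searches over the frame. A sketch under that reading (the concrete formulas are not reproduced in this text, so the function below is an assumption):

```python
def decide_search_range(mb_x, mb_y):
    """First search range SR1 when MB_x % 2 and MB_y % 2 agree (both 0
    or both 1), second search range SR2 otherwise: a checkerboard of
    wide and narrow motion searches over the macro blocks."""
    return "SR1" if mb_x % 2 == mb_y % 2 else "SR2"
```

The first row of macro blocks then alternates SR1, SR2, SR1, ..., and the next row starts with the opposite phase.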
  • thereby, the search range deciding portion 22 causes the motion predicting portion 20 to search in the second search range SR2, narrower than the first search range SR1, for every other macro block within the frame in the horizontal direction (x direction) and the vertical direction (y direction), as shown in FIG. 11.
  • in this case, the motion predicting portion 20 performs the motion prediction process in the second search range SR2, narrower than the first search range SR1, for every other macro block, reducing the processing load of sequentially searching for the motion vector of each prediction block of each block size within the macro block.
  • moreover, since the motion predicting portion 20 searches the motion search range SR1 or SR2 with the predictor reflecting the tendency of motion around the prediction block as the search center, the motion vector is detected even when the motion of the pixels in the prediction block is large. Thereby, the motion vector can be searched for in the second motion search range SR2, narrower than the first motion search range SR1, without lowering the prediction precision.
  • further, when deciding the second search range SR2 according to Formula (10) or (11), the search range deciding portion 22 generates control data D23 for stopping the predictor correcting process of the motion predicting portion 20 and sends it, together with the search range decision data D22, to the motion predicting portion 20.
  • in this case, since the predictor correcting process is not performed when the predictor is located at the search center of the second motion search range SR2, the motion predicting portion 20 has a reduced processing load in searching for the motion vector, even if the motion zero position is not included in the motion search range SR2.
  • however, since the motion predicting portion 20 searches for the motion vector in the motion search range SR1, wider than the motion search range SR2, for the macro blocks around the macro block containing each motion compensation block searched in the motion search range SR2, the omission of the predictor correcting process in the motion search range SR2 is covered by the first search range SR1 around it, as shown in FIG. 11.
  • in this manner, the motion prediction compensation processing portion 5 reduces the processing load of the motion prediction process, without causing lower prediction precision, by adaptively switching between the motion search ranges SR1 and SR2 and between the presence and absence of the predictor correcting process according to the address of the macro block of processing object.
  • according to the above configuration, this motion prediction compensation processing portion 5 detects the address (MB_x, MB_y) of the macro block containing the prediction block, and decides the motion search range SR1 or SR2 according to the detected address (MB_x, MB_y).
  • the motion prediction compensation processing portion 5 then generates the predictor based on the motion vectors already detected for the surrounding motion compensation blocks of the block size corresponding to the prediction block, and searches for the motion vector in the motion search range SR1 or SR2 with this predictor as the center.
  • in this case, the motion prediction compensation processing portion 5 performs the motion prediction process in the second search range SR2, narrower than the first motion search range SR1, greatly reducing the processing load of sequentially searching for the motion vector for each motion compensation block of each block size within the macro block.
  • moreover, since the motion prediction compensation processing portion 5 searches for the motion vector with the predictor reflecting the tendency of motion around the prediction block as the search center, the motion vector is detected even when the motion of the pixels of the prediction block is large. Hence a loss of prediction precision is prevented even if the motion vector is searched for in the second motion search range SR2, narrower than the first motion search range SR1.
  • further, the motion prediction compensation processing portion 5 decides the second search range SR2 only when the horizontal and vertical addresses (MB_x, MB_y) are not both even or both odd, whereby the search range SR1 or SR2 is decided with a smaller amount of computation.
  • the prediction is covered by the first search range SR 1 wider than the second search range SR 2 even if the second search range SR 2 is very narrow.
  • the difference between the first search range SR 1 and the second search range SR 2 can be relatively large, so that the processing load in searching the second search range SR 2 is further reduced while suppressing the lower prediction precision.
  • the motion prediction compensation processing portion 5 stops the predictor correcting process of the motion predicting portion 20 when the second search range SR 2 is decided.
  • the motion prediction compensation processing portion 5 does not perform the predictor correcting process, further reducing the processing load in searching the second search range SR 2 .
  • the motion search range SR 1 or SR 2 is decided in accordance with the address (MB_x, MB_y) of the macro block containing the prediction block, and the motion vector is searched for in the motion search range SR 1 or SR 2 around the predictor based on the surrounding motion compensation blocks corresponding to the prediction block, whereby the processing load is reduced by performing the prediction process in the second search range SR 2 narrower than the first motion search range SR 1 .
  • the motion vector is searched for with the predictor, which reflects the tendency of motion around the prediction block, as the search center, whereby a loss of prediction precision is prevented and the processing efficiency of the overall coding process is enhanced.
  • as the address detecting means for detecting the address of the object pixel block, the above embodiment detects the address (MB_x, MB_y) of the macro block containing the prediction block currently being processed in the motion predicting portion 20 , with reference to the upper left corner of the frame, based on the frame data D 2 stored in the internal memory of the motion predicting portion 20 .
  • the invention is not limited thereto; the address (MB_x, MB_y) of the macro block containing the prediction block may be detected with reference to a point other than the upper left corner.
  • the address of a pixel block other than the macro block may also be detected.
  • as the search range deciding means for deciding the first search range or the second search range in accordance with the address of the object pixel block, the search range deciding portion 22 , which alternately decides on the first search range SR 1 or the second search range SR 2 at every other macro block in the horizontal and vertical directions within the frame, is applied in this invention.
  • the invention is not limited thereto, but the first search range SR 1 may be decided for the macro blocks around the frame, and the second search range SR 2 may be decided for other macro blocks, as shown in FIG. 12.
  • the processing load is smaller than in the above embodiment, because the first search range SR 1 or the second search range SR 2 is switched less frequently.
  • as the search range deciding means for deciding the first search range or the second search range in accordance with the address of the object pixel block, the search range deciding portion 22 , which alternately decides on the first search range SR 1 or the second search range SR 2 at every other macro block in the horizontal and vertical directions within the frame, is applied in this invention.
  • the invention is not limited thereto, but the first search range SR 1 may be decided for the macro block corresponding to the initial position of the slice, and the second search range SR 2 may be decided for other macro blocks.
  • the processing load is smaller than in the above embodiment, because the first search range SR 1 or the second search range SR 2 is switched less frequently.
  • as the search range deciding means for deciding the first search range or the second search range in accordance with the address of the object pixel block, the search range deciding portion 22 , which alternately decides on the first search range SR 1 or the second search range SR 2 at every other macro block in the horizontal and vertical directions within the frame, is applied in this invention.
  • the invention is not limited thereto; a search range wider than the first search range SR 1 may be decided when the address is “0” (i.e., for the macro block on which the motion prediction compensation process is performed first among the macro blocks within the object frame). In this case, the prediction precision is further enhanced.
  • in the above embodiment, the motion predicting portion 20 , which performs a process for correcting the predictor when the position (motion zero position) of the pixel block on the reference frame corresponding to the position of the prediction block is not included in the first search range SR 1 with the predictor located at the center of the search range, is applied in the invention.
  • this invention is not limited thereto, but the correcting process may not be performed.
  • the rate at which the position of the predictor is shifted may be changed according to the degree of difference between the first search range SR 1 and the second search range SR 2 . More specifically, as the difference between the first search range SR 1 and the second search range SR 2 becomes greater, the search center is brought closer to the position (motion zero position) of the pixel block on the reference frame corresponding to the position of the prediction block, thereby reducing the processing load in searching the second search range SR 2 while properly suppressing a loss of prediction precision.
  • as the motion prediction compensation device for predicting and compensating the motion amount of each of the plurality of pixel blocks into which the object frame in consecutive frames is divided, employing the reference frame that is a frame in the future or past relative to the object frame, the motion prediction compensation processing portion 5 , which performs the prediction compensation process conforming to the JVT coding method, is applied in this invention.
  • this invention is not limited thereto, but may be applied to the motion prediction compensation device for performing the motion prediction compensation process conforming to various other coding methods, such as MPEG2.
  • the address of the object pixel block serving as the pixel block of prediction object among the plurality of pixel blocks is detected, the first search range or the second search range narrower than the first search range is decided on as the search range of the motion vector for the object pixel block on the reference frame according to the detected address, and the motion vector is searched for in the decided search range around the predictor of the motion vector generated based on the surrounding pixel blocks adjacent to the object pixel block, whereby the processing load is reduced because the motion vector is searched for in the second search range narrower than the first search range, and a loss of prediction precision is prevented because the search is made around the predictor, which reflects the tendency of motion around the prediction block, so that the processing efficiency of the overall coding process is enhanced.
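The address-based decision between the two search ranges can be sketched in a few lines. This is a minimal Python sketch under one plausible reading of the parity rule (SR 2 when the horizontal and vertical macro block addresses have different parities); the names `SR1` and `SR2` come from the text, while the concrete values 16 and 4 are illustrative assumptions, not values given in the specification.

```python
SR1 = 16  # wide search range, in integer pixels (illustrative assumption)
SR2 = 4   # narrow search range (illustrative assumption)

def decide_search_range(mb_x: int, mb_y: int) -> int:
    """Return the +/- search range for the macro block at (mb_x, mb_y).

    Addresses that are both even or both odd get the wide range SR1;
    the others get the narrow range SR2, producing a checkerboard of
    wide and narrow ranges over the frame.
    """
    if (mb_x % 2) == (mb_y % 2):
        return SR1
    return SR2
```

With this layout every macro block searched with the narrow range has horizontal and vertical neighbours searched with the wide range, which is how large motion missed in SR 2 can still be caught nearby in SR 1.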

Abstract

An object of the invention is to provide a prediction compensation device and its method in which the processing efficiency of the overall coding process is enhanced. The address of an object pixel block serving as a pixel block of prediction object among a plurality of pixel blocks is detected, a first search range or a second search range narrower than the first search range is decided as the search range of a motion vector for the object pixel block on the reference frame according to the detected address, and the motion vector is searched for from the decided search range around the predictor of the motion vector based on the surrounding pixel blocks adjacent to the object pixel block.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a motion prediction compensating device and a motion prediction compensating method, and is suitably applied to a coding device with a predetermined coding method. [0001]
  • DESCRIPTION OF THE RELATED ART
  • An image coding device performs a coding process conforming to a predetermined image coding method on dynamic image data supplied from the outside to generate coded data with a reduced data amount. [0002]
  • Such image coding methods include the image coding method called MPEG, standardized by the ISO/IEC Moving Picture Experts Group (MPEG) for coding general-purpose images, and the image coding methods called H.26x (H.261, H.263, . . . ), standardized by the ITU for coding television conference images, both of which are well known. [0003]
  • In recent years, with the spread of portable terminal units such as portable telephone sets, there is a need for an image coding method that implements higher coding efficiency. At present, to cope with this need, an image coding method called Joint Model of Enhanced-Compression Video Coding (JVT) (hereinafter referred to as a JVT coding method) is standardized by the MPEG and ITU group. [0004]
  • In this JVT coding method, a process for predicting a motion amount of an object frame in the consecutive frames (dynamic images) employing a reference frame in the future or past for the object frame (hereinafter referred to as a motion prediction compensation process) is performed by searching the motion vector in the pixel block of prediction object (hereinafter referred to as a motion compensation block) at each of four block sizes, including 16×16 pixels, 8×16 pixels, 16×8 pixels and 8×8 pixels, as shown in FIG. 1. Thereby, the motion vectors are provided independently for the motion compensation blocks of four block sizes. [0005]
  • In addition, for the motion compensation block having a block size of 8×8 pixels, the motion vector is searched at each of four block sizes, including 8×8 pixels, 4×8 pixels, 8×4 pixels and 4×4 pixels, whereby the motion vectors are provided independently for the motion compensation blocks of four block sizes. [0006]
  • In this case, in the JVT coding method, after the frame is divided into macro blocks, for each of the macro blocks the motion vector is searched for while sequentially changing the motion compensation block over the plural block sizes, whereby a maximum of 16 motion vectors are provided (e.g., refer to Non-Patent Document 1: DRAFT ISO/IEC 14496-10: 2002(E)). [0007]
  • Also, in the JVT coding method, the prediction compensation process is performed for the motion compensation block within the object frame OF employing a plurality of reference frames SF2 and SF5, or for the motion compensation blocks at different positions within the object frame OF employing separate reference frames SF2 and SF4, in which the plurality of reference frames SF1 to SF5 are called a multi reference frame, as shown in FIG. 2 (e.g., refer to Non-Patent Document 1: DRAFT ISO/IEC 14496-10: 2002(E)). [0008]
  • By the way, in the coding device conforming to the JVT coding method, for all the macro blocks, the motion vector is searched employing the plurality of reference frames while sequentially changing the motion compensation block at plural block sizes, whereby the processing load in the motion prediction compensation process is increased over the already standardized coding method. [0009]
  • Also, a coding device conforming to an already standardized coding method typically has its greatest processing load in the motion prediction compensation process of the coding process. [0010]
  • Accordingly, if the processing load of this motion prediction compensation process is reduced without lowering the prediction precision, the processing efficiency of the overall coding process can be enhanced. [0011]
  • SUMMARY OF THE INVENTION
  • In view of the foregoing, an object of this invention is to provide a prediction compensating device and its method in which the processing efficiency over the overall coding process is enhanced. [0012]
  • The foregoing object and other objects of the invention have been achieved by the provision of a motion prediction compensating device comprising address detecting means for detecting the address of an object pixel block serving as a pixel block of prediction object among plural pixel blocks, search range deciding means for deciding a first search range or a second search range narrower than the first search range as the search range of a motion vector for the object pixel block in the reference frame in accordance with the address detected by the address detecting means, and motion vector searching means for searching for the motion vector from the search range decided by the search range deciding means around a predictor of the motion vector generated based on the surrounding pixel blocks adjacent to the object pixel block. [0013]
  • Consequently, in this motion prediction compensating device, the processing load is reduced because the motion vector is searched for in the second search range narrower than the first search range, and a loss of prediction precision is prevented because the motion vector is searched for around the predictor, which reflects the tendency of motion surrounding the prediction block. [0014]
  • Also, this invention provides a motion prediction compensating method comprising a first step of detecting an address of an object pixel block serving as a pixel block of prediction object among the plural pixel blocks, a second step of deciding a first search range or a second search range narrower than the first search range as the search range of a motion vector for the object pixel block in the reference frame in accordance with the address detected at the first step, and a third step of searching the motion vector from the search range decided at the second step around a predictor of the motion vector generated based on the surrounding pixel blocks adjacent to the object pixel block. [0015]
  • Consequently, in this motion prediction compensating method, the processing load is reduced because the motion vector is searched for in the second search range narrower than the first search range, and a loss of prediction precision is prevented because the motion vector is searched for around the predictor, which reflects the tendency of motion surrounding the prediction block. [0016]
  • The nature, principle and utility of the invention will become more apparent from the following detailed description when read in conjunction with the accompanying drawings in which like parts are designated by like reference numerals or characters.[0017]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the accompanying drawings: [0018]
  • FIG. 1 is a diagrammatic view for explaining a motion compensation block; [0019]
  • FIG. 2 is a diagrammatic view for explaining a multi reference frame; [0020]
  • FIG. 3 is a diagrammatic view for explaining the motion compensation at ¼ pixel precision for a brightness signal component; [0021]
  • FIG. 4 is a diagrammatic view for explaining the motion compensation at ¼ pixel precision for a color difference signal component; [0022]
  • FIG. 5 is a diagrammatic view for explaining the generation of a predictor for the motion vector; [0023]
  • FIG. 6 is a diagrammatic view for explaining a search sequence of the motion vector; [0024]
  • FIG. 7 is a diagrammatic view for explaining a motion search at fractional pixel precision; [0025]
  • FIG. 8 is a block diagram showing the configuration of an image coding device; [0026]
  • FIG. 9 is a block diagram showing the configuration of a motion prediction compensation processing portion; [0027]
  • FIG. 10 is a diagrammatic view for explaining the detection of macro block address; [0028]
  • FIG. 11 is a diagrammatic view for explaining the decision of motion search range; and [0029]
  • FIG. 12 is a diagrammatic view for explaining the decision of motion search range according to another embodiment of the invention.[0030]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • Preferred embodiments of this invention will be described with reference to the accompanying drawings: [0031]
  • (1) Outline of JVT Coding Method [0032]
  • In the JVT coding method, after dividing the frame into macro blocks, a motion prediction compensation process is performed while sequentially changing the motion compensation block having block sizes less than or equal to the size of macro block, as shown in FIG. 1 and described above. [0033]
  • This motion compensation prediction process is largely classified into a process for detecting the motion vector for the motion compensation block (hereinafter referred to as a motion prediction process) and a process for shifting the pixels of the motion compensation block in accordance with the detected motion vector (hereinafter referred to as a motion compensation process). [0034]
  • In the JVT coding method, this motion compensation process is performed at ¼ pixel precision or ⅛ pixel precision. [0035]
  • Referring to FIG. 3, the motion compensation process at ¼ pixel precision for a brightness component signal will be described below. In FIG. 3, the hatched blocks represent integer pixels, and the unhatched blocks represent ½ pixels. [0036]
  • In the JVT coding method, the FIR filter is employed to generate the pixel value of ½ pixel, six tap filter coefficients being defined according to the following Formula. [0037]
  • {1, −5, 20, 20, −5, 1}  (1)
  • In FIG. 3, the pixel values b and h of ½ pixel are calculated by adding up employing the filter coefficients of Formula (1) according to the following Formula, [0038]
  • b=(E−5F+20G+20H−5I+J)
  • h=(A−5C+20G+20M−5R+T)  (2)
  • and making the arithmetical operation according to the following Formula [0039]
  • b=Clip1((b+16)>>5)
  • h=Clip1((h+16)>>5)  (3)
  • This [0040] Clip 1 indicates the clip in (0, 255).
  • Also, the pixel value j is calculated by generating the pixel values aa, bb, cc, dd, ee, ff, gg and hh in the same way as b and h, adding up according to the following formula [0041]
  • j=cc−5dd+20h+20m−5ee+ff
  • Or [0042]
  • j=aa−5bb+20b+20s−5gg+hh  (4)
  • and performing the clip process in the following way. [0043]
  • j=Clip1((j+512)>>10)  (5)
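The six-tap interpolation above can be transcribed directly. The following is a minimal Python sketch of the one-dimensional case, i.e. positions such as b and h in Formulas (1) through (3); the doubly-filtered position j with its 10-bit shift is omitted, and the function names are illustrative.

```python
# 6-tap FIR filter of Formula (1), applied per Formulas (2)-(3):
# weighted sum, rounding offset, 5-bit shift, clip to [0, 255].
TAPS = (1, -5, 20, 20, -5, 1)

def clip1(v: int) -> int:
    """Clip to the 8-bit sample range (0, 255)."""
    return max(0, min(255, v))

def half_pel(samples) -> int:
    """samples: the six integer pixels straddling the half-pel position,
    e.g. (E, F, G, H, I, J) for position b in FIG. 3."""
    acc = sum(t * s for t, s in zip(TAPS, samples))  # Formula (2)
    return clip1((acc + 16) >> 5)                    # Formula (3)
```

On a flat area all six taps see the same value; since the taps sum to 32, the rounding and 5-bit shift return that value unchanged.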
  • Also, the pixel values a, c, d, n, f, i, k and q are calculated through linear interpolation of the pixel values of integer pixel and the pixel values of ½ pixel according to the following Formula. [0044]
  • a=(G+b+1)>>1
  • c=(H+b+1)>>1
  • d=(G+h+1)>>1
  • n=(M+h+1)>>1
  • f=(c+j+1)>>1
  • i=(h+j+1)>>1
  • k=(j+m+1)>>1
  • q=(j+s+1)>>1  (6)
  • Also, the pixel values e, g, p and r are calculated through linear interpolation employing the pixel values of ½ pixel according to the following Formula [0045]
  • e=(b+h+1)>>1
  • g=(b+m+1)>>1
  • p=(h+s+1)>>1
  • r=(m+s+1)>>1  (7)
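Each quarter-pel value in Formulas (6) and (7) is simply a rounding average of two already-available samples, so a single helper covers all of them (the function name is illustrative):

```python
def quarter_pel(p: int, q: int) -> int:
    """Rounding average of two samples, e.g. a = (G + b + 1) >> 1 in
    Formula (6); p and q may be integer-pel or half-pel values."""
    return (p + q + 1) >> 1
```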
  • On the other hand, the motion prediction process for the color difference signal is performed through linear interpolation according to the following Formula (8), as shown in FIG. 4, for both the motion prediction process at ¼ pixel precision and the motion prediction process at ⅛ pixel precision. [0046]
  • v=((s−dx)(s−dy)A+dx(s−dy)B+(s−dx)dyC+dxdyD+s²/2)/s²  (8)
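Formula (8) is bilinear interpolation between the four surrounding chroma samples A to D, with s the sub-pel denominator (4 for ¼-pel precision, 8 for ⅛-pel precision) and (dx, dy) the fractional offsets. A direct Python transcription, assuming integer arithmetic with the usual rounding term and the B term taken from the standard bilinear form:

```python
def chroma_interp(A: int, B: int, C: int, D: int,
                  dx: int, dy: int, s: int) -> int:
    """v = ((s-dx)(s-dy)A + dx(s-dy)B + (s-dx)dyC + dxdyD + s*s/2) / (s*s)."""
    num = ((s - dx) * (s - dy) * A + dx * (s - dy) * B
           + (s - dx) * dy * C + dx * dy * D + s * s // 2)
    return num // (s * s)
```

At (dx, dy) = (0, 0) the result is exactly A; at (s, 0) it is exactly B, and so on for the other corners.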
  • The motion prediction process as defined in the JVT coding method will be described below. [0047]
  • The motion vector information generated through this motion prediction process is represented as the difference information between the predictor based on the motion vector already detected for the surrounding motion compensation blocks adjacent to the motion compensation block of prediction object and the motion vector for the motion compensation block of prediction object. [0048]
  • Referring to FIG. 5, a method for generating the predictor of motion vector for the motion compensation block E will be described below. The alphabets as denoted in FIG. 5 have no relevance with the alphabets as denoted in FIG. 3. [0049]
  • In FIG. 5, when the motion compensation block C does not exist within the picture or slice, or when the information is not “available” in respect of the decoding sequence within the macro block, the predictor is generated based on the value of motion vector regarding the motion compensation block D and the reference frame index. [0050]
  • When none of the motion compensation blocks B, C and D exists within the picture or slice, the predictor is generated based on the value of the motion vector for the motion compensation block A and the reference frame index. [0051]
  • In cases other than the above, when an adjacent motion compensation block is intra coded or does not exist within the picture or slice, the value of its motion vector is treated as zero, and that block is regarded as referring to a reference frame different from that of the motion compensation block E. [0052]
  • Also, when any one of the motion compensation blocks A, B and C refers to the same reference frame as the motion compensation block E, the motion vector of the motion compensation block A, B or C that refers to the same reference frame as the motion compensation block E is generated as the predictor. [0053]
  • Moreover, in other than the above cases, the median of the motion vectors for the motion compensation blocks A, B and C is generated as the predictor. [0054]
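The common case of the rules above, where the neighbours A, B and C are available and the median applies, reduces to a component-wise median of their motion vectors. A small sketch of just that case, with the single-neighbour and unavailability special cases of the preceding paragraphs deliberately omitted:

```python
def median_predictor(mv_a, mv_b, mv_c):
    """Component-wise median of the motion vectors of the neighbouring
    motion compensation blocks A, B and C (each an (x, y) pair)."""
    med = lambda a, b, c: sorted((a, b, c))[1]
    return (med(mv_a[0], mv_b[0], mv_c[0]),
            med(mv_a[1], mv_b[1], mv_c[1]))
```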
  • Referring to FIG. 6, a motion search at integer pixel precision will be firstly described below for the searching method for the motion vector as defined in the JVT coding method. In FIG. 6, “0” to “24” are represented as the integer pixels, and indicate a searching sequence in the motion search. [0055]
  • In FIG. 6, pixel 0 indicates the center of the motion vector search. In the JVT coding method, the predictor generated by the method shown in FIG. 5 is located at the center of the search, and the motion vector is searched for spirally around this center over the range of ±Search_Range. [0056]
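The centre-outward visiting order of FIG. 6 can be generated ring by ring. The sketch below reproduces the centre-first property, though not necessarily the exact within-ring ordering of the figure:

```python
def spiral_order(search_range: int):
    """Yield (dx, dy) offsets from the search centre: (0, 0) first,
    then the square rings of radius 1, 2, ..., search_range."""
    yield (0, 0)
    for r in range(1, search_range + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if max(abs(dx), abs(dy)) == r:  # points on the new ring only
                    yield (dx, dy)
```

For a range of ±N this enumerates each of the (2N+1)² candidate positions exactly once, so early termination heuristics can stop as soon as a good match is found near the centre.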
  • In this case, in the JVT coding method, the motion search is also performed at fractional pixel precision. Referring to FIG. 7, this motion search of fractional pixel precision will be described below. In FIG. 7, “A” to “I” are represented as the integer pixels, “1” to “8” are represented as ½ pixels, and “a” to “h” are represented as ¼ pixels. The alphabets and numerals as denoted in FIG. 7 have no relevance with the alphabets and numerals as denoted in other figures. [0057]
  • In FIG. 7, when the pixel E is detected as the optimal motion vector through the motion search at integer pixel precision, the ½ pixels 1 to 8 around the pixel E are searched in numerical order. When, as a result of this search, the pixel 7 is detected as giving the optimal motion vector, for example, the ¼ pixels a to h around the pixel 7 are searched in numerical order. [0058]
  • In this way, in the motion prediction compensation process of the JVT coding method, for all the macro blocks, the motion vector is searched for employing a plurality of reference frames while sequentially changing the motion compensation block over the respective block sizes, which increases the processing load of the motion prediction compensation process. Therefore, in this invention, the processing load of the motion prediction compensation process is reduced while suppressing degradation of the image quality as much as possible. [0059]
  • One embodiment of the invention will be described below with reference to the drawings. [0060]
  • (2) Configuration of Image Coding Device [0061]
  • In FIG. 8, 1 designates an image coding device conforming to the JVT coding method as a whole, in which data of moving picture (hereinafter referred to as dynamic image data) D1 formed by a plurality of frames is input from the outside, and the data amount of dynamic image data D1 is reduced efficiently. [0062]
  • More specifically, the image coding device 1 once stores the dynamic image data D1 supplied from the outside in an image rearrangement buffer 2, in which the frames of the dynamic image data D1 are rearranged in the coding order in accordance with a Group of Pictures (GOP) structure, and then read sequentially as the frame data D2 from the image rearrangement buffer 2. [0063]
  • Herein, the image coding device 1 sends the frame data D2 to an intra prediction portion 4, when the image type of frame data D2 read from the image rearrangement buffer 2 is I (Intra) frame. [0064]
  • The intra prediction portion 4 divides the frame based on the frame data D2 into pixel blocks serving as basic units (hereinafter referred to as macro blocks), performs a process for sequentially predicting the pixel values of the macro blocks of prediction object employing the macro blocks in the past, and sends the pixel values obtained sequentially as a result of the process as the prediction data D3 to an adder 3. [0065]
  • On the contrary, the image coding device 1 sends the frame data D2 to a motion prediction compensation processing portion 5, when the image type of frame data D2 read from the image rearrangement buffer 2 is other than I (Intra) frame. [0066]
  • The motion prediction compensation processing portion 5 divides the frame based on the frame data D2 into macro blocks, and performs a motion prediction compensation process for each of the motion compensation blocks of plural block sizes as described above and shown in FIG. 1, employing plural reference frames (FIG. 2) in the future or the past for the current frame. [0067]
  • In this case, the motion prediction compensation processing portion 5 detects, as the motion vector, a motion amount between the motion compensation block of prediction object and the motion compensation block within a predetermined reference frame most approximate to the pixel value of that motion compensation block, and performs a motion prediction process for calculating a difference value between the detected motion vector and the predictor (e.g., median value) based on the motion compensation blocks adjacent to the motion compensation block in the above way as shown in FIG. 5, whereby the difference value is sequentially sent as the motion vector information MVD to a reversible coding portion 9. [0068]
  • Also, the motion prediction compensation processing portion 5 performs a motion compensation process for shifting the pixels of the motion compensation block in accordance with the motion vector, and sequentially sends the pixel values of the shifted pixels as the predicted data D4 to the adder 3. [0069]
  • The adder 3 calculates the prediction residual by subtracting, from the corresponding frame data D2, the prediction data D3 or D4 sequentially supplied from the intra prediction portion 4 or the motion prediction compensation processing portion 5 as a result of prediction by the method according to the image type, and sends this prediction residual as the difference data D5 to an orthogonal transformation portion 6. [0070]
  • The orthogonal transformation portion 6 performs an orthogonal transformation process such as a discrete cosine transform on the difference data D5 to generate the orthogonal transformation coefficient data D6, which is then sent to a quantization portion 7. [0071]
  • The quantization portion 7 performs a quantization process on the orthogonal transformation coefficient data D6 in accordance with a quantization parameter D8 given through a predetermined feedback control process of a rate control portion 8 to generate the quantization data D7, which is then sent to a reversible coding processing portion 9 and an inverse quantization portion 11. [0072]
  • The reversible coding processing portion 9 performs a reversible coding process such as variable length coding or arithmetic coding on the quantization data D7 and the corresponding motion vector information MVD to generate the coded data D9, which is stored in a storage buffer 10. This coded data D9 is read from the storage buffer 10 at a predetermined timing, and sent to the rate control portion 8 or the outside. [0073]
  • In this way, the image coding device 1 generates the coded data D9 with smaller data amount than the dynamic image data D1 by performing various processes using the high correlation between spatially or temporally adjacent pixels. [0074]
  • On the other hand, the inverse quantization portion 11 performs an inverse quantization process for the quantization data D7 given from the quantization portion 7 to restore the orthogonal transformation coefficient data D11 corresponding to the orthogonal transformation coefficient data D6, which is then sent to an inverse orthogonal transformation portion 12. [0075]
  • The inverse orthogonal transformation portion 12 performs an inverse orthogonal transformation process for the orthogonal transformation coefficient data D11, and restores the difference data D12 corresponding to the difference data D5, which is then sent to an adder 13. [0076]
  • The adder 13 sequentially adds the prediction data D3 or D4 to the corresponding difference data D12 to locally regenerate the frame data D13 corresponding to the frame data D2, which is then sent to a deblock filter 14. [0077]
  • When there is distortion between adjacent blocks divided by the motion prediction compensation processing portion 5, the deblock filter 14 filters the distorted part in the frame based on the frame data D13 to be smoother, and stores the frame data, as needed, in a frame memory 15 as the data of a reference frame (hereinafter referred to as reference frame data) D14 that is part of the multi reference frame as described above and shown in FIG. 2. This reference frame data D14 is read by the motion prediction compensation processing portion 5 and employed to sequentially predict the pixel value of each motion compensation block in the frame data D2. [0078]
  • In this way, in this image coding device 1, the pixel values of the motion compensation block belonging to the frame of processing object are predicted employing the plurality of reference frames that are temporally different. Thereby, even when the reference frame immediately before the frame of processing object is unpredictable due to a camera flash, prediction can be made employing other frames as the reference frames. Consequently, the coding efficiency is improved by avoiding a wasteful amount of operation in the prediction. [0079]
  • In addition, the image coding device 1 makes the prediction employing the reference frame smoothed by removing the distortion beforehand, avoiding a loss of prediction precision due to the distortion. Consequently, the coding efficiency is improved by avoiding a wasteful amount of operation in the prediction. [0080]
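The forward path and the local decoding path of FIG. 8 mirror each other so that the encoder's reference frames match what a decoder will later reconstruct. A toy sketch of that loop, with a bare scalar quantizer standing in for the orthogonal transformation and quantization portions (the step size `QP` and all names are illustrative assumptions):

```python
QP = 8  # illustrative quantization step, not a value from the text

def encode_block(block, prediction):
    """Quantized residual, as produced along the path of portions 3, 6 and 7."""
    return [(p - q) // QP for p, q in zip(block, prediction)]

def reconstruct_block(qres, prediction):
    """Inverse path of portions 11, 12 and 13: inverse-quantize and add
    back the prediction. A decoder runs the same computation, so both
    sides keep identical reference frames and prediction drift cannot
    accumulate."""
    return [r * QP + p for r, p in zip(qres, prediction)]
```

The reconstruction is lossy (the residual is coarsened by `QP`), but it is the same lossy result on both sides, which is the point of regenerating the reference locally instead of reusing the original frame data D2.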
  • (3) Motion Prediction Process of Motion Prediction Compensation Processing Portion 5 [0081]
  • The processing contents of the motion prediction process in the motion prediction compensation processing portion 5 will be described below. [0082]
  • The processing contents of the motion prediction process are functionally classified into a motion predicting portion 20 for generating the motion vector information MVD of the motion compensation block of prediction object (hereinafter referred to as a prediction block), a macro block address detecting portion 21 for detecting the address of the macro block containing the prediction block, and a search range deciding portion 22 for deciding a range for searching for the motion vector of the prediction block (hereinafter referred to as a motion search range), as shown in FIG. 9. In the following, the motion predicting portion 20, the macro block address detecting portion 21 and the search range deciding portion 22 will be described. [0083]
  • (3-1) Processing of Motion Predicting Portion 20 [0084]
  • The motion predicting portion 20 stores the frame data D2 given from the image rearrangement buffer 2 (FIG. 8) and one or more reference frame data D14 stored in the frame memory 15 at this time in an internal memory, and divides the frame based on the frame data D2 and the reference frame based on the reference frame data D14 into macro blocks. [0085]
  • In this state, the [0086] motion predicting portion 20 sequentially generates the motion vector information MVD for each motion compensation block of each block sizes as shown in FIG. 1, employing the plurality of reference frames (FIG. 2), for each macro block within the frame.
  • In practice, in generating the motion vector information MVD for a certain prediction block having, for example, the same block size as the macro block, the motion predicting portion 20 first generates the predictor of the motion vector for the prediction block, based on the motion vectors already detected for the surrounding motion compensation blocks of the block size corresponding to the prediction block, in the manner shown in FIG. 5. Accordingly, this predictor reflects the tendency of motion in the image blocks around the prediction block. [0087]
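In JVT (H.264)-style coding, such a predictor is commonly the component-wise median of the motion vectors of the neighbouring blocks of FIG. 5. A minimal Python sketch of this predictor generation, assuming all three neighbours are available (real codecs add availability and block-size rules not modelled here):

```python
def median_predictor(mv_left, mv_top, mv_topright):
    """Component-wise median of the neighbouring motion vectors.

    Simplified sketch of JVT-style predictor generation; each motion
    vector is an (x, y) tuple in pixel units.
    """
    def median3(a, b, c):
        return sorted((a, b, c))[1]

    px = median3(mv_left[0], mv_top[0], mv_topright[0])
    py = median3(mv_left[1], mv_top[1], mv_topright[1])
    return (px, py)
```

Because the median discards a single outlier, the predictor tracks the dominant motion tendency of the surrounding blocks.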
  • The motion predicting portion 20 places this predictor at the central position of a predetermined motion search range (this position is hereinafter referred to as a search center), and decides the predictor directly as the search center when the position of the image block on the reference frame corresponding to the position of the prediction block (i.e., the position where the motion vector is (0, 0), hereinafter referred to as the motion zero position) is included within the motion search range. [0088]
  • On the contrary, if the motion zero position is not included within the motion search range when the predictor is located at the search center, the motion predicting portion 20 performs a process of shifting the predictor (hereinafter referred to as a predictor correcting process) until the motion zero position is included within the motion search range, and decides the corrected predictor (hereinafter referred to as a corrected predictor) as the search center. [0089]
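The predictor correcting process, shifting the search center until the motion zero position falls inside the window, can be sketched as a clamp of each component. The patent does not spell out the shifting schedule, so the clamp below is an illustrative assumption:

```python
def correct_search_center(predictor, half_range):
    """Shift the search center so that the motion-zero position (0, 0)
    falls inside the square window [c - half_range, c + half_range]
    in each component.

    Illustrative sketch: if |c| > half_range in a component, (0, 0)
    lies outside the window, so the center is pulled just close
    enough to include it.
    """
    cx, cy = predictor
    cx = max(-half_range, min(half_range, cx))
    cy = max(-half_range, min(half_range, cy))
    return (cx, cy)
```

A predictor already containing (0, 0) in its window is returned unchanged, matching the "decides the predictor directly as the search center" case above.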
  • Then, in the motion search range with the search center decided in this way, the motion predicting portion 20 detects as the optimal motion vector the motion vector of the motion compensation block whose pixel values are closest to those of the prediction block, such as the motion compensation block with the least sum of absolute differences from the pixel values of the prediction block. It then generates the difference between the detected motion vector and the predictor of the prediction block as the motion vector information MVD, which is sent to the reversible coding processing portion 9 (FIG. 8). [0090]
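The search itself can be sketched as an exhaustive sum-of-absolute-differences (SAD) scan of the window around the decided search center. A minimal NumPy sketch under those assumptions (function and argument names are illustrative; real encoders refine to sub-pel precision and reuse partial sums):

```python
import numpy as np

def sad_search(cur_block, ref_frame, block_pos, center, half_range):
    """Exhaustive SAD search in a square window around `center`.

    cur_block : 2-D array of the prediction block's pixel values
    ref_frame : 2-D array of the reference frame
    block_pos : (x, y) of the block's top-left corner in the frame
    Returns the best motion vector (dx, dy) and its SAD.
    """
    bx, by = block_pos
    bh, bw = cur_block.shape
    best_mv, best_sad = None, None
    for dy in range(center[1] - half_range, center[1] + half_range + 1):
        for dx in range(center[0] - half_range, center[0] + half_range + 1):
            x, y = bx + dx, by + dy
            # Skip candidates that fall outside the reference frame.
            if x < 0 or y < 0 or y + bh > ref_frame.shape[0] or x + bw > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + bh, x:x + bw]
            sad = int(np.abs(cur_block.astype(int) - cand.astype(int)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```

The motion vector information MVD would then be the component-wise difference between the returned motion vector and the predictor.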
  • In this way, the motion predicting portion 20 always searches for the motion vector in a state where the motion zero position is included within the search range. Thereby, even when the actual motion around the prediction block is dispersed in multiple directions, the motion vector for the prediction block can be detected with high precision. [0091]
  • (3-2) Processing of Macro Block Address Detecting Portion 21 [0092]
  • The macro block address detecting portion 21 constantly monitors the frame data D2 stored in the internal memory of the motion predicting portion 20, and detects the horizontal (x-direction) and vertical (y-direction) address (MB_x, MB_y) of the macro block containing the prediction block currently being processed in the motion predicting portion 20, with reference to the upper-left corner of the frame based on the frame data D2. The detected result is sent as address detection data D21 to the search range deciding portion 22, as shown in FIG. 10. [0093]
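Assuming 16×16 macro blocks as in JVT coding, the address detection reduces to integer division of the pixel coordinates measured from the upper-left corner of the frame; a minimal sketch:

```python
MB_SIZE = 16  # macro block dimension in pixels, as in JVT/H.264

def macro_block_address(px, py):
    """(MB_x, MB_y) of the macro block containing pixel (px, py),
    counted from the upper-left corner of the frame."""
    return (px // MB_SIZE, py // MB_SIZE)
```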
  • (3-3) Processing of Search Range Deciding Portion 22 [0094]
  • The search range deciding portion 22 decides either a preset first motion search range or a second motion search range narrower than the first, based on the address (MB_x, MB_y) represented in the address detection data D21, employing a function f of the address (MB_x, MB_y) as given in the following. [0095]
  • f(MB_x, MB_y)  (9)
  • More specifically, the search range deciding portion 22 evaluates the function f defined in Formula (9) according to the following formula [0096]
  • MB_x %2==0 and MB_y %2==0  (10)
  • or the following formula [0097]
  • MB_x %2==1 and MB_y %2==1  (11)
  • where %2 denotes the residual of division by 2. [0098]
  • The search range deciding portion 22 decides the first search range SR1 when the residuals of dividing the MB address in the x direction (MB_x) and the MB address in the y direction (MB_y) by 2 are both “0” (Formula (10)) or both “1” (Formula (11)), and otherwise decides the second search range SR2. The decision is sent as search range decision data D22 to the motion predicting portion 20. [0099]
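The decision of Formulas (9)–(11) amounts to comparing the parities of the two addresses, which yields the checkerboard pattern of FIG. 11; a minimal sketch (the range values passed in are illustrative):

```python
def decide_search_range(mb_x, mb_y, sr1, sr2):
    """f(MB_x, MB_y): the wider first search range SR1 when both
    addresses are even or both are odd -- Formulas (10) and (11) --
    and the narrower SR2 otherwise (checkerboard of FIG. 11)."""
    if mb_x % 2 == mb_y % 2:
        return sr1
    return sr2
```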
  • Accordingly, the search range deciding portion 22 allows the motion predicting portion 20 to search every other macro block within the frame, in the horizontal direction (x direction) and the vertical direction (y direction), in the second search range SR2 narrower than the first search range SR1, as shown in FIG. 11. [0100]
  • As a result, the motion predicting portion 20 performs the motion prediction process for every other macro block in the second search range SR2 narrower than the first search range SR1, reducing the processing load of sequentially searching for the motion vector of each prediction block of each block size within the macro block. [0101]
  • Since the motion predicting portion 20 searches the motion search range SR1 or SR2 with, as the search center, the predictor reflecting the tendency of motion around the prediction block, the motion vector is detected even when the motion of the pixels in the prediction block is large. Thereby, the motion vector can be searched for in the second motion search range SR2 narrower than the first motion search range SR1 without lowering the prediction precision. [0102]
  • In this embodiment, when deciding the second search range SR2 as a result of Formulas (10) and (11), the search range deciding portion 22 generates control data D23 to stop the predictor correcting process of the motion predicting portion 20 and sends it together with the search range decision data D22 to the motion predicting portion 20. [0103]
  • As a result, when the predictor is located at the search center of the second motion search range SR2, the predictor correcting process is not performed even if the motion zero position is not included in the motion search range SR2, so the motion predicting portion 20 has a reduced processing load in searching for the motion vector. [0104]
  • In this case, for the macro blocks around the macro block containing each motion compensation block searched in the motion search range SR2, the motion predicting portion 20 searches for the motion vector in the motion search range SR1 wider than the motion search range SR2. Thereby, even though the predictor correcting process is not performed in the motion search range SR2, it is covered by the first search range SR1 surrounding the second motion search range SR2, as shown in FIG. 11. [0105]
  • In this way, in predicting the pixel values of macro blocks, the motion prediction compensation processing portion 5 reduces the processing load of the motion prediction process, without lowering the prediction precision, by adaptively switching between the motion search ranges SR1 and SR2, and switching the predictor correcting process on or off, according to the address of the macro block being processed. [0106]
  • (4) Operation and Effect [0107]
  • In the above constitution, the motion prediction compensation processing portion 5 detects the address (MB_x, MB_y) of the macro block containing the prediction block, and decides the motion search range SR1 or SR2 according to the detected address (MB_x, MB_y). [0108]
  • In this state, the motion prediction compensation processing portion 5 generates the predictor based on the motion vectors already detected for the surrounding motion compensation blocks of the block size corresponding to the prediction block, and searches for the motion vector in the motion search range SR1 or SR2 with this predictor as the center. [0109]
  • Accordingly, this motion prediction compensation processing portion 5 performs the motion prediction process in the second search range SR2 narrower than the first motion search range SR1, greatly reducing the processing load of sequentially searching for the motion vector of each motion compensation block of each block size within the macro block. [0110]
  • In addition, since the motion prediction compensation processing portion 5 searches for the motion vector with, as the search center, the predictor reflecting the tendency of motion around the prediction block, the motion vector is detected even when the motion of the pixels of the prediction block is large. Hence the prediction precision is not lowered even though the motion vector is searched for in the second motion search range SR2 narrower than the first motion search range SR1. [0111]
  • Also, the motion prediction compensation processing portion 5 decides the second search range SR2 only when the horizontal and vertical addresses (MB_x, MB_y) are not both even or both odd, whereby the search range SR1 or SR2 is decided with a small amount of computation. [0112]
  • In this case, since the first search range SR1 and the second search range SR2 narrower than the first are decided alternately, the prediction is covered by the wider first search range SR1 even if the second search range SR2 is made very narrow. Thereby, the difference between the first search range SR1 and the second search range SR2 can be made relatively large, so that the processing load of searching the second search range SR2 is further reduced while the loss of prediction precision is suppressed. [0113]
  • Moreover, the motion prediction compensation processing portion 5 stops the predictor correcting process of the motion predicting portion 20 when the second search range SR2 is decided. [0114]
  • Accordingly, the motion prediction compensation processing portion 5 does not perform the predictor correcting process in that case, further reducing the processing load of searching the second search range SR2. [0115]
  • With the above constitution, the motion search range SR1 or SR2 is decided in accordance with the address (MB_x, MB_y) of the macro block containing the prediction block, and the motion vector is searched for in the motion search range SR1 or SR2 around the predictor based on the surrounding motion compensation blocks corresponding to the prediction block, whereby the processing load is reduced by performing the prediction process in the second search range SR2 narrower than the first motion search range SR1. Moreover, since the motion vector is searched for with the predictor reflecting the tendency of motion around the prediction block as the search center, the loss of prediction precision is prevented and the processing efficiency of the overall coding process is enhanced. [0116]
  • (5) Other Embodiments [0117]
  • In the above embodiment, as the address detecting means for detecting the address of the object pixel block, the address (MB_x, MB_y) of the macro block containing the prediction block currently being processed in the motion predicting portion 20 is detected with reference to the upper-left corner of the frame based on the frame data D2 stored in the internal memory of the motion predicting portion 20. However, the invention is not limited thereto; the address (MB_x, MB_y) of the macro block containing the prediction block may be detected with reference to a position other than the upper-left corner, and a pixel block other than the macro block may be used. [0118]
  • Also, in the above embodiment, as the search range deciding means for deciding the first search range or the second search range in accordance with the address of the object pixel block, the search range deciding portion 22 that decides the first search range SR1 or the second search range SR2 alternately for every other macro block in the horizontal and vertical directions within the frame is applied in this invention. However, the invention is not limited thereto; the first search range SR1 may be decided for the macro blocks around the border of the frame, and the second search range SR2 for the other macro blocks, as shown in FIG. 12. In this case, the processing load is smaller than in the above embodiment, because the first search range SR1 and the second search range SR2 are switched less frequently. [0119]
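The FIG. 12 variant can be sketched as a border test on the macro block address; the frame dimensions in macro blocks are assumed to be known, and the range values are illustrative:

```python
def decide_search_range_border(mb_x, mb_y, mb_cols, mb_rows, sr1, sr2):
    """Variant of FIG. 12: the wider range SR1 only for macro blocks
    on the frame border, the narrower SR2 elsewhere. Switches between
    the two ranges less frequently than the checkerboard scheme."""
    on_border = (mb_x == 0 or mb_y == 0 or
                 mb_x == mb_cols - 1 or mb_y == mb_rows - 1)
    return sr1 if on_border else sr2
```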
  • Further, in the above embodiment, as the search range deciding means for deciding the first search range or the second search range in accordance with the address of the object pixel block, the search range deciding portion 22 that decides the first search range SR1 or the second search range SR2 alternately for every other macro block in the horizontal and vertical directions within the frame is applied in this invention. However, the invention is not limited thereto; the first search range SR1 may be decided for the macro block corresponding to the initial position of a slice, and the second search range SR2 for the other macro blocks. In this case, the processing load is smaller than in the above embodiment, because the first search range SR1 and the second search range SR2 are switched less frequently. [0120]
  • Further, in the above embodiment, as the search range deciding means for deciding the first search range or the second search range in accordance with the address of the object pixel block, the search range deciding portion 22 that decides the first search range SR1 or the second search range SR2 alternately for every other macro block in the horizontal and vertical directions within the frame is applied in this invention. However, the invention is not limited thereto; a search range wider than the first search range SR1 may be decided when the address is “0” (i.e., for the macro block on which the motion prediction compensation process is performed first among the macro blocks within the object frame). In this case, the prediction precision is further enhanced. [0121]
  • Further, in the above embodiment, as the motion vector searching means for searching for the motion vector in the search range decided by the search range deciding means around the predictor of the motion vector generated based on the surrounding pixel blocks adjacent to the object pixel block, the motion predicting portion 20 that corrects the predictor when the predictor is located at the center of the search range and the position (motion zero position) of the pixel block on the reference frame corresponding to the position of the prediction block is not included in the first search range SR1 is applied in the invention. However, this invention is not limited thereto; the correcting process may be omitted. [0122]
  • Also, in this invention, the rate at which the position of the predictor is shifted may be changed according to the difference between the first search range SR1 and the second search range SR2. More specifically, the greater the difference between the first search range SR1 and the second search range SR2, the more closely the position (motion zero position) of the pixel block on the reference frame corresponding to the position of the prediction block is approached, thereby reducing the processing load of searching the second search range SR2 while properly suppressing the loss of prediction precision. [0123]
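One way to realize this variant is to move the search center from the predictor toward the motion zero position (0, 0) by a fraction that grows with the gap between SR1 and SR2. The specific weighting below is purely illustrative; the patent fixes only the qualitative relationship:

```python
def shifted_center(predictor, sr1, sr2):
    """Shift the search center toward the motion zero position (0, 0)
    by a fraction that grows with the gap between SR1 and SR2.
    The linear weighting is an illustrative assumption."""
    w = min(1.0, (sr1 - sr2) / sr1)  # larger gap -> closer to (0, 0)
    return (int(predictor[0] * (1 - w)), int(predictor[1] * (1 - w)))
```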
  • Moreover, in the above embodiment, as the motion prediction compensation device for predicting and compensating the motion amount of each of the plurality of pixel blocks into which the object frame in consecutive frames is divided, employing a reference frame that is a frame in the future or past relative to the object frame, the motion prediction compensation processing portion 5 performing the prediction compensation process conforming to the JVT coding method is applied in this invention. However, this invention is not limited thereto, and may be applied to a motion prediction compensation device performing a motion prediction compensation process conforming to various other coding methods, such as MPEG-2. [0124]
  • As described above, with the invention, in predicting and compensating the motion amount of each of the plurality of pixel blocks into which the object frame in consecutive frames is divided, employing a reference frame that is a frame in the future or past relative to the object frame, the address of the object pixel block serving as the pixel block of the prediction object among the plurality of pixel blocks is detected, the first search range or the second search range narrower than the first search range is decided as the search range of the motion vector for the object pixel block on the reference frame according to the detected address, and the motion vector is searched for in the decided search range around the predictor of the motion vector generated based on the surrounding pixel blocks adjacent to the object pixel block. The processing load is thereby reduced, because the motion vector is searched for in the second search range narrower than the first search range, and the loss of prediction precision is prevented, because the search is made around the predictor reflecting the tendency of motion around the prediction block, so that the processing efficiency of the overall coding process is enhanced. [0125]
  • While the invention has been described in connection with its preferred embodiments, it will be obvious to those skilled in the art that various changes and modifications may be made; it is intended, therefore, to cover in the appended claims all such changes and modifications as fall within the true spirit and scope of the invention. [0126]

Claims (6)

What is claimed is:
1. A motion prediction compensating device for predicting and compensating a motion amount of each of plural pixel blocks into which an object frame in consecutive frames is divided, employing a reference frame consisting of said frame in the future or past for said object frame, said motion prediction compensating device comprising:
address detecting means for detecting an address of an object pixel block serving as a pixel block of prediction object among said plural pixel blocks;
search range deciding means for deciding a first search range or a second search range narrower than said first search range as the search range of a motion vector for said object pixel block in said reference frame in accordance with said address detected by said address detecting means; and
motion vector searching means for searching for said motion vector from said search range decided by said search range deciding means around a predictor of said motion vector generated based on said surrounding pixel blocks adjacent to said object pixel block.
2. The motion prediction compensating device according to claim 1, wherein
said search range deciding means decides said second search range only when said horizontal and vertical addresses are not both even or both odd numbers.
3. The motion prediction compensating device according to claim 2, wherein
said motion vector detecting means corrects said predictor if the position of said pixel block on said reference frame corresponding to the position of said object pixel block is not included within said search range when said predictor of said motion vector is located at the center of said search range, and said search range deciding means stops the correction of said predictor by said motion vector detecting means when said second search range is decided.
4. A motion prediction compensating method for predicting and compensating a motion amount of each of plural pixel blocks into which an object frame in consecutive frames is divided, employing a reference frame consisting of said frame in the future or past for said object frame, said motion prediction compensating method comprising:
a first step of detecting an address of an object pixel block serving as a pixel block of prediction object among said plural pixel blocks;
a second step of deciding a first search range or a second search range narrower than said first search range as the search range of a motion vector for said object pixel block in said reference frame in accordance with said address detected at said first step; and
a third step of searching for said motion vector from said search range decided at said second step around a predictor of said motion vector generated based on said surrounding pixel blocks adjacent to said object pixel block.
5. The motion prediction compensating method according to claim 4, wherein
said second step includes deciding said second search range only when said horizontal and vertical addresses are not both even or both odd numbers.
6. The motion prediction compensating method according to claim 5, wherein
said third step includes correcting said predictor if the position of said pixel block on said reference frame corresponding to the position of said object pixel block is not included within said search range when said predictor of said motion vector is located at the center of said search range, and said second step includes stopping the correction of said predictor at said third step when said second search range is decided.
US10/832,085 2003-04-28 2004-04-26 Motion prediction compensating device and its method Expired - Fee Related US7746930B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003-123976 2003-04-28
JP2003123976A JP3968712B2 (en) 2003-04-28 2003-04-28 Motion prediction compensation apparatus and method

Publications (2)

Publication Number Publication Date
US20040264572A1 true US20040264572A1 (en) 2004-12-30
US7746930B2 US7746930B2 (en) 2010-06-29

Family

ID=33501713

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/832,085 Expired - Fee Related US7746930B2 (en) 2003-04-28 2004-04-26 Motion prediction compensating device and its method

Country Status (2)

Country Link
US (1) US7746930B2 (en)
JP (1) JP3968712B2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060126730A1 (en) * 2004-12-13 2006-06-15 Matsushita Electric Industrial Co., Ltd. Intra prediction apparatus and intra prediction method
US20060198445A1 (en) * 2005-03-01 2006-09-07 Microsoft Corporation Prediction-based directional fractional pixel motion estimation for video coding
US20060215758A1 (en) * 2005-03-23 2006-09-28 Kabushiki Kaisha Toshiba Video encoder and portable radio terminal device using the video encoder
US20080107175A1 (en) * 2006-11-07 2008-05-08 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding based on intra prediction
US20080298695A1 (en) * 2007-05-30 2008-12-04 Kabushiki Kaisha Toshiba Motion vector detecting apparatus, motion vector detecting method and interpolation frame creating apparatus
US7885329B2 (en) * 2004-06-25 2011-02-08 Panasonic Corporation Motion vector detecting apparatus and method for detecting motion vector
US20130064301A1 (en) * 2010-05-20 2013-03-14 Thomson Licensing Methods and apparatus for adaptive motion vector candidate ordering for video encoding and decoding
US20140294310A1 (en) * 2012-01-18 2014-10-02 Panasonic Corporation Image decoding device, image encoding device, image decoding method, and image encoding method
RU2598302C2 (en) * 2011-12-16 2016-09-20 ДжейВиСи КЕНВУД КОРПОРЕЙШН Moving image encoding device, moving image encoding method and moving image encoding program, as well as moving image decoding device, moving image decoding method and moving image decoding program
US20170272775A1 (en) * 2015-11-19 2017-09-21 Hua Zhong University Of Science Technology Optimization of interframe prediction algorithms based on heterogeneous computing
RU2666275C1 (en) * 2017-11-13 2018-09-06 ДжейВиСи КЕНВУД КОРПОРЕЙШН Device and method of coding moving image, long term data-storage computer recorded medium which image coding program is recorded on
US10755445B2 (en) * 2009-04-24 2020-08-25 Sony Corporation Image processing device and method

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6765964B1 (en) 2000-12-06 2004-07-20 Realnetworks, Inc. System and method for intracoding video data
JP4708819B2 (en) 2005-03-14 2011-06-22 キヤノン株式会社 Image processing apparatus, method, computer program, and storage medium
JP4534935B2 (en) * 2005-10-04 2010-09-01 株式会社日立製作所 Transcoder, recording apparatus, and transcoding method
KR101369224B1 (en) * 2007-03-28 2014-03-05 삼성전자주식회사 Method and apparatus for Video encoding and decoding using motion compensation filtering
US8917769B2 (en) 2009-07-03 2014-12-23 Intel Corporation Methods and systems to estimate motion based on reconstructed reference frames at a video decoder
US8462852B2 (en) * 2009-10-20 2013-06-11 Intel Corporation Methods and apparatus for adaptively choosing a search range for motion estimation
US9654792B2 (en) 2009-07-03 2017-05-16 Intel Corporation Methods and systems for motion vector derivation at a video decoder
US9509995B2 (en) 2010-12-21 2016-11-29 Intel Corporation System and method for enhanced DMVD processing
US9769494B2 (en) * 2014-08-01 2017-09-19 Ati Technologies Ulc Adaptive search window positioning for video encoding

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5357287A (en) * 1993-05-20 1994-10-18 Korean Broadcasting System Method of and apparatus for motion estimation of video data
US5467086A (en) * 1992-06-18 1995-11-14 Samsung Electronics Co., Ltd. Apparatus and method of coding/decoding video data
US6212237B1 (en) * 1997-06-17 2001-04-03 Nippon Telegraph And Telephone Corporation Motion vector search methods, motion vector search apparatus, and storage media storing a motion vector search program
US6339656B1 (en) * 1997-12-25 2002-01-15 Matsushita Electric Industrial Co., Ltd. Moving picture encoding decoding processing apparatus
US6348954B1 (en) * 1998-03-03 2002-02-19 Kdd Corporation Optimum motion vector determinator and video coding apparatus using the same
US7099392B2 (en) * 2001-05-07 2006-08-29 Lg Electronics Inc. Motion vector searching method using plural search areas

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3237815B2 (en) 1996-03-01 2001-12-10 日本電信電話株式会社 Motion vector search method and apparatus
JP3299671B2 (en) 1996-03-18 2002-07-08 シャープ株式会社 Image motion detection device
JP4035903B2 (en) 1998-10-22 2008-01-23 ソニー株式会社 Motion vector detection method and apparatus


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7885329B2 (en) * 2004-06-25 2011-02-08 Panasonic Corporation Motion vector detecting apparatus and method for detecting motion vector
US20060126730A1 (en) * 2004-12-13 2006-06-15 Matsushita Electric Industrial Co., Ltd. Intra prediction apparatus and intra prediction method
US8472083B2 (en) * 2004-12-13 2013-06-25 Panasonic Corporation Intra prediction apparatus and intra prediction method using first blocks obtained by dividing a picture and second blocks obtained by dividing the first block
US7580456B2 (en) * 2005-03-01 2009-08-25 Microsoft Corporation Prediction-based directional fractional pixel motion estimation for video coding
US20060198445A1 (en) * 2005-03-01 2006-09-07 Microsoft Corporation Prediction-based directional fractional pixel motion estimation for video coding
US7675974B2 (en) * 2005-03-23 2010-03-09 Kabushiki Kaisha Toshiba Video encoder and portable radio terminal device using the video encoder
US20060215758A1 (en) * 2005-03-23 2006-09-28 Kabushiki Kaisha Toshiba Video encoder and portable radio terminal device using the video encoder
US20080107175A1 (en) * 2006-11-07 2008-05-08 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding based on intra prediction
US20080298695A1 (en) * 2007-05-30 2008-12-04 Kabushiki Kaisha Toshiba Motion vector detecting apparatus, motion vector detecting method and interpolation frame creating apparatus
US10755445B2 (en) * 2009-04-24 2020-08-25 Sony Corporation Image processing device and method
US9510009B2 (en) * 2010-05-20 2016-11-29 Thomson Licensing Methods and apparatus for adaptive motion vector candidate ordering for video encoding and decoding
US20130064301A1 (en) * 2010-05-20 2013-03-14 Thomson Licensing Methods and apparatus for adaptive motion vector candidate ordering for video encoding and decoding
US10021412B2 (en) 2010-05-20 2018-07-10 Thomson Licensing Methods and apparatus for adaptive motion vector candidate ordering for video encoding and decoding
RU2636688C1 (en) * 2011-12-16 2017-11-27 ДжейВиСи КЕНВУД КОРПОРЕЙШН Moving image decoding device and method, long-term computer-readable recording medium, to which image decoding program is recorded
RU2598302C2 (en) * 2011-12-16 2016-09-20 ДжейВиСи КЕНВУД КОРПОРЕЙШН Moving image encoding device, moving image encoding method and moving image encoding program, as well as moving image decoding device, moving image decoding method and moving image decoding program
RU2678507C1 (en) * 2011-12-16 2019-01-29 ДжейВиСи КЕНВУД КОРПОРЕЙШН Device and method of moving image coding, a long-term recording medium where a coding program written
RU2679552C1 (en) * 2011-12-16 2019-02-11 ДжейВиСи КЕНВУД КОРПОРЕЙШН Device and method for decoding moving image, long-term computer-readable record medium, on which decoding program is recorded
US9153037B2 (en) * 2012-01-18 2015-10-06 Panasonic Intellectual Property Management Co., Ltd. Image decoding device, image encoding device, image decoding method, and image encoding method
US20140294310A1 (en) * 2012-01-18 2014-10-02 Panasonic Corporation Image decoding device, image encoding device, image decoding method, and image encoding method
US20170272775A1 (en) * 2015-11-19 2017-09-21 Hua Zhong University Of Science Technology Optimization of interframe prediction algorithms based on heterogeneous computing
RU2666275C1 (en) * 2017-11-13 2018-09-06 ДжейВиСи КЕНВУД КОРПОРЕЙШН Device and method of coding moving image, long term data-storage computer recorded medium which image coding program is recorded on

Also Published As

Publication number Publication date
JP3968712B2 (en) 2007-08-29
JP2004328633A (en) 2004-11-18
US7746930B2 (en) 2010-06-29

Similar Documents

Publication Publication Date Title
US7746930B2 (en) Motion prediction compensating device and its method
US6690729B2 (en) Motion vector search apparatus and method
US8625916B2 (en) Method and apparatus for image encoding and image decoding
US7580456B2 (en) Prediction-based directional fractional pixel motion estimation for video coding
US6859494B2 (en) Methods and apparatus for sub-pixel motion estimation
AU684901B2 (en) Method and circuit for estimating motion between pictures composed of interlaced fields, and device for coding digital signals comprising such a circuit
US20040114688A1 (en) Device for and method of estimating motion in video encoder
US7630566B2 (en) Method and apparatus for improved estimation and compensation in digital video compression and decompression
US20070047651A1 (en) Video prediction apparatus and method for multi-format codec and video encoding/decoding apparatus and method using the video prediction apparatus and method
US10652570B2 (en) Moving image encoding device, moving image encoding method, and recording medium for recording moving image encoding program
KR20040069210A (en) Sharpness enhancement in post-processing of digital video signals using coding information and local spatial features
WO2005109897A1 (en) Adaptive-weighted motion estimation method and frame rate converting apparatus employing the method
US20220046233A1 (en) Image decoding device, image decoding method, and program
US20080031335A1 (en) Motion Detection Device
KR100942475B1 (en) Image information encoding method and encoder
US6909750B2 (en) Detection and proper interpolation of interlaced moving areas for MPEG decoding with embedded resizing
EP0632657B1 (en) Method of prediction of a video image
JP2755851B2 (en) Moving picture coding apparatus and moving picture coding method
JP2002118851A (en) Motion vector conversion method and apparatus
KR100926752B1 (en) Fine Motion Estimation Method and Apparatus for Video Coding
US6925125B2 (en) Enhanced aperture problem solving method using displaced center quadtree adaptive partitioning
KR100240620B1 (en) Method and apparatus to form symmetric search windows for bidirectional half pel motion estimation
JP2883585B2 (en) Moving picture coding apparatus and moving picture coding method
KR100617177B1 (en) Motion estimation method
JP2758378B2 (en) Moving picture decoding apparatus and moving picture decoding method

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SATO, KAZUSHI;TSUCHIYA, TOSHIHARU;YAGASAKI, YOICHI;REEL/FRAME:015752/0480

Effective date: 20040819


FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.)

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220629