USRE40080E1 - Video coding method and decoding method and devices thereof - Google Patents


Publication number
USRE40080E1
Authority
US
United States
Legal status
Expired - Lifetime
Application number
US09/691,858
Inventor
Thiow Keng Tan
Current Assignee
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Application filed by Matsushita Electric Industrial Co Ltd
Priority to US09/691,858
Application granted
Publication of USRE40080E1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 — Motion estimation or motion compensation
    • H04N19/577 — Motion compensation with bidirectional frame interpolation, i.e. using B-pictures


Abstract

A new predictive coding is used to increase the temporal frame rate and coding efficiency without introducing excessive delay. Currently the motion vector for the blocks in the bi-directionally predicted frame is derived from the motion vector of the corresponding block in the forward predicted frame using a linear motion model. This however is not effective when the motion in the image sequence is not linear. The efficiency of this method can be further improved if a non-linear motion model is used. In this model a delta motion vector is added to or subtracted from the derived forward and backward motion vector, respectively. The encoder performs an additional search to determine if there is a need for the delta motion vector. The presence of this delta motion vector in the transmitted bitstream is signalled to the decoder which then takes the appropriate action to make use of the delta motion vector to derive the effective forward and backward motion vectors for the bi-directionally predicted block.

Description

This application is a divisional reissue application of U.S. Pat. No. 5,825,421, issued Oct. 20, 1998. This application also has a related reissue application Ser. No. 09/691,857, filed on Oct. 18, 2000.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention can be used in low bit rate video coding for tele-communicative applications. It improves the temporal frame rate of the decoder output as well as the overall picture quality.
2. Related art of the Invention
In a typical hybrid transform coding algorithm such as the ITU-T Recommendation H.261 [1] and MPEG [2], motion compensation is used to reduce the amount of temporal redundancy in the sequence. In the H.261 coding scheme, the frames are coded using only forward prediction, hereafter referred to as P-frames. In the MPEG coding scheme, some frames are coded using bi-directional prediction, hereafter referred to as B-frames. B-frames improve the efficiency of the coding scheme. Here [1] is ITU-T Recommendation H.261 (formerly CCITT Recommendation H.261), Video codec for audiovisual services at p×64 kbit/s, Geneva, 1990, and [2] is ISO/IEC 11172-2:1993, Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s—Part 2: Video.
However, bi-directional prediction introduces delay in the encoding and decoding, making it unsuitable for applications in communicative services where delay is an important parameter. FIGS. 1a and 1b illustrate the frame prediction of H.261 and MPEG as described above. A new method of coding involving the coding of the P and B frames as a single unit, hereafter referred to as the PB-frame, was introduced. In this scheme the blocks in the PB-frames are coded and transmitted together, thus reducing the total delay. In fact the total delay should be no more than that of a scheme using forward prediction only at half the frame rate.
FIG. 2a shows the PB-frame prediction. A PB-frame consists of two pictures being coded as one unit. The name PB comes from the names of the picture types in MPEG, where there are P-frames and B-frames. Thus a PB-frame consists of one P-frame which is predicted from the last decoded P-frame, and one B-frame which is predicted both from the last decoded P-frame and the P-frame currently being decoded. This last picture is called a B-frame because parts of it may be bi-directionally predicted from the past and future P-frames.
FIG. 2b shows the forward and bi-directional prediction for a block in the B-frame, hereafter referred to as a B-block. Only the region that overlaps with the corresponding block in the current P-frame, hereafter referred to as the P-block, is bi-directionally predicted. The rest of the B-block is forward predicted from the previous frame. Thus only the previous frame is required in the frame store. The information from the P-frame is obtained from the P-block currently being decoded.
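The prediction rule of FIG. 2b can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the array names, the boolean overlap mask, and the rounding of the average are all assumptions:

```python
import numpy as np

def predict_b_block(forward_pred, backward_pred, overlap_mask):
    """Predict a B-block: average the forward and backward predictions
    where the block overlaps the corresponding P-block (mask True),
    and use forward prediction only elsewhere."""
    fwd = forward_pred.astype(int)
    bwd = backward_pred.astype(int)
    avg = (fwd + bwd + 1) // 2  # rounding choice is an assumption
    return np.where(overlap_mask, avg, fwd)
```

The mask would be derived from where the backward motion vector points inside the reconstructed P-block.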
In the PB-frame only the motion vector for the P-block is transmitted to the decoder. The forward and backward motion vectors for the B-block are derived from the P motion vector. A linear motion model is used, and the temporal references of the B and P frames are used to scale the motion vector appropriately. FIG. 3a depicts the motion vector scaling and the formulas are shown below:
MVF=(TRB×MV)/TRP  (1)
MVB=((TRB−TRP)×MV)/TRP  (2)
where
    • MV is the motion vector of the P-block.
    • MVF and MVB are the forward and backward motion vectors for the B-block.
    • TRB is the increment in the temporal reference from the last P-frame to the current B-frame, and
    • TRP is the increment in the temporal reference from the last P-frame to the current P-frame.
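Equations (1) and (2) can be sketched in Python as follows. The function and variable names are illustrative, and integer floor division stands in for whatever division and rounding a real codec would specify:

```python
def scale_motion_vector(mv: int, trb: int, trp: int) -> tuple[int, int]:
    """Derive the forward/backward B-block motion vectors from the
    P-block motion vector MV using the linear motion model."""
    mvf = (trb * mv) // trp          # equation (1): MVF = (TRB x MV) / TRP
    mvb = ((trb - trp) * mv) // trp  # equation (2): MVB = ((TRB - TRP) x MV) / TRP
    return mvf, mvb
```

For a B-frame midway between two P-frames (TRB = 1, TRP = 2), the forward vector is half the P vector and the backward vector is its negation, as the linear model implies.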
Currently the method used in the prior art assumes a linear motion model. However this assumption is not valid in a normal scene where the motion is typically not linear. This is especially true when the camera shakes and when objects are not moving at constant velocities.
A second problem involves the quantization and transmission of the residual of the prediction error in the B-block. Currently the coefficients from the P-block and the B-block are interleaved in some scanning order, which requires the B-block coefficients to be transmitted even when they are all zero. This is not very efficient, as quite often there are no residual coefficients to transmit (all coefficients are zero).
SUMMARY OF THE INVENTION
In order to solve the first problem, the current invention employs a delta motion vector to compensate for the non-linear motion. Thus it becomes necessary for the encoder to perform an additional motion search to obtain the optimum delta motion vector that, when added to the derived motion vectors, results in the best match in the prediction. These delta motion vectors are transmitted to the decoder at the block level only when necessary. A flag is used to indicate to the decoder if there are delta motion vectors present for the B-block.
For the second problem, this invention also uses a flag to indicate if there are coefficients for the B-block to be decoded.
The operation of the Invention is described as follows.
FIG. 3a shows the linear motion model used for the derivation of the forward and backward motion vectors from the P-block motion vector and the temporal reference information. As illustrated in FIG. 3b, this model breaks down when the motion is not linear. The derived forward and backward motion vectors differ from the actual motion vectors when the motion is not linear. This is especially true when objects in the scene are moving at changing velocities.
In the current invention the problem is solved by adding a small delta motion vector to the derived motion vector to compensate for the difference between the derived and true motion vectors. Therefore equations (1) and (2) are now replaced by equations (3) and (4), respectively.
MVF′=(TRB×MV)/TRP+MVDelta  (3)
MVB′=((TRB−TRP)×MV)/TRP−MVDelta  (4)
where
    • MV is the motion vector of the P-block.
    • MVDelta is the delta motion vector.
    • MVF′ and MVB′ are the new forward and backward motion vectors for the B-block according to the current invention.
    • TRB is the increment in the temporal reference from the last P-frame to the current B-frame, and
    • TRP is the increment in the temporal reference from the last P-frame to the current P-frame.
    • Note: Equations (3) and (4) are used for the motion vector in the horizontal as well as the vertical directions. Thus the motion vectors are in pairs and there are actually two independent delta motion vectors, one each for the horizontal and vertical directions.
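Equations (3) and (4), applied independently to the horizontal and vertical components as the note describes, can be sketched as follows (names and the floor-division rounding are assumptions):

```python
def apply_delta(mv: tuple[int, int], trb: int, trp: int,
                delta: tuple[int, int]) -> tuple[tuple[int, int], tuple[int, int]]:
    """Derive the final forward/backward motion vectors of a B-block.
    mv is the (horizontal, vertical) P-block motion vector; delta is the
    (horizontal, vertical) pair of independent delta motion vectors."""
    # equation (3): MVF' = (TRB x MV) / TRP + MVDelta, per component
    mvf = tuple((trb * c) // trp + d for c, d in zip(mv, delta))
    # equation (4): MVB' = ((TRB - TRP) x MV) / TRP - MVDelta, per component
    mvb = tuple(((trb - trp) * c) // trp - d for c, d in zip(mv, delta))
    return mvf, mvb
```

Note that the delta is added to the forward vector and subtracted from the backward vector, keeping the two ends of the motion trajectory consistent.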
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is prior art which illustrates the prediction mode used in the ITU-T Recommendation H.261 Standard.
FIG. 1b is prior art which illustrates the prediction mode used in the ISO-IEC/JTC MPEG Standard.
FIG. 2a illustrates the PB-frame prediction mode.
FIG. 2b illustrates the B-block bi-directional prediction mode.
FIG. 3a illustrates the linear motion model.
FIG. 3b illustrates the non-linear motion model of the current invention.
FIG. 4 illustrates the encoder functionality block diagram.
FIG. 5 illustrates the B-block bi-directional prediction functionality block diagram.
FIG. 6 illustrates the decoder functionality block diagram.
PREFERRED EMBODIMENTS
The preferred embodiment of the current invention is described here. FIG. 4 illustrates the encoding functionality diagram. The present invention deals with the method for deriving the motion vectors for the B-block. The encoding functionality is presented here for completeness of the embodiment.
The encoding functionality block diagram depicts an encoder using motion estimation and compensation for reducing the temporal redundancy in the sequence to be coded. The input sequence is organized into a first frame and pairs of subsequent frames. The first frame, hereafter referred to as the I-frame, is coded independently of all other frames. The pair of subsequent frames, hereafter referred to as a PB-frame, consists of a B-frame followed by a P-frame. The P-frame is forward predicted based on the previously reconstructed I-frame or P-frame, and the B-frame is bi-directionally predicted based on the previously reconstructed I-frame or P-frame and the information in the current P-frame.
The input frame image sequence, 1, is placed in the Frame Memory 2. If the frame is classified as an I-frame or a P-frame it is passed through line 14 to the Reference Memory 3, for use as the reference frame in the motion estimation of the next PB-frame to be predictively encoded. The signal is then passed through line 13 to the Block Sampling module 4, where it is partitioned into spatially non-overlapping blocks of pixel data for further processing.
If the frame is classified as an I-frame, the sampled blocks are passed through line 16 to the DCT module 7. If the frame is classified as a PB-frame, the sampled blocks are passed through line 17 to the Motion Estimation module 5. The Motion Estimation module 5 uses information from the Reference Frame Memory 3 and the current block 17 to obtain the motion vector that provides the best match for the P-block. The motion vector and the locally reconstructed frame, 12, are passed through lines 19 and 20, respectively, to the Motion Compensation module 6. The difference image is formed by subtracting the motion compensated decoded frame, 21, from the current P-block 15. This signal is then passed through line 22 to the DCT module 7.
In the DCT module 7, each block is transformed into DCT domain coefficients. The transform coefficients are passed through line 23 to the Quantization module 8, where they are quantized. The quantized coefficients are then passed through line 24 to the Run-length & Variable Length Coding module 9. Here the coefficients are entropy coded to form the Output Bit Stream 25.
If the current block is an I-block or a P-block, the quantized coefficients are also passed through line 26 to the Inverse Quantization module 10. The output of the Inverse Quantization 10 is then passed through line 27 to the Inverse DCT module 11. If the current block is an I-block then the reconstructed block is placed, via line 28, in the Local Decoded Frame Memory 12. If the current block is a P-block then the output of the Inverse DCT 29 is added to the motion compensated output 21, to form the reconstructed block 30. The reconstructed block 30 is then placed in the Local Decoded Frame Memory 12, for the motion compensation of the subsequent frames.
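The local reconstruction step just described (lines 26 to 30 of FIG. 4) can be sketched as follows. The function name and the clipping to the 8-bit sample range are assumptions, not from the patent:

```python
import numpy as np

def reconstruct_block(idct_out, motion_comp, is_intra):
    """Local decode loop: I-blocks are the inverse DCT output directly;
    P-blocks add the motion compensated prediction before storing the
    block in the Local Decoded Frame Memory."""
    rec = idct_out if is_intra else idct_out + motion_comp
    return np.clip(rec, 0, 255)  # 8-bit clamping is an assumption
```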
After the P-block has been locally reconstructed, the information is passed again to the Motion Compensation Module 6, where the prediction of the B-block is formed. FIG. 5 shows a more detailed functional diagram for the B-block prediction process. The P motion vector derived in the Motion Estimation module 51 is passed through line 57 to the Motion Vector Scaling Module 53. Here the forward and backward motion vectors of the B-block are derived using formulas (1) and (2), respectively. In the present embodiment, an additional motion search around these vectors is performed in the Delta Motion Search module 54 to obtain the delta motion vector. In this embodiment the delta motion vector is obtained by performing the search over all delta motion vector values between −3 and 3. The delta motion vector value that gives the best prediction, in terms of the mean absolute difference between the pixel values of the B-block and the prediction block, is chosen. The prediction is formed in the Bi-directional Motion Compensation module 55, according to FIG. 2b, using the information from the Local Decoded Frame Memory 52 and the Current Reconstructed P-block 59. In the bi-directional prediction, only the region for which information is available in the corresponding P-block is bi-directionally predicted; there, the average of the P-block information and the information from the Local Decoded Frame is used to predict the B-block. The rest of the B-block is predicted using information from the Local Decoded Frame only.
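The delta motion search described above (all delta values between −3 and 3, minimizing the mean absolute difference) can be sketched as follows. `prediction_fn` is a hypothetical callable standing in for the bi-directional prediction of FIG. 2b; all names are illustrative:

```python
import numpy as np

def delta_motion_search(b_block, prediction_fn, search_range=3):
    """Exhaustively try every (dx, dy) delta in [-3, 3] x [-3, 3] and
    keep the one minimizing the mean absolute difference (MAD) between
    the B-block and its prediction."""
    best_delta, best_mad = (0, 0), float("inf")
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            pred = prediction_fn((dx, dy))
            mad = np.mean(np.abs(b_block.astype(int) - pred.astype(int)))
            if mad < best_mad:
                best_mad, best_delta = mad, (dx, dy)
    return best_delta, best_mad
```

This is 49 candidate evaluations per B-block, a modest cost next to the full-range P-block motion search.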
The resulting difference block is then passed through line 22 to the DCT module 7. The DCT coefficients are then passed through line 23 to the Quantization module 8. The result of the Quantization module 8 is passed through line 24 to the Run-length & Variable Length Coding 9. In this module the presence of the delta motion vector and the quantized residual error in the Output Bitstream 25 is indicated by a variable length code, NOB, which is the acronym for No B-block. This flag is generated in the Run-length & Variable Length Coding module 9 based on whether there is residual error from the Quantization module 8 and whether the delta motion vector found in the Delta Motion Search module 54 is non-zero. Table 1 provides the preferred embodiment of the variable length code for the NOB flag. The variable length code of the NOB flag is inserted in the Output Bitstream, 25, prior to the delta motion vector and quantized residual error codes.
TABLE 1
(Variable length code for the NOB flag)

NOB   Quantized Residual Error Coded   Delta Motion Vectors Coded
0     No                               No
10    No                               Yes
110   Yes                              No
111   Yes                              Yes
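Table 1 is a prefix-free code, so the decoder can parse it unambiguously from the front of the bitstream. A sketch, with illustrative names and a string of '0'/'1' characters standing in for the bitstream:

```python
# Maps (residual_coded, delta_coded) to the NOB code of Table 1.
NOB_CODE = {
    (False, False): "0",
    (False, True):  "10",
    (True,  False): "110",
    (True,  True):  "111",
}

def decode_nob(bits: str) -> tuple[tuple[bool, bool], str]:
    """Read one NOB code off the front of a bit string; return the
    (residual_coded, delta_coded) flags and the remaining bits."""
    for flags, code in NOB_CODE.items():
        if bits.startswith(code):
            return flags, bits[len(code):]
    raise ValueError("invalid NOB code")
```

Because the most common case (no B-block residual, no delta vectors) gets the single-bit code "0", the flag costs almost nothing exactly when there is nothing to send, which is the inefficiency the flag is designed to remove.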
FIG. 6 shows the functional block diagram for the decoder. The Input Bit Stream 31 is passed to the Variable Length & Run Length Decoding module 32. The block and side information are extracted in this module. If the frame is a PB-frame then the bitstream is checked for the presence of any delta motion vectors and/or quantized residual error coefficients. The output of the module 32 is passed through line 37 to the Inverse Quantization module 33. The output of the Inverse Quantization 33 is then passed through line 38 to the Inverse DCT module 34. Here the coefficients are transformed back into pixel values.
If the current frame is an I-frame then the output of the Inverse DCT 34 is passed through line 39 and stored in the Frame Memory 42.
If the current frame is a PB-frame, the side information containing the motion vector is passed through line 45 to the Motion Compensation module 36. The Motion Compensation module 36 uses this information and the information in the Local Decoded Memory, 35, to form the motion compensated signal, 44. This signal is then added to the output of the Inverse DCT module 34, to form the reconstruction of the P-block.
The Motion Compensation module 36 then uses the additional information obtained in the reconstructed P-block to obtain the bi-directional prediction for the B-block. The B-block is then reconstructed and placed in the Frame Memory, 42, together with the P-block.
By implementing this invention, the temporal frame rate of the decoded sequences can be effectively doubled at a fraction of the expected cost in bit rate. The delay is similar to that of the same sequence decoded at half the frame rate.
As described above, in the present invention a new predictive coding is used to increase the temporal frame rate and coding efficiency without introducing excessive delay. Currently the motion vector for the blocks in the bi-directionally predicted frame is derived from the motion vector of the corresponding block in the forward predicted frame using a linear motion model. This however is not effective when the motion in the image sequence is not linear. According to this invention, the efficiency of this method can be further improved if a non-linear motion model is used. In this model a delta motion vector is added to or subtracted from the derived forward and backward motion vectors, respectively. The encoder performs an additional search to determine if there is a need for the delta motion vector. The presence of this delta motion vector in the transmitted bitstream is signalled to the decoder, which then takes the appropriate action to make use of the delta motion vector to derive the effective forward and backward motion vectors for the bi-directionally predicted block.

Claims (16)

1. A method for encoding a sequence of video image frames comprising the steps of:
dividing a source sequence into a set of group of pictures, each group of pictures comprising a first frame (I-frame) followed by a plurality of pairs of predictively encoded frames (PB-frame pairs), each PB-frame pair having a corresponding P-block;
dividing each I-frame or PB-frame pair into a plurality of spatially non-overlapping blocks of pixel data;
encoding the blocks from the I-frame (I-blocks) independently from any other frames in the group of pictures;
predictively encoding the blocks from the second frame of the PB-frame pair (P-blocks), based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair;
bi-directionally predictively encoding the blocks from the first frame of the PB-frame pair (B-blocks), based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair and the corresponding P-block in the current PB-frame pair;
deriving a scaled forward motion vector and a scaled backward motion vector of the B-block by scaling the motion vector of the corresponding P-block in the current PB-frame pair;
obtaining a final forward motion vector for the B-block by adding a delta motion vector to the scaled forward motion vector; and
obtaining a final backward motion vector for the B-block by subtracting the delta motion vector from the scaled backward motion vector.
2. A method for encoding a sequence of video image frames according to claim 1, wherein
the scaling of the motion vector is based on a temporal reference of the first and second frames of the PB-frame pair.
3. A method for encoding a sequence of video image frames according to claim 1, further comprising the step of forming an encoded output, wherein the encoded output is a bitstream comprising:
temporal reference information for the first and second frames of the PB-frame pairs;
motion vector information for the P-blocks;
quantized residual error information for the P-blocks;
delta motion vector information for the B-blocks; and
quantized residual error information for the B-blocks.
4. A method for encoding a sequence of video image frames according to claim 3, wherein
the output bitstream contains additional information to indicate the presence of at least one of:
the delta motion vector information for the B-blocks; and
the quantized residual error information for the B-blocks.
5. A method for decoding a sequence of video image frames comprising the steps of:
decoding the compressed video image sequence as a set of group of pictures, each group of pictures comprising an I-frame followed by a plurality of PB-frame pairs, each PB-frame pair having a corresponding P-block;
decoding each I-frame or PB-frame pair into a plurality of spatially non-overlapping blocks of pixel data;
decoding the I-blocks from the I-frame independently from any other frames in the group of pictures;
predictively decoding the P-block from the second frame of the PB-frame pair based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair;
bi-directionally predictively decoding the B-blocks from the first frame of the PB-frame pair based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair and the corresponding P-block in the current PB-frame pair;
deriving a scaled forward motion vector and a scaled backward motion vector for the B-block by scaling the motion vector of the corresponding P-block in the current PB-frame pair;
obtaining a final forward motion vector for the B-block by adding a delta motion vector to the scaled forward motion vector; and
obtaining a final backward motion vector for the B-block by subtracting the delta motion vector from the scaled backward motion vector.
6. A method for decoding a sequence of video image frames according to claim 5, further comprising the step of forming a decoded output, wherein the decoded output is responsive to a bitstream comprising:
temporal reference information for the first and second frames of the PB-frame pairs;
motion vector information for the P-blocks;
quantized residual error information for the P-blocks;
the delta motion vector information for the B-blocks; and
quantized residual error information for the B-blocks.
7. A method for decoding a sequence of video image frames according to claim 6, wherein
the bitstream contains additional information to indicate the presence of at least one of:
the delta motion vector information for the B-blocks; and
the quantized residual error information for the B-block.
8. A method of decoding a sequence of video image frames according to claim 5, wherein
the scaling is based on a temporal reference of the first and second frames of the PB-frame pair.
9. An apparatus for encoding a sequence of video image frames comprising:
means for encoding each frame in a sequence of video image frames into a set of group of pictures, each group of pictures comprising an I-frame followed by a plurality of PB-frame pairs;
means for dividing the I-frame and the PB-frame pair into a plurality of spatially non-overlapping blocks of pixel data;
means for encoding and decoding the I-blocks of the I-frame independently from any other frames in the group of pictures;
means for storing the decoded I-blocks to predictively encode subsequent frames;
means for predictively encoding and decoding the P-blocks of the second frame of the PB-frame pair based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair;
means for storing the decoded P-block to predictively encode subsequent frames;
means for deriving a scaled forward motion vector and a scaled backward motion vector for a B-block by scaling the motion vector of the corresponding P-block in the current PB-frame pair, the B-block being the first frame of the PB-frame pair;
means for obtaining a final forward motion vector for the B-block by adding a delta motion vector to the scaled forward motion vector;
means for obtaining a final backward motion vector for the B-block by subtracting the same delta motion vector from the scaled backward motion vector; and
means for encoding the B-blocks of the first frame of the PB-frame pairs based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair and the corresponding P-block in the current PB-frame pair using the final forward motion vector and the final backward motion vector.
10. An apparatus for decoding a sequence of video image frames comprising:
means for decoding each frame in a sequence of video image frames into a set of group of pictures, each group of pictures comprising an I-frame followed by a plurality of PB-frame pairs;
means for decoding the I-blocks of the I-frame independently of any other frames in the group of pictures;
means for storing the decoded I-blocks to predictively decode subsequent frames;
means for decoding the P-blocks of the second frame of the PB-frame pair based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair;
means for storing the decoded P-blocks to predictively decode subsequent frames;
means for deriving a scaled forward motion vector and a scaled backward motion vector for a B-block by scaling the motion vector of the corresponding P-block in the current PB-frame pair, the B-block being the first frame of the PB-frame pair;
means for obtaining a final forward motion vector for the B-block by adding a delta motion vector to the scaled forward motion vector;
means for obtaining a final backward motion vector for the B-block by subtracting the delta motion vector from the scaled backward motion vector; and
means for decoding the B-blocks of the first frame of the PB-frame pairs based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame pair and the corresponding P-block in the current PB-frame pair using the final forward motion vector and the final backward motion vector.
11. A method for encoding a sequence of video image frames comprising the steps of:
dividing a source sequence into a plurality of groups of pictures, each group of pictures comprising a first frame (I-frame) followed by a plurality of pairs of predictively encoded frames (PB-frame pairs);
dividing each I-frame or PB-frame pair into a plurality of blocks;
encoding the blocks from the I-frame;
predictively encoding the blocks from the second frame of the PB-frame pair;
bi-directionally predictively encoding the blocks from the first frame of a PB-frame pair (B-blocks);
deriving a scaled forward motion vector and a scaled backward motion vector for the B-block;
obtaining a final forward motion vector for the B-block by adding a delta motion vector to the scaled forward motion vector; and
obtaining a final backward motion vector for the B-block by subtracting the delta motion vector from the scaled backward motion vector.
12. An apparatus for encoding a sequence of video image frames comprising:
means for dividing a source sequence into a plurality of groups of pictures, each group of pictures comprising a first frame (I-frame) followed by a plurality of pairs of predictively encoded frames (PB-frame pairs);
means for dividing each I-frame or PB-frame pair into a plurality of blocks;
means for encoding the blocks from the I-frame;
means for predictively encoding the blocks from the second frame of the PB-frame pair;
means for bi-directionally predictively encoding the blocks from the first frame of a PB-frame pair (B-blocks);
means for deriving a scaled forward motion vector and a scaled backward motion vector for the B-block;
means for obtaining a final forward motion vector for the B-block by adding a delta motion vector to the scaled forward motion vector; and
means for obtaining a final backward motion vector for the B-block by subtracting the delta motion vector from the scaled backward motion vector.
13. A method for decoding a compressed video image sequence of a group of pictures including an I-frame followed by a plurality of P-frames and B-frames, comprising the steps of:
decoding a block in the I-frame independently from any other frames in the group of pictures;
predictively decoding a block in a P-frame based on the previous I-frame or a previous P-frame;
bi-directionally predictively decoding a block in a B-frame based on the previous I-frame or a previous P-frame and a block in a P-frame positioned after the B-frame;
deriving a scaled forward motion vector and a scaled backward motion vector for the block in the B-frame by scaling a motion vector of the block in the P-frame positioned after the B-frame;
obtaining a final forward motion vector for the block in the B-frame by adding a delta motion vector to the scaled forward motion vector; and
obtaining a final backward motion vector for the block in the B-frame by adding the delta motion vector to the scaled backward motion vector.
14. A method of decoding a sequence of video image frames according to claim 13, wherein the deriving step includes:
scaling the forward and backward motion vectors based on a temporal reference of the B-frame and the P-frame.
15. A method for decoding a sequence of video image frames according to claim 13, further comprising the step of forming a decoded output, wherein the decoded output is responsive to a bitstream comprising:
temporal reference information for the B-frame and the P-frame;
motion vector information for the block in the P-frame;
quantized residual error information for the block in the P-frame;
the delta motion vector information for the block in the B-frame; and
quantized residual error information for the block in the B-frame.
16. A method for decoding a sequence of video image frames according to claim 15, wherein
the bitstream contains additional information indicating a presence of at least one of
the delta motion vector information for the block in the B-frame; and
the quantized residual error information for the block in the B-frame.
US09/691,858 1995-12-27 2000-10-18 Video coding method and decoding method and devices thereof Expired - Lifetime USRE40080E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/691,858 USRE40080E1 (en) 1995-12-27 2000-10-18 Video coding method and decoding method and devices thereof

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP34060995A JPH09182083A (en) 1995-12-27 1995-12-27 Video image encoding method and decoding method and device therefor
US08/773,574 US5825421A (en) 1995-12-27 1996-12-27 Video coding method and decoding method and devices thereof
US09/691,858 USRE40080E1 (en) 1995-12-27 2000-10-18 Video coding method and decoding method and devices thereof

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/773,574 Reissue US5825421A (en) 1995-12-27 1996-12-27 Video coding method and decoding method and devices thereof

Publications (1)

Publication Number Publication Date
USRE40080E1 true USRE40080E1 (en) 2008-02-19

Family

ID=18338622

Family Applications (3)

Application Number Title Priority Date Filing Date
US08/773,574 Ceased US5825421A (en) 1995-12-27 1996-12-27 Video coding method and decoding method and devices thereof
US09/691,857 Expired - Lifetime USRE39455E1 (en) 1995-12-27 2000-10-18 Video coding method and decoding method and devices thereof
US09/691,858 Expired - Lifetime USRE40080E1 (en) 1995-12-27 2000-10-18 Video coding method and decoding method and devices thereof

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US08/773,574 Ceased US5825421A (en) 1995-12-27 1996-12-27 Video coding method and decoding method and devices thereof
US09/691,857 Expired - Lifetime USRE39455E1 (en) 1995-12-27 2000-10-18 Video coding method and decoding method and devices thereof

Country Status (3)

Country Link
US (3) US5825421A (en)
EP (1) EP0782343A3 (en)
JP (1) JPH09182083A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070041451A1 (en) * 2001-11-06 2007-02-22 Satoshi Kondo Moving picture coding method, and moving picture decoding method
US20160080767A1 (en) * 2008-03-07 2016-03-17 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US10476928B2 (en) * 2014-05-05 2019-11-12 Huawei Technologies Co., Ltd. Network video playback method and apparatus

Families Citing this family (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6957350B1 (en) 1996-01-30 2005-10-18 Dolby Laboratories Licensing Corporation Encrypted and watermarked temporal and resolution layering in advanced television
US6404813B1 (en) * 1997-03-27 2002-06-11 At&T Corp. Bidirectionally predicted pictures or video object planes for efficient and flexible video coding
SG65064A1 (en) 1997-04-09 1999-05-25 Matsushita Electric Ind Co Ltd Image predictive decoding method image predictive decoding apparatus image predictive coding method image predictive coding apparatus and data storage media
US6233356B1 (en) 1997-07-08 2001-05-15 At&T Corp. Generalized scalability for video coder based on video objects
US6993201B1 (en) 1997-07-08 2006-01-31 At&T Corp. Generalized scalability for video coder based on video objects
US6018368A (en) * 1997-07-11 2000-01-25 Samsung Electro-Mechanics Co., Ltd. Scalable encoding apparatus and method with improved function of scaling motion vector
KR100235355B1 (en) * 1997-08-13 1999-12-15 전주범 Improved motion estimator and estimation method thereof
US6625320B1 (en) * 1997-11-27 2003-09-23 British Telecommunications Public Limited Company Transcoding
CA2265089C (en) * 1998-03-10 2007-07-10 Sony Corporation Transcoding system using encoding history information
US6031872A (en) * 1998-06-16 2000-02-29 Daewoo Electronics Co., Ltd. Method and apparatus for encoding a video signal
US6499060B1 (en) * 1999-03-12 2002-12-24 Microsoft Corporation Media coding for loss recovery with remotely predicted data units
US7050503B2 (en) * 1999-04-17 2006-05-23 Pts Corporation Segment-based encoding system using residue coding by basis function coefficients
US7085319B2 (en) * 1999-04-17 2006-08-01 Pts Corporation Segment-based encoding system using segment hierarchies
KR20010071706A (en) * 1999-04-30 2001-07-31 요트.게.아. 롤페즈 Video encoding method with selection of B-frame encoding mode
KR20010071692A (en) * 1999-04-30 2001-07-31 요트.게.아. 롤페즈 Low bit rate video coding method and system
US7170941B2 (en) * 1999-08-13 2007-01-30 Patapsco Designs Inc. Temporal compression
GB9928022D0 (en) * 1999-11-26 2000-01-26 British Telecomm Video coding and decoding
US7286724B2 (en) * 1999-12-06 2007-10-23 Hyundai Curitel, Inc. Method and apparatus for searching, browsing and summarizing moving image data using fidelity for tree-structure moving image hierarchy
US6351545B1 (en) * 1999-12-14 2002-02-26 Dynapel Systems, Inc. Motion picture enhancing system
US8374237B2 (en) 2001-03-02 2013-02-12 Dolby Laboratories Licensing Corporation High precision encoding and decoding of video images
JP3674535B2 (en) * 2001-05-08 2005-07-20 日本電気株式会社 Video coding method and apparatus
US7266150B2 (en) 2001-07-11 2007-09-04 Dolby Laboratories, Inc. Interpolation of video compression frames
US6816552B2 (en) * 2001-07-11 2004-11-09 Dolby Laboratories Licensing Corporation Interpolation of video compression frames
US8111754B1 (en) 2001-07-11 2012-02-07 Dolby Laboratories Licensing Corporation Interpolation of video compression frames
US20030112863A1 (en) 2001-07-12 2003-06-19 Demos Gary A. Method and system for improving compressed image chroma information
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
AU2003213360A1 (en) * 2002-03-14 2003-09-22 Matsushita Electric Industrial Co., Ltd. Motion vector detection method
US7027510B2 (en) * 2002-03-29 2006-04-11 Sony Corporation Method of estimating backward motion vectors within a video sequence
US20040001546A1 (en) * 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US7154952B2 (en) 2002-07-19 2006-12-26 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
CN1748427A (en) * 2003-02-04 2006-03-15 皇家飞利浦电子股份有限公司 Predictive encoding of motion vectors including a flag notifying the presence of coded residual motion vector data
US8064520B2 (en) 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US7577198B2 (en) * 2003-09-07 2009-08-18 Microsoft Corporation Number of reference fields for an interlaced forward-predicted field
US8085844B2 (en) * 2003-09-07 2011-12-27 Microsoft Corporation Signaling reference frame distances
US7889792B2 (en) * 2003-12-24 2011-02-15 Apple Inc. Method and system for video encoding using a variable number of B frames
CN100562109C (en) * 2004-01-16 2009-11-18 Nxp股份有限公司 Video information is carried out the method for compression/de-compression
JP2007536817A (en) * 2004-05-04 2007-12-13 クゥアルコム・インコーポレイテッド Method and apparatus for motion compensated frame rate upconversion
US20050286629A1 (en) * 2004-06-25 2005-12-29 Adriana Dumitras Coding of scene cuts in video sequences using non-reference frames
US8948262B2 (en) 2004-07-01 2015-02-03 Qualcomm Incorporated Method and apparatus for using frame rate up conversion techniques in scalable video coding
EP2096873A3 (en) 2004-07-20 2009-10-14 Qualcomm Incorporated Method and apparatus for encoder assisted-frame rate conversion (EA-FRUC) for video compression
TW200629899A (en) * 2004-07-20 2006-08-16 Qualcomm Inc Method and apparatus for frame rate up conversion with multiple reference frames and variable block sizes
US8553776B2 (en) 2004-07-21 2013-10-08 Qualcomm Incorporated Method and apparatus for motion vector assignment
US7561620B2 (en) * 2004-08-03 2009-07-14 Microsoft Corporation System and process for compressing and decompressing multiple, layered, video streams employing spatial and temporal encoding
TWI245548B (en) * 2004-10-20 2005-12-11 Inst Information Industry Method and device for video decoding
US8634413B2 (en) 2004-12-30 2014-01-21 Microsoft Corporation Use of frame caching to improve packet loss recovery
JP2006279573A (en) * 2005-03-29 2006-10-12 Sanyo Electric Co Ltd Encoder and encoding method, and decoder and decoding method
US20070076796A1 (en) * 2005-09-27 2007-04-05 Fang Shi Frame interpolation using more accurate motion information
US8937997B2 (en) 2006-03-16 2015-01-20 Apple Inc. Scalable video coding/multiplexing compatible with non-scalable decoders
US8750387B2 (en) 2006-04-04 2014-06-10 Qualcomm Incorporated Adaptive encoder-assisted frame rate up conversion
US7456760B2 (en) * 2006-09-11 2008-11-25 Apple Inc. Complexity-aware encoding
JP5134001B2 (en) * 2006-10-18 2013-01-30 アップル インコーポレイテッド Scalable video coding with lower layer filtering
US8254455B2 (en) 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
US8526489B2 (en) * 2007-09-14 2013-09-03 General Instrument Corporation Personal video recorder
WO2009070508A1 (en) * 2007-11-30 2009-06-04 Dolby Laboratories Licensing Corp. Temporally smoothing a motion estimate
US20090304086A1 (en) * 2008-06-06 2009-12-10 Apple Inc. Method and system for video coder and decoder joint optimization
FR2933565A1 (en) * 2008-07-01 2010-01-08 France Telecom METHOD AND DEVICE FOR ENCODING AN IMAGE SEQUENCE USING TEMPORAL PREDICTION, SIGNAL, DATA MEDIUM, DECODING METHOD AND DEVICE, AND CORRESPONDING COMPUTER PROGRAM PRODUCT
US9445121B2 (en) 2008-08-04 2016-09-13 Dolby Laboratories Licensing Corporation Overlapped block disparity estimation and compensation architecture
US8189666B2 (en) 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
US8976856B2 (en) 2010-09-30 2015-03-10 Apple Inc. Optimized deblocking filters
KR101187530B1 (en) * 2011-03-02 2012-10-02 한국과학기술원 Rendering strategy for monoscopic, stereoscopic and multi-view computer generated imagery, system using the same and recording medium for the same
US9232230B2 (en) * 2012-03-21 2016-01-05 Vixs Systems, Inc. Method and device to identify motion vector candidates using a scaled motion search
JP7279939B2 (en) * 2016-09-21 2023-05-23 カカドゥ アール アンド ディー ピーティーワイ リミテッド Base Fixed Models and Inference for Video and Multiview Imagery Compression and Upsampling
CN112632426B (en) * 2020-12-22 2022-08-30 新华三大数据技术有限公司 Webpage processing method and device

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5136378A (en) * 1990-09-07 1992-08-04 Matsushita Electric Industrial Co., Ltd. Moving picture coding apparatus
US5144426A (en) * 1989-10-13 1992-09-01 Matsushita Electric Industrial Co., Ltd. Motion compensated prediction interframe coding system
US5150432A (en) * 1990-03-26 1992-09-22 Kabushiki Kaisha Toshiba Apparatus for encoding/decoding video signals to improve quality of a specific region
US5155593A (en) * 1989-09-27 1992-10-13 Sony Corporation Video signal coding method
US5267334A (en) * 1991-05-24 1993-11-30 Apple Computer, Inc. Encoding/decoding moving images with forward and backward keyframes for forward and reverse display
US5293229A (en) * 1992-03-27 1994-03-08 Matsushita Electric Corporation Of America Apparatus and method for processing groups of fields in a video data compression system
US5315326A (en) * 1991-04-26 1994-05-24 Victor Company Of Japan, Ltd. Efficient coding/decoding apparatuses for processing digital image signal
US5361105A (en) * 1993-03-05 1994-11-01 Matsushita Electric Corporation Of America Noise reduction system using multi-frame motion estimation, outlier rejection and trajectory correction
US5386234A (en) * 1991-11-13 1995-01-31 Sony Corporation Interframe motion predicting method and picture signal coding/decoding apparatus
US5412428A (en) * 1992-12-28 1995-05-02 Sony Corporation Encoding method and decoding method of color signal component of picture signal having plurality resolutions
EP0651574A1 (en) 1993-03-24 1995-05-03 Sony Corporation Method and apparatus for coding/decoding motion vector, and method and apparatus for coding/decoding image signal
US5436665A (en) 1992-03-03 1995-07-25 Kabushiki Kaisha Toshiba Motion picture coding apparatus
US5467136A (en) 1991-05-31 1995-11-14 Kabushiki Kaisha Toshiba Video decoder for determining a motion vector from a scaled vector and a difference vector
US5481310A (en) * 1991-08-29 1996-01-02 Sharp Kabushiki Kaisha Image encoding apparatus
US5905534A (en) * 1993-07-12 1999-05-18 Sony Corporation Picture decoding and encoding method and apparatus for controlling processing speeds
US6104753A (en) * 1996-02-03 2000-08-15 Lg Electronics Inc. Device and method for decoding HDTV video
US6184935B1 (en) * 1997-03-12 2001-02-06 Matsushita Electric Industrial, Co. Ltd. Upsampling filter and half-pixel generator for an HDTV downconversion system
US6219383B1 (en) * 1997-06-30 2001-04-17 Daewoo Electronics Co., Ltd. Method and apparatus for selectively detecting motion vectors of a wavelet transformed video signal


Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
"Recommendation H.261-Video Codec for Audiovisual Services at p×64 kbit/s", International Telegraph and Telephone Consultative Committee, Study Group XV-Report R 37, Aug. 1990.
"Transmission of Non-Telephone Signals Information Technology-Generic Coding of Moving Pictures and Associated Audio Information: Video", ITU-T Telecommunication Standardization Sector of ITU, XX,XX, Jul. 1, 1995, pp. A-B, I-VIII, 1, XP000198491.
A. Nagata, "Moving Picture Coding System for Digital Storage Media Using Hybrid Coding", vol. 2, No. 2, Aug. 1, 1990, pp. 109-116, XP000243471.
European Search Report dated Nov. 8, 2000, application No. EP 96120920.
K. Rijkse, "H-263: Video Coding For Low-Bit-Rate Communication", IEEE Communications Magazine vol. 34, No. 12, Dec. 1, 1996, pp. 42-45, XP000636452.
K. Rijkse, "ITU Standardization of Very Low Bitrate Video Coding Algorithms", vol. 7, No. 4, pp. 553-565, XP004047099, Nov. 1, 1995.
Kozu et al., "A New Technique for Block-Based Motion Compensation", pp. V/217-20, vol. 5, XP002151093, Apr. 1, 1994.
Secretariat: Japan (JISC), "Coded Representation of Audio, Picture, Multimedia and Hypermedia Information," ISO/IEC JTC 1/SC 29 N 313, dated May 20, 1993.
W. Lynch, "Bidirectional Motion Estimation Based On P Frame Motion Vectors and Area Overlap", vol. CONF. 17, Mar. 23, 1992, pp. 445-448, XP000378964.

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9241161B2 (en) 2001-11-06 2016-01-19 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US9338448B2 (en) 2001-11-06 2016-05-10 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US20080205522A1 (en) * 2001-11-06 2008-08-28 Satoshi Kondo Moving picture coding method, and moving picture decoding method
US20100014589A1 (en) * 2001-11-06 2010-01-21 Satoshi Kondo Moving picture coding method, and moving picture decoding method
US20100020873A1 (en) * 2001-11-06 2010-01-28 Satoshi Kondo Moving picture coding method, and moving picture decoding method
US8107533B2 (en) 2001-11-06 2012-01-31 Panasonic Corporation Moving picture coding method, and moving picture decoding method
US8126056B2 (en) 2001-11-06 2012-02-28 Panasonic Corporation Moving picture coding method, and moving picture decoding method
US8126057B2 (en) * 2001-11-06 2012-02-28 Panasonic Corporation Moving picture coding method, and moving picture decoding method
US8194747B2 (en) 2001-11-06 2012-06-05 Panasonic Corporation Moving picture coding method, and moving picture decoding method
US8213517B2 (en) 2001-11-06 2012-07-03 Panasonic Corporation Moving picture coding method, and moving picture decoding method
US8265153B2 (en) 2001-11-06 2012-09-11 Panasonic Corporation Moving picture coding method, and moving picture decoding method
US8964839B2 (en) 2001-11-06 2015-02-24 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US9078003B2 (en) 2001-11-06 2015-07-07 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US9241162B2 (en) 2001-11-06 2016-01-19 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US9578323B2 (en) 2001-11-06 2017-02-21 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US9462267B2 (en) 2001-11-06 2016-10-04 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US9344714B2 (en) 2001-11-06 2016-05-17 Panasonic Intellectual Property Corporation Of America Moving picture coding method, and moving picture decoding method
US20070041452A1 (en) * 2001-11-06 2007-02-22 Satoshi Kondo Moving picture coding method, and moving picture decoding method
US20070041451A1 (en) * 2001-11-06 2007-02-22 Satoshi Kondo Moving picture coding method, and moving picture decoding method
US20160080769A1 (en) * 2008-03-07 2016-03-17 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US20160080764A1 (en) * 2008-03-07 2016-03-17 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US20160080767A1 (en) * 2008-03-07 2016-03-17 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US20160080761A1 (en) * 2008-03-07 2016-03-17 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US10244254B2 (en) 2008-03-07 2019-03-26 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US10334271B2 (en) * 2008-03-07 2019-06-25 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US10341679B2 (en) * 2008-03-07 2019-07-02 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US10412409B2 (en) * 2008-03-07 2019-09-10 Sk Planet Co., Ltd. Encoding system using motion estimation and encoding method using motion estimation
US10476928B2 (en) * 2014-05-05 2019-11-12 Huawei Technologies Co., Ltd. Network video playback method and apparatus

Also Published As

Publication number Publication date
EP0782343A2 (en) 1997-07-02
US5825421A (en) 1998-10-20
JPH09182083A (en) 1997-07-11
USRE39455E1 (en) 2007-01-02
EP0782343A3 (en) 2000-12-20

Similar Documents

Publication Publication Date Title
USRE40080E1 (en) Video coding method and decoding method and devices thereof
USRE43238E1 (en) Picture signal transmitting method and apparatus
EP1672926B1 (en) Bi-directional predicting method for video coding/decoding
US5708473A (en) Two stage video film compression method and system
US8259805B2 (en) Method and apparatus for generating coded picture data and for decoding coded picture data
KR100592651B1 (en) Transcoding
AU691268B2 (en) Image coded data re-encoding apparatus
US5500678A (en) Optimized scanning of transform coefficients in video coding
US7356081B1 (en) Bidirectionally predicted pictures or video object planes for efficient and flexible video coding
EP0618734A2 (en) Picture signal processing
US7769087B2 (en) Picture level adaptive frame/field coding for digital video content
US7359558B2 (en) Spatial scalable compression
JP3358835B2 (en) Image coding method and apparatus
EP0616472B1 (en) Transmission and decoding of picture signals
US5563593A (en) Video coding with optimized low complexity variable length codes
US7088772B2 (en) Method and apparatus for updating motion vector memories
US20060133475A1 (en) Video coding
JP2001028756A (en) Method and device for executing selection between intra- frame coding mode and inter-frame coding mode in context base
KR100415494B1 (en) Image encoding method and apparatus, recording apparatus, video signal encoding apparatus, processing apparatus and method, video data processing apparatus and method
US7373004B2 (en) Apparatus for constant quality rate control in video compression and target bit allocator thereof
EP0577365B1 (en) Encoding of picture signals
US6256349B1 (en) Picture signal encoding method and apparatus, picture signal transmitting method, picture signal decoding method and apparatus and recording medium
JP2002543715A (en) Low bit rate video encoding method and system
JPH08251582A (en) Encoded data editing device
JP3356413B2 (en) Image decoding method and apparatus

Legal Events

Date Code Title Description
FEPP Fee payment procedure Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
FPAY Fee payment Year of fee payment: 12