|Publication number||USRE40080 E1|
|Publication type||Grant|
|Application number||US 09/691,858|
|Publication date||Feb. 19, 2008|
|Filing date||Oct. 18, 2000|
|Priority date||Dec. 27, 1995|
|Fee status||Paid|
|Publication number||09691858, 691858, US RE40080 E1, US RE40080E1, US-E1-RE40080, USRE40080 E1, USRE40080E1|
|Inventors||Thiow Keng Tan|
|Original Assignee||Matsushita Electric Industrial Co., Ltd.|
This application is a divisional reissue application of U.S. Pat. No. 5,825,421, issued Oct. 20, 1998. This application also has a related reissue application Ser. No. 09/691,857, filed on Oct. 18, 2000.
1. Field of the Invention
This invention can be used in low bit rate video coding for tele-communicative applications. It improves the temporal frame rate of the decoder output as well as the overall picture quality.
2. Related art of the Invention
In a typical hybrid transform coding algorithm, such as ITU-T Recommendation H.261 and MPEG, motion compensation is used to reduce the amount of temporal redundancy in the sequence. In the H.261 coding scheme, the frames are coded using only forward prediction, hereafter referred to as P-frames. In the MPEG coding scheme, some frames are coded using bi-directional prediction, hereafter referred to as B-frames. B-frames improve the efficiency of the coding scheme. Here, H.261 refers to ITU-T Recommendation H.261 (formerly CCITT Recommendation H.261), Video codec for audiovisual services at p×64 kbit/s, Geneva, 1990, and MPEG refers to ISO/IEC 11172-2:1993, Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s—Part 2: Video.
However, bi-directional prediction introduces delay in encoding and decoding, making it unsuitable for applications in communicative services where delay is an important parameter.
In the PB-frame, only the motion vectors for the P-block are transmitted to the decoder. The forward and backward motion vectors for the B-block are derived from the P motion vectors. A linear motion model is used, and the temporal references of the B- and P-frames are used to scale the motion vectors appropriately.
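The linear derivation described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name, the integer rounding, and the exact scaling form are assumptions, not the patent's definition; `trb` and `trd` stand for the temporal distances from the previous reference frame to the B-frame and to the P-frame.

```python
def derive_b_vectors(mv_p, trb, trd):
    """Derive (forward, backward) motion vectors for a B-block from the
    transmitted P-block vector, assuming linear motion.

    mv_p -- (x, y) motion vector of the co-located P-block
    trb  -- temporal distance from the previous reference to the B-frame
    trd  -- temporal distance from the previous reference to the P-frame
    """
    mvx, mvy = mv_p
    # Scale the P vector by the relative temporal position of the B-frame.
    forward = (trb * mvx // trd, trb * mvy // trd)
    # The backward vector points from the B-frame toward the P-frame.
    backward = ((trb - trd) * mvx // trd, (trb - trd) * mvy // trd)
    return forward, backward
```

For a P vector of (4, -2) with the B-frame halfway between the references (trb=1, trd=2), the derived forward vector is (2, -1) and the backward vector is (-2, 1).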
Currently the method used in the prior art assumes a linear motion model. However, this assumption is not valid in a normal scene, where the motion is typically not linear. This is especially true when the camera shakes and when objects are not moving at constant velocities.
A second problem involves the quantization and transmission of the residual of the prediction error in the B-block. Currently the coefficients from the P-block and the B-block are interleaved in some scanning order, which requires the B-block coefficients to be transmitted even when they are all zero. This is not very efficient, as it is quite common that there are no residual coefficients to transmit (all coefficients are zero).
In order to solve the first problem, the current invention employs a delta motion vector to compensate for the non-linear motion. It thus becomes necessary for the encoder to perform an additional motion search to obtain the optimum delta motion vector that, when added to the derived motion vectors, results in the best match in the prediction. These delta motion vectors are transmitted to the decoder at the block level only when necessary. A flag is used to indicate to the decoder whether delta motion vectors are present for the B-block.
For the second problem, this invention also uses a flag to indicate if there are coefficients for the B-block to be decoded.
The operation of the Invention is described as follows.
In the current invention the problem is solved by adding a small delta motion vector to the derived motion vector to compensate for the difference between the derived and true motion vectors. Therefore the equations in (1) and (2) are now replaced by equations (3) and (4), respectively.
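Equations (1) through (4) are not reproduced in this text, so the following sketch only illustrates the idea as stated: the delta vector is added to the derived forward vector and subtracted from the derived backward vector. The linear scaling form and all names here are assumptions, not the patent's equations.

```python
def derive_b_vectors_with_delta(mv_p, trb, trd, delta):
    """Derive the B-block's (forward, backward) motion vectors, with a
    small delta motion vector correcting for non-linear motion."""
    mvx, mvy = mv_p
    dx, dy = delta
    # Linearly scaled vectors, corrected by the transmitted delta:
    # added to the forward vector, subtracted from the backward vector.
    forward = (trb * mvx // trd + dx, trb * mvy // trd + dy)
    backward = ((trb - trd) * mvx // trd - dx, (trb - trd) * mvy // trd - dy)
    return forward, backward
```

With a zero delta this reduces to the purely linear derivation; a non-zero delta shifts both predictions in opposite directions to track the true motion.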
The preferred embodiment of the current invention is described here.
The encoding functionality block diagram depicts an encoder using motion estimation and compensation to reduce the temporal redundancy in the sequence to be coded. The input sequence is organized into a first frame and pairs of subsequent frames. The first frame, hereafter referred to as the I-frame, is coded independently of all other frames. The pair of subsequent frames, hereafter referred to as a PB-frame, consists of a B-frame followed by a P-frame. The P-frame is forward predicted based on the previously reconstructed I-frame or P-frame, and the B-frame is bi-directionally predicted based on the previously reconstructed I-frame or P-frame and the information in the current P-frame.
The input frame image sequence, 1, is placed in the Frame Memory 2. If the frame is classified as an I-frame or a P-frame it is passed through line 14 to the Reference Memory 3, for use as the reference frame in the motion estimation of the next PB-frame to be predictively encoded. The signal is then passed through line 13 to the Block Sampling module 4, where it is partitioned into spatially non-overlapping blocks of pixel data for further processing.
If the frame is classified as an I-frame, the sampled blocks are passed through line 16 to the DCT module 7. If the frame is classified as a PB-frame, the sampled blocks are passed through line 17 to the Motion Estimation module 5. The Motion Estimation module 5 uses information from the Reference Frame Memory 3 and the current block 17 to obtain the motion vector that provides the best match for the P-block. The motion vector and the local reconstructed frame, 12, are passed through lines 19 and 20, respectively, to the Motion Compensation module 6. The difference image is formed by subtracting the motion compensated decoded frame, 21, from the current P-block 15. This signal is then passed through line 22 to the DCT module 7.
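The best-match search performed by the Motion Estimation module can be illustrated with a minimal exhaustive sum-of-absolute-differences search. This is a generic sketch under assumed names, not the module's actual implementation, which the patent does not specify at this level of detail.

```python
def sad(ref, block, top, left):
    """Sum of absolute differences between `block` and the same-sized
    region of `ref` whose top-left corner is at (top, left)."""
    return sum(
        abs(ref[top + i][left + j] - block[i][j])
        for i in range(len(block))
        for j in range(len(block[0]))
    )

def best_match(ref, block, top, left, search=2):
    """Exhaustive search in a +/-search window around (top, left);
    returns the displacement (dy, dx) with the lowest SAD, and that SAD.
    The winning displacement is the block's motion vector."""
    h, w = len(block), len(block[0])
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > len(ref) or x + w > len(ref[0]):
                continue  # candidate falls outside the reference frame
            s = sad(ref, block, y, x)
            if best_sad is None or s < best_sad:
                best_sad, best = s, (dy, dx)
    return best, best_sad
```

A real encoder would use a larger window and a faster search pattern; the exhaustive loop is only the simplest correct form of the idea.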
In the DCT module 7, each block is transformed into DCT domain coefficients. The transform coefficients are passed through line 23 to the Quantization module 8, where they are quantized. The quantized coefficients are then passed through line 24 to the Run-length & Variable Length Coding module 9. Here the coefficients are entropy coded to form the Output Bit Stream 25.
If the current block is an I-block or a P-block, the quantized coefficients are also passed through line 26 to the Inverse Quantization module 10. The output of the Inverse Quantization 10 is then passed through line 27 to the Inverse DCT module 11. If the current block is an I-block, the reconstructed block is placed, via line 28, in the Local Decoded Frame Memory 12. If the current block is a P-block, the output of the Inverse DCT 29 is added to the motion compensated output 21 to form the reconstructed block 30. The reconstructed block 30 is then placed in the Local Decoded Frame Memory 12 for the motion compensation of the subsequent frames.
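The quantization and inverse quantization round trip that produces the locally decoded frame can be illustrated with a simple uniform quantizer. This is a generic sketch; the actual modules operate on DCT coefficients with a standard-specific quantizer, and the names and step size here are assumptions.

```python
def quantize(coeffs, qstep):
    """Uniform quantization of transform coefficients to integer levels."""
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Inverse quantization: reconstruct approximate coefficient values."""
    return [level * qstep for level in levels]

# Round trip: the locally decoded frame is built from the dequantized
# values, so the encoder's reference stays in sync with what the
# decoder will reconstruct, despite the lossy quantization step.
levels = quantize([9, -7, 1], 4)       # -> [2, -2, 0]
reconstructed = dequantize(levels, 4)  # -> [8, -8, 0]
```

The point of feeding the encoder its own dequantized output is drift avoidance: both ends predict from the same lossy reconstruction rather than from the original frame.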
After the P-block has been locally reconstructed, the information is passed again to the Motion Compensation module 6, where the prediction of the B-block is formed.
The resulting difference block is then passed through line 22 to the DCT module 7. The DCT coefficients are then passed through line 23 to the Quantization module 8. The result of the Quantization module 8 is passed through line 24 to the Run-length & Variable Length Coding module 9. In this module the presence of the delta motion vector and the quantized residual error in the Output Bitstream 25 is indicated by a variable length code, NOB, which is the acronym for No B-block. This flag is generated in the Run-length & Variable Length Coding module 9 based on whether there is residual error from the Quantization module 8 and whether the delta motion vector found in the Delta Motion Search module 54 is non-zero. Table 1 provides the preferred embodiment of the variable length code for the NOB flag. The variable length code of the NOB flag is inserted in the Output Bitstream, 25, prior to the delta motion vector and quantized residual error codes.
Table 1: Variable length code for the NOB flag
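The decision that drives the NOB flag can be sketched as follows. The function name and argument shapes are assumptions for illustration; the actual variable length code assigned to each outcome is the one given in Table 1.

```python
def nob_flag(delta_mv, quantized_coeffs):
    """Return True when the B-block carries no data to transmit: the
    delta motion vector is zero and every quantized residual
    coefficient is zero.  In that case only the NOB code itself is
    written to the bitstream, and the delta vector and coefficient
    codes are skipped entirely."""
    no_delta = delta_mv == (0, 0)
    no_residual = all(c == 0 for c in quantized_coeffs)
    return no_delta and no_residual
```

Because all-zero B-blocks are common, signalling this one flag up front avoids transmitting interleaved zero coefficients, which is the inefficiency identified earlier in the description.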
If the current frame is an I-frame, the output of the Inverse DCT 34 is passed through line 39 and stored in the Frame Memory 42.
If the current frame is a PB-frame, the side information containing the motion vectors is passed through line 45 to the Motion Compensation module 36. The Motion Compensation module 36 uses this information and the information in the Local Decoded Memory, 35, to form the motion compensated signal, 44. This signal is then added to the output of the Inverse DCT module 34 to form the reconstruction of the P-block.
The Motion Compensation module 36, then uses the additional information obtained in the reconstructed P-block to obtain the bi-directional prediction for the B-block. The B-block is then reconstructed and placed in the Frame Memory, 42, together with the P-block.
By implementing this invention, the temporal frame rate of the decoded sequences can be effectively doubled at a fraction of the expected cost in bit rate. The delay is similar to that of the same sequence decoded at half the frame rate.
As described above, in the present invention a new predictive coding scheme is used to increase the temporal frame rate and coding efficiency without introducing excessive delay. Currently the motion vector for the blocks in the bi-directionally predicted frame is derived from the motion vector of the corresponding block in the forward predicted frame using a linear motion model. This, however, is not effective when the motion in the image sequence is not linear. According to this invention, the efficiency of this method can be further improved if a non-linear motion model is used. In this model a delta motion vector is added to or subtracted from the derived forward and backward motion vector, respectively. The encoder performs an additional search to determine if there is a need for the delta motion vector. The presence of this delta motion vector in the transmitted bitstream is signalled to the decoder, which then takes the appropriate action to make use of the delta motion vector to derive the effective forward and backward motion vectors for the bi-directionally predicted block.
|Cited patent||Filing date||Publication date||Applicant||Title|
|US5136378 *||Aug. 15, 1991||Aug. 4, 1992||Matsushita Electric Industrial Co., Ltd.||Moving picture coding apparatus|
|US5144426 *||Oct. 12, 1990||Sept. 1, 1992||Matsushita Electric Industrial Co., Ltd.||Motion compensated prediction interframe coding system|
|US5150432 *||Mar. 22, 1991||Sept. 22, 1992||Kabushiki Kaisha Toshiba||Apparatus for encoding/decoding video signals to improve quality of a specific region|
|US5155593 *||Sept. 27, 1990||Oct. 13, 1992||Sony Corporation||Video signal coding method|
|US5267334 *||Jan. 21, 1993||Nov. 30, 1993||Apple Computer, Inc.||Encoding/decoding moving images with forward and backward keyframes for forward and reverse display|
|US5293229 *||Mar. 27, 1992||Mar. 8, 1994||Matsushita Electric Corporation Of America||Apparatus and method for processing groups of fields in a video data compression system|
|US5315326 *||Apr. 24, 1992||May 24, 1994||Victor Company Of Japan, Ltd.||Efficient coding/decoding apparatuses for processing digital image signal|
|US5361105 *||Mar. 5, 1993||Nov. 1, 1994||Matsushita Electric Corporation Of America||Noise reduction system using multi-frame motion estimation, outlier rejection and trajectory correction|
|US5386234 *||Nov. 5, 1992||Jan. 31, 1995||Sony Corporation||Interframe motion predicting method and picture signal coding/decoding apparatus|
|US5412428 *||Dec. 9, 1993||May 2, 1995||Sony Corporation||Encoding method and decoding method of color signal component of picture signal having plurality resolutions|
|US5436665||Nov. 19, 1993||July 25, 1995||Kabushiki Kaisha Toshiba||Motion picture coding apparatus|
|US5467136||Feb. 17, 1994||Nov. 14, 1995||Kabushiki Kaisha Toshiba||Video decoder for determining a motion vector from a scaled vector and a difference vector|
|US5481310 *||Apr. 22, 1994||Jan. 2, 1996||Sharp Kabushiki Kaisha||Image encoding apparatus|
|US5905534 *||July 12, 1994||May 18, 1999||Sony Corporation||Picture decoding and encoding method and apparatus for controlling processing speeds|
|US6104753 *||Feb. 3, 1997||Aug. 15, 2000||Lg Electronics Inc.||Device and method for decoding HDTV video|
|US6184935 *||Mar. 11, 1998||Feb. 6, 2001||Matsushita Electric Industrial, Co. Ltd.||Upsampling filter and half-pixel generator for an HDTV downconversion system|
|US6219383 *||June 8, 1998||Apr. 17, 2001||Daewoo Electronics Co., Ltd.||Method and apparatus for selectively detecting motion vectors of a wavelet transformed video signal|
|EP0651574A1||Mar. 24, 1994||May 3, 1995||Sony Corporation||Method and apparatus for coding/decoding motion vector, and method and apparatus for coding/decoding image signal|
|1||"Recommendation H.261-Video Codec for Audiovisual Services at p×64 kbit/s", International Telegraph and Telephone Consultative Committee, Study Group XV-Report R 37, Aug. 1990.|
|2||"Transmission of Non-Telephone Signals Information Technology-Generic Coding of Moving Pictures and Associated Audio Information: Video", ITU-T Telecommunication Standardization Sector of ITU, XX, XX, Jul. 1, 1995, pp. A-B, I-VIII, 1, XP000198491.|
|3||A. Nagata, "Moving Picture Coding System for Digital Storage Media Using Hybrid Coding", vol. 2, No. 2, Aug. 1, 1990, pp. 109-116, XP000243471.|
|4||European Search Report dated Nov. 8, 2000, application No. EP 96120920.|
|5||K. Rijkse, "H-263: Video Coding For Low-Bit-Rate Communication", IEEE Communications Magazine vol. 34, No. 12, Dec. 1, 1996, pp. 42-45, XP000636452.|
|6||K. Rijkse, "ITU Standardization of Very Low Bitrate Video Coding Algorithms", vol. 7, No. 4, pp. 553-565, XP004047099, Nov. 1, 1995.|
|7||Kozu et al., "A New Technique for Block-Based Motion Compensation", pp. V/217-20, vol. 5, XP002151093, Apr. 1, 1994.|
|8||Secretariat: Japan (JISC), "Coded Representation of Audio, Picture, Multimedia and Hypermedia Information," ISO/IEC JTC 1/SC 29 N 313, dated May 20, 1993.|
|9||W. Lynch, "Bidirectional Motion Estimation Based On P Frame Motion Vectors and Area Overlap", vol. CONF. 17, Mar. 23, 1992, pp. 445-448, XP000378964.|
|Citing patent||Filing date||Publication date||Applicant||Title|
|US8107533||Oct. 26, 2006||Jan. 31, 2012||Panasonic Corporation||Moving picture coding method, and moving picture decoding method|
|US8126056||Oct. 31, 2007||Feb. 28, 2012||Panasonic Corporation||Moving picture coding method, and moving picture decoding method|
|US8126057 *||Oct. 31, 2007||Feb. 28, 2012||Panasonic Corporation||Moving picture coding method, and moving picture decoding method|
|US8194747||Sept. 30, 2009||June 5, 2012||Panasonic Corporation||Moving picture coding method, and moving picture decoding method|
|US8213517||Oct. 26, 2006||July 3, 2012||Panasonic Corporation||Moving picture coding method, and moving picture decoding method|
|US8265153||Sept. 30, 2009||Sept. 11, 2012||Panasonic Corporation||Moving picture coding method, and moving picture decoding method|
|US8964839||June 1, 2012||Feb. 24, 2015||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US9078003||Dec. 31, 2014||July 7, 2015||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US9241161||June 1, 2015||Jan. 19, 2016||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US9241162||June 1, 2015||Jan. 19, 2016||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US9338448||Dec. 9, 2015||May 10, 2016||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US9344714||Dec. 9, 2015||May 17, 2016||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US9462267||Apr. 14, 2016||Oct. 4, 2016||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US9578323||Aug. 16, 2016||Feb. 21, 2017||Panasonic Intellectual Property Corporation Of America||Moving picture coding method, and moving picture decoding method|
|US20070041451 *||Oct. 26, 2006||Feb. 22, 2007||Satoshi Kondo||Moving picture coding method, and moving picture decoding method|
|US20070041452 *||Oct. 26, 2006||Feb. 22, 2007||Satoshi Kondo||Moving picture coding method, and moving picture decoding method|
|US20080205522 *||Oct. 31, 2007||Aug. 28, 2008||Satoshi Kondo||Moving picture coding method, and moving picture decoding method|
|US20100014589 *||Sept. 30, 2009||Jan. 21, 2010||Satoshi Kondo||Moving picture coding method, and moving picture decoding method|
|US20100020873 *||Sept. 30, 2009||Jan. 28, 2010||Satoshi Kondo||Moving picture coding method, and moving picture decoding method|
|US20160080761 *||Nov. 20, 2015||Mar. 17, 2016||Sk Planet Co., Ltd.||Encoding system using motion estimation and encoding method using motion estimation|
|US20160080764 *||Nov. 20, 2015||Mar. 17, 2016||Sk Planet Co., Ltd.||Encoding system using motion estimation and encoding method using motion estimation|
|US20160080767 *||Nov. 20, 2015||Mar. 17, 2016||Sk Planet Co., Ltd.||Encoding system using motion estimation and encoding method using motion estimation|
|US20160080769 *||Nov. 20, 2015||Mar. 17, 2016||Sk Planet Co., Ltd.||Encoding system using motion estimation and encoding method using motion estimation|
|U.S. Classification||375/240.15, 348/699|
|Apr. 14, 2010||FPAY||Fee payment|
Year of fee payment: 12