US5512952A - Picture signal encoding and/or decoding apparatus - Google Patents

Picture signal encoding and/or decoding apparatus

Info

Publication number
US5512952A
Authority
US
United States
Prior art keywords
block
motion
motion vectors
prediction
transform coefficients
Prior art date
Legal status
Expired - Lifetime
Application number
US08/289,999
Inventor
Ryuichi Iwamura
Current Assignee
Sony Corp
Original Assignee
Sony Corp
Priority date
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Priority to US08/289,999
Application granted
Publication of US5512952A
Anticipated expiration
Status: Expired - Lifetime

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/59: predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/12: selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/132: sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/14: coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/154: measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/172: adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
    • H04N19/176: adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/18: adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
    • H04N19/51: motion estimation or motion compensation
    • H04N19/517: processing of motion vectors by encoding
    • H04N19/61: transform coding in combination with predictive coding
    • H04N19/13: adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/146: data rate or code amount at the encoder output
    • H04N19/30: hierarchical techniques, e.g. scalability
    • H04N19/60: transform coding
    • H04N19/91: entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Definitions

  • This invention relates to an apparatus for compressing a picture signal, and an apparatus for expanding a compressed picture signal, and, more particularly, to an apparatus that is suitable for use in applications in which the picture signal being compressed is to be recorded, and in which the compressed picture signal being expanded has been reproduced.
  • an NTSC video signal for example, can be recorded on and reproduced from a conventional video disc.
  • the motion picture signal must be subject to high efficiency compression, and the reproduced motion picture signal must be capable of being expanded efficiently.
  • the MPEG method detects a motion vector for each block of the motion picture signal, and generates a prediction block by applying motion compensation to a prediction picture according to the motion vector. This reduces redundancy in the motion picture signal in the time domain.
  • the block of prediction errors between each block of the present picture and its corresponding prediction block is subject to a discrete cosine transform, and the resulting transform coefficients are quantized, to reduce redundancy in the motion picture signal in the spatial domain.
  • the differential vector between the motion vectors of the target block and the block to its left is encoded when encoding the motion vector of each prescribed block. Therefore, when there are many imaged objects in a picture whose motions differ from one another, the quantity of prediction error information between the current picture and the prediction picture is increased, which degrades the compression efficiency.
  • each of the 8 × 8 blocks be divided into four 4 × 4 subblocks, that a motion vector be determined for each subblock, and that the motion of the block be compensated by using the resulting four motion vectors.
  • this proposal degrades the compression efficiency because of the increased number of motion vectors.
  • an object of this invention is to provide an apparatus for compressing a motion picture signal and for expanding a compressed motion picture signal in which an original motion picture signal is compressed with high efficiency to provide a compressed signal that can be expanded to recover the original motion picture signal.
  • the foregoing object and other objects of the invention have been achieved by the provision of an apparatus for compressing a motion picture signal.
  • the motion picture signal is divided into blocks.
  • the apparatus comprises a circuit that subtracts the blocks of the motion picture signal from corresponding prediction blocks of a prediction picture to provide prediction error blocks; a circuit that orthogonally transforms the prediction error blocks to provide transform coefficients; a circuit that quantizes the transform coefficients to provide quantized transform coefficients; and a circuit that codes the quantized transform coefficients to provide coded transform coefficients.
  • a local decoding circuit locally decodes the quantized transform coefficients to provide an additional prediction picture.
  • a motion detector calculates a motion vector for each of plural subblocks obtained by dividing each block of the motion picture signal by at least four.
  • a representative motion vector generating circuit generates plural representative motion vectors representing the motion vectors of the subblocks constituting each block.
  • the representative motion vectors are generated from the motion vectors of the subblocks constituting the block, and are fewer in number than the number of subblocks constituting the block.
  • the apparatus includes a motion compensator for producing the prediction blocks from the prediction picture by applying motion compensation to the prediction picture in response to the plural representative motion vectors.
  • the invention also provides a complementary expander for expanding a compressed motion picture signal.
  • the compressed motion picture signal includes a compressed picture block obtained by compressing a block of a motion picture signal.
  • the compressed picture block includes coded transform coefficients and coded vector data representing the block of the motion picture signal.
  • the coded vector data includes plural representative motion vectors representing motion vectors of a number of subblocks obtained by dividing the block of the motion picture signal by at least four.
  • the expander provides an output picture signal, and comprises a demultiplexer that separates the coded transform coefficients and the coded vector data from the compressed picture block.
  • a vector decoder detects and decodes the plural representative motion vectors in the coded vector data.
  • the vector decoder decodes fewer representative motion vectors than the number of subblocks.
  • a calculating circuit calculates the motion vectors of the subblocks from the representative motion vectors.
  • a circuit derives a block of the output picture signal from the coded transform coefficients and the motion vectors.
  • FIG. 1 is a block diagram illustrating the construction of a first embodiment of an apparatus for compressing a picture signal according to the present invention
  • FIGS. 2A and 2B are schematic views illustrating a block consisting of flat subblocks and unflat subblocks
  • FIGS. 3A and 3B are schematic views for explaining folding in a block that includes flat subblocks and unflat subblocks in its lower and upper parts, respectively;
  • FIG. 4 is a table showing the DCT coefficients generated by the folding shown in FIGS. 3A and 3B;
  • FIGS. 5A and 5B are schematic views for explaining folding in a block that includes flat subblocks and unflat subblocks in its left and right parts, respectively;
  • FIG. 6 is a table showing the DCT coefficients generated by the folding shown in FIGS. 5A and 5B;
  • FIGS. 7A and 7B are schematic views for explaining folding in a block that includes diagonally-opposed flat subblocks and unflat subblocks;
  • FIGS. 8A and 8B are schematic views for explaining folding in a block that includes only one unflat subblock
  • FIG. 9 is a table showing the DCT coefficients generated by the folding shown in FIGS. 8A and 8B;
  • FIGS. 10A and 10B are schematic views for explaining a zigzag scan
  • FIGS. 11A and 11B are a schematic view of a block and a table of the pixel values thereof, respectively;
  • FIG. 12 is a block diagram illustrating an apparatus for expanding a compressed picture signal compressed by the apparatus shown in FIG. 1 for compressing a picture signal;
  • FIG. 13 is a block diagram illustrating the construction of a second embodiment of an apparatus for compressing a picture signal according to the present invention.
  • FIG. 14 is a schematic view illustrating a block divided into four subblocks
  • FIG. 15 is a schematic view for explaining the method by which two representative vectors represent the motion vectors of the four subblocks shown in FIG. 14;
  • FIG. 16 is a table for explaining how the different patterns shown in FIG. 15 are detected in response to the differential vectors
  • FIG. 17 is a table for explaining how the different patterns shown in FIG. 15 are coded.
  • FIG. 18 is a block diagram illustrating the construction of one embodiment of an apparatus for expanding a compressed picture signal compressed by the apparatus shown in FIG. 13 for compressing a picture signal;
  • FIG. 19 is a schematic view for explaining the evaluation of a representative vector
  • FIG. 20 is a block diagram illustrating the construction of a third embodiment of an apparatus for compressing a picture signal according to the present invention.
  • FIG. 21 is a schematic view for explaining the method used in the embodiment shown in FIG. 20 in which the present picture is segmented into blocks for carrying out motion compensation;
  • FIG. 22 is a table for explaining the differential vector and the adoption block code used in the embodiment shown in FIG. 20;
  • FIG. 23 is a schematic view for explaining the construction of the motion vector memory 52 used in the embodiment shown in FIG. 20;
  • FIG. 24 is a table illustrating the variable length coding adopted for the vector value in the embodiment shown in FIG. 20;
  • FIG. 25 is a block diagram illustrating the construction of one embodiment of an apparatus for expanding a compressed picture signal compressed by the apparatus shown in FIG. 20 for compressing a picture signal;
  • FIG. 26 is a schematic view for explaining another embodiment in which the present picture is segmented into blocks for carrying out motion compensation.
  • FIG. 27 is a schematic view for explaining a further embodiment in which the present picture is segmented into blocks for carrying out motion compensation.
  • FIG. 1 shows a block diagram illustrating the construction of one embodiment of an apparatus according to the present invention for compressing a motion picture signal.
  • blocks of the motion picture signal consisting of, e.g., 8 × 8 pixels (8 pixels × 8 lines) are subject to discrete cosine transform (DCT) processing.
  • many of the resulting transform coefficients are zero. Therefore, the number of bits of the compressed picture signal required to represent the transform coefficients can be reduced by including in the compressed picture signal data indicating the number of transform coefficients that are zero. This enables more bits to be allocated for quantizing the non-zero transform coefficients, which, in turn, reduces the quantizing noise.
  • the apparatus shown in FIG. 1 is constructed to reduce further the volume of the compressed picture signal required to represent the transform coefficients by increasing the number of zero transform coefficients.
  • FIGS. 2A and 2B show a block composed of 8 × 8 pixels segmented into four subblocks, each consisting of 4 × 4 pixels. Then, the flatness of each subblock is measured. "Flatness," as used herein, indicates that the variation in the pixel values within the subblock is small.
  • FIGS. 2A and 2B depict, as an example, an object, the balloon GX, in the right two subblocks, whereas the left two subblocks are devoid of any detail. More specifically, in this instance, the left two subblocks are deemed to be flat subblocks, while the right two subblocks are deemed to be unflat subblocks. A flat subblock is indicated by a logical 1, whereas an unflat subblock is indicated by a logical 0. The flatness of the block shown in FIG. 2A may therefore be indicated as shown in FIG. 2B.
  • in FIG. 3A, a heart-shaped object is depicted in the upper two subblocks, whereas the lower two subblocks are devoid of any detail.
  • the flatness of the upper two subblocks is therefore indicated by a logical 0, and
  • the flatness of the lower two subblocks is indicated by a logical 1.
  • the unflat upper two subblocks are folded over the flat lower two subblocks about the horizontal center line L1.
  • this results in the picture shown in FIG. 3B. When a block that is symmetrical between its upper and lower halves, i.e., about the center line L1, is discrete cosine transform processed, alternate rows of the resulting discrete cosine transform coefficients (transform coefficients) are all zero, as shown in FIG. 4.
  • the transform coefficients in alternate (even-numbered) lines are all zero.
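  • as a short illustration of this property (not part of the patent), a naive, unnormalized 2-D DCT-II written directly from its definition can be used to check that a block mirrored about its horizontal center line yields transform coefficients whose alternate rows are all zero; the Python sketch below assumes nothing beyond the definition of the DCT-II:

      import numpy as np

      def dct2(block):
          # naive, unnormalized 2-D DCT-II; the usual scale factors are omitted
          # because they do not affect which coefficients are zero
          n = block.shape[0]
          idx = np.arange(n)
          basis = np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * n))
          return basis @ block @ basis.T

      rng = np.random.default_rng(0)
      top = rng.random((4, 8))                      # arbitrary upper half
      block = np.vstack([top, top[::-1]])           # mirror about the horizontal center line
      coeff = dct2(block)
      zero_rows = np.nonzero(np.all(np.isclose(coeff, 0.0), axis=1))[0]
      print(zero_rows)   # [1 3 5 7] in 0-based indexing, i.e. the even-numbered lines of the text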
  • in FIG. 5A, a heart-shaped object is shown in the right two subblocks in the block, whereas the left two subblocks are devoid of any detail.
  • the unflat subblocks are folded over the flat subblocks about the vertical center line L2. This results in the picture shown in FIG. 5B.
  • when DCT processing is applied to the block shown in FIG. 5B, alternate columns of the resulting transform coefficients are all zero, as shown in FIG. 6.
  • FIG. 7A shows a heart-shaped object and a star-shaped object in the left upper subblock and in the right lower subblock, respectively, of the block, whereas the right upper and left lower subblocks are devoid of detail.
  • the picture illustrated in FIG. 7B results from folding the block about the vertical center line L2. Applying DCT processing to this block also results in transform coefficients, alternate columns of which are zero, as shown in FIG. 6.
  • FIG. 8A shows a heart-shaped object in only the right lower subblock, whereas the remaining three subblocks are devoid of detail.
  • the right subblocks are folded over the left subblocks about the vertical center line L2, and
  • the lower subblocks are folded over the upper subblocks about the horizontal center line L1.
  • the resulting block is illustrated in FIG. 8B.
  • when DCT processing is applied to the block illustrated in FIG. 8B, alternate rows and alternate columns of the resulting transform coefficients are all zero, as illustrated in FIG. 9.
  • in effect, the pattern of zero transform coefficients shown in FIG. 9 is the pattern shown in FIG. 4 synthesized with the pattern shown in FIG. 6.
  • Determining the flatness of the subblocks in a block, and, when two or more subblocks are flat, performing one or more folding operations to increase the symmetry of the block, as described above, increases the number of zero transform coefficients when the block is DCT processed. This increases the number of quantizing bits available to represent the other, non-zero, transform coefficients in the compressed picture signal. Instead of including in the compressed picture signal data for each zero transform coefficient, data are included indicating the number of zero transform coefficients resulting from the DCT transform of the block. Since the zero transform coefficients are known from the flatness information, the number of bits available to represent the non-zero transform coefficients can be increased.
  • the block of transform coefficients can be read using a zigzag scan along lines at 45 degrees to the block, as shown in, e.g., FIG. 10A.
  • the resulting non-zero coefficients and data indicating the number of zero transform coefficients are included in the compressed picture signal.
  • rows and columns in which all of the transform coefficients are zero appear periodically, and the locations of these rows and columns are known in advance.
  • the zigzag-scan can be performed by skipping the rows and/or columns in which the transform coefficients are known to be zero. This way, the number of bits required to represent the transform coefficients in the compressed picture signal can be reduced.
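  • a minimal Python sketch of such a scan is given below; the diagonal zigzag order is the common JPEG-style convention (an assumption, since the order of FIG. 10A is not reproduced in the text), and the rows and columns known from the flatness information to be all zero are simply skipped:

      def zigzag_order(n=8):
          # diagonal (45-degree) scan order over an n x n block, JPEG-style
          cells = [(r, c) for r in range(n) for c in range(n)]
          return sorted(cells, key=lambda rc: (rc[0] + rc[1],
                                               rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

      def scan_skipping_zero_lines(coefficients, zero_rows=(), zero_cols=()):
          # read the block in zigzag order, omitting rows/columns that the folding
          # is known to have made all zero
          return [coefficients[r][c]
                  for r, c in zigzag_order(len(coefficients))
                  if r not in zero_rows and c not in zero_cols]

      # for the folding of FIGS. 8A/8B, every second row and column is zero (cf. FIG. 9):
      # values = scan_skipping_zero_lines(block, zero_rows=(1, 3, 5, 7), zero_cols=(1, 3, 5, 7))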
  • the input picture signal SIN is supplied to the flat block judgment circuit 1, where it is divided into blocks, each block is segmented into subblocks, and the flatness of each subblock is judged. Then, the flat block judgment circuit 1 supplies the folding circuit 2, the switch 3, the variable length coder 6, and the signal multiplexer 7 with 4-bit flatness information S1, which indicates which of the four subblocks constituting the block is flat. Further, the flat block judgment circuit 1 calculates a representative value S2 of the flat subblocks, which it feeds to the signal multiplexer 7.
  • Each block of the motion picture signal is fed from the flat block judgment circuit 1 to the switch 3 either directly, or via the folding circuit 2, and thence to the discrete cosine transform (DCT) circuit 4.
  • the transform coefficients from the DCT circuit 4 are supplied to the quantizer 5, where they are quantized.
  • the quantized transform coefficients from the quantizer 5 are supplied to the variable length coder 6, and the output thereof is supplied to the signal multiplexer 7.
  • the coded transform coefficients are multiplexed with the representative value S2 from the flat block judgment circuit 1.
  • the multiplexed output is supplied to the output buffer 8, where it is temporarily stored.
  • the compressed picture signal Sout is read out from the output buffer 8 for recording on a suitable recording medium (not shown), such as a disc.
  • the flat block judgment circuit 1 segments the input motion picture signal into blocks of 8 × 8 pixels (8 pixels × 8 lines). Then, each block is segmented into four subblocks, each having 4 × 4 pixels (4 pixels × 4 lines), and the flatness of each of the four subblocks is judged. To determine the flatness of a subblock, for instance, if the difference between the maximum and the minimum of the pixel values in the subblock is smaller than a preset reference value, the subblock is judged to be flat. Alternatively, the flatness can also be judged from, e.g., the dispersion within the subblock. As explained above, the flatness information for each block, indicating which of the four subblocks constituting the block are flat, is a 4-bit code.
  • the flat block judgment circuit 1 also computes the representative value of each of the subblocks judged to be flat, and feeds each representative value S2 to the signal multiplexer 7.
  • the representative value can be the left upper element A00 in FIG. 11B, which corresponds to the DC component of the transform coefficients.
  • the representative value of the subblock can be the mean of the pixel values in the subblock. In this case, the 16 pixel values within the subblock are added together, and the resulting sum is divided by 16 (4 × 4) to calculate the representative value.
  • One representative value can be provided for each subblock; or one representative value can be provided for each block.
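  • for illustration only, the flatness judgment and the representative-value alternatives described above might be sketched as follows in Python; the reference value and the subblock ordering A, B, C, D are assumptions, since neither is fixed by the text:

      import numpy as np

      FLATNESS_REFERENCE = 8          # preset reference value (illustrative choice)

      def is_flat(subblock):
          # a 4x4 subblock is judged flat when the spread of its pixel values is small
          return int(subblock.max()) - int(subblock.min()) < FLATNESS_REFERENCE

      def representative_mean(subblock):
          # one alternative: the mean of the 16 pixel values of the subblock
          return float(subblock.sum()) / 16.0

      def flatness_information(block):
          # 4-bit flatness information S1 for an 8x8 block: 1 = flat, 0 = unflat,
          # assuming subblocks A (upper left), B (upper right), C (lower left), D (lower right)
          subblocks = [block[:4, :4], block[:4, 4:], block[4:, :4], block[4:, 4:]]
          return [1 if is_flat(s) else 0 for s in subblocks]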
  • the folding circuit 2 folds each block of the motion picture signal from the flat block judgment circuit 1, in response to the flatness information supplied by the flat block judging circuit 1. For example, in the block shown in FIG. 11A, if the three subblocks A, B and C are flat, and the subblock D is unflat, the folding process shown in FIGS. 8A and 8B is performed. If the pixel values in subblocks A' through D' of the block obtained by folding are indicated by a'ij, b'ij, c'ij, d'ij, respectively, these pixel values are computed by the following formulae:
  • dmn represents the pixels of the subblock D shown in FIG. 11A before the folding process is executed.
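  • the formulae referred to above are not reproduced in this text; the Python sketch below is one consistent reconstruction for the case of FIGS. 8A and 8B (subblocks A, B and C flat, subblock D unflat), in which the pixels of D are mirrored into the other three quadrants so that the folded block becomes symmetric about both center lines; the index convention is an assumption:

      import numpy as np

      def fold_single_unflat_block(block):
          # only subblock D (lower right) is unflat; its 4x4 pixels d[m][n] fill the
          # whole 8x8 block symmetrically:
          #   d'[m][n] = d[m][n]
          #   c'[m][n] = d[m][3 - n]      (mirrored about the vertical center line L2)
          #   b'[m][n] = d[3 - m][n]      (mirrored about the horizontal center line L1)
          #   a'[m][n] = d[3 - m][3 - n]  (mirrored about both center lines)
          d = block[4:, 4:]
          folded = np.empty_like(block)
          folded[4:, 4:] = d
          folded[4:, :4] = d[:, ::-1]
          folded[:4, 4:] = d[::-1, :]
          folded[:4, :4] = d[::-1, ::-1]
          return folded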
  • the switch 3 switches between its upper and lower contacts as shown in the figure in accordance with the flatness information supplied from the flat block judgment circuit 1. Consequently, blocks of the motion picture signal before being folded, or blocks of the folded picture signal are fed to the discrete cosine transform circuit 4 as required.
  • the discrete cosine transform circuit 4 applies discrete cosine transform processing to each block of folded or non-folded motion picture signal.
  • the resulting transform coefficients from the discrete cosine transform circuit 4 are supplied to the quantizer 5, where they are quantized using a predetermined quantizing step size.
  • the quantized transform coefficients are supplied from the quantizer 5 to the variable-length coder 6, where they are variable-length coded.
  • the variable-length coder 6, as described above with reference to FIG. 10B, reads each block of transform coefficients by performing a zigzag scan, skipping the rows and/or columns in which all the transform coefficients are zero as a result of the folding. In this way, the transform coefficients are variable-length coded.
  • the flat block judgment circuit 1 supplies the variable-length coder 6 with the flatness information S1 to tell the variable-length coder which rows and/or columns are to be skipped.
  • the variable-length coder 6 determines from the flatness information S1 how the folding was performed, and, hence, the rows and/or columns of zero transform coefficients resulting from transforming the folded block.
  • variable-length coded coefficients from the variable-length coder 6 are supplied to the signal multiplexer 7, where they are multiplexed with the representative value of the subblock S2 supplied by the flat block judgment circuit 1.
  • the resulting multiplexed signal is supplied to the output buffer 8, whence the compressed picture signal is subsequently read for recording on the disc (not shown).
  • FIG. 12 shows the construction of one embodiment of an apparatus for expanding the compressed picture signal compressed by the apparatus shown in FIG. 1 for compressing a motion picture signal.
  • the input buffer 11 temporarily stores the compressed picture signal reproduced from the recording medium (not shown), such as a disc.
  • the demultiplexer 12 separates the compressed picture signal received from the input buffer 11 into blocks of coded transform coefficients, representative values, and flatness information.
  • the variable-length coding of the coded coefficients S11 from the demultiplexer 12 is reversed by the inverse variable length coder 13, and the quantizing of the resulting quantized transform coefficients is reversed by the inverse quantizer 14.
  • the inverse discrete cosine transform circuit 15 applies an inverse discrete cosine transform to each block of transform coefficients from the inverse quantizer 14, and the resulting block of pixel values is fed to the switch 17 directly, and via the restoring circuit 16.
  • the restoring circuit 16 restores those subblocks of pixel values that are flat subblocks to picture blocks, using the representative value S2X from the demultiplexer 12.
  • the switch 17 selects blocks of pixel values from the output of the restoring circuit 16 or from the output of the inverse discrete cosine transform circuit 15 in response to the flatness information S1X.
  • the demultiplexer 12 separates the coded transform coefficients from the compressed picture signal read out of the input buffer 11, and supplies them to the inverse variable-length coder 13.
  • the demultiplexer 12 also separates the representative value S2X and the flatness information S1X from the compressed picture signal.
  • the flatness information S1X is fed to the inverse variable-length coder 13, the restoring circuit 16, and the switch 17, while the representative value S2X is fed to the restoring circuit 16.
  • the inverse variable-length coder 13 applies inverse variable-length coding processing to the coded transform coefficients from the demultiplexer 12.
  • the inverse variable-length coder in accordance with the flatness information S1X from the demultiplexer 12, inserts zeroes into the rows and/or columns of quantized transform coefficients that were skipped in the compressor as a result of folding.
  • the inverse quantizer 14 inversely quantizes the quantized transform coefficients from the inverse variable-length coder 13, and feeds the resulting transform coefficients to the inverse discrete cosine transform circuit 15.
  • the inverse discrete cosine transform circuit 15 applies inverse discrete cosine transform processing to each block of transform coefficients from the inverse quantizer 14, and provides corresponding blocks of pixel values.
  • the restoring circuit 16 replaces the flat subblock data, which was suppressed by folding in the compressor, with the representative value S2X from the demultiplexer 12.
  • the subblocks suppressed by folding are thereby restored.
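  • a Python sketch of this restoration step is shown below; the subblock ordering and the form in which the representative values are passed are assumptions for illustration:

      import numpy as np

      def restore_flat_subblocks(decoded_block, flatness_info, representative_values):
          # fill each subblock flagged as flat (1) with its representative value S2X;
          # subblock order A (upper left), B (upper right), C (lower left), D (lower right)
          regions = [(slice(0, 4), slice(0, 4)), (slice(0, 4), slice(4, 8)),
                     (slice(4, 8), slice(0, 4)), (slice(4, 8), slice(4, 8))]
          restored = decoded_block.copy()
          for flag, (rows, cols), value in zip(flatness_info, regions, representative_values):
              if flag:
                  restored[rows, cols] = value
          return restored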
  • the switch 17 selects either the output of the restoring circuit 16, or the output of the inverse discrete cosine transform circuit 15. An output signal corresponding to the original motion picture signal is therefore restored and fed to the output terminal.
  • the representative pixel value of the flat subblocks is included in the compressed picture signal in lieu of the coded transform coefficients of the flat subblocks.
  • transmitting no representative pixel value is also possible.
  • the flatness information which specifies the flat subblocks is transmitted.
  • in the expander, the inverse discrete cosine transformation is executed first, to provide the respective pixel values.
  • the pixel values in the flat subblocks are then obtained by smoothing in accordance with the flatness information.
  • the smoothing method involves, e.g., replacing the respective pixel values, or clipping the pixel values to a predetermined range. If no representative pixel values are included in the compressed picture signal, it is impossible to reduce the number of transform coefficients by folding.
  • the blocks of the motion picture signal are DCT processed.
  • the folding process just described, which reduces the number of bits in the compressed picture signal, can be used with any transform method in which multiple zero coefficients result from transforming a symmetrical block, as in, e.g., a Hadamard transform.
  • the present invention is, as a matter of course, applicable to a still picture signal, and is not simply limited to a motion picture signal.
  • FIG. 13 is a block diagram illustrating the second embodiment of an apparatus according to the present invention for compressing a motion picture signal.
  • the discrete cosine transform circuit 22 applies DCT processing to blocks of the motion picture signal SIN, or to a block of prediction errors between a block of the motion picture signal, and a corresponding prediction block of a prediction picture, generated by the subtraction circuit 21.
  • the quantizer 23 quantizes the resulting transform coefficients from the DCT circuit 22.
  • the variable-length coder 24 applies variable-length coding to the resulting quantized transform coefficients.
  • the quantized transform coefficients from the quantizer 23 are fed to the inverse quantizer 27, where they are inversely quantized.
  • the inverse discrete cosine transform circuit 28 applies inverse DCT processing to the resulting transform coefficients to provide a block of reconstructed prediction errors.
  • the block of reconstructed prediction errors is fed into the adder 34, where it is added to the prediction block.
  • the resulting reconstructed picture block is stored in the frame memory 29 as a block of a prediction picture.
  • the vector arithmetic unit 30 segments the blocks of 8 × 8 pixels (8 pixels × 8 lines) of the motion picture signal into four subblocks A, B, C and D, each having 4 × 4 pixels (4 pixels × 4 lines).
  • the vector arithmetic unit 30 also computes a motion vector VA, VB, VC, and VD for each subblock, using the prediction pictures stored in the frame memory 29. This is shown in FIG. 14.
  • the vector arithmetic unit 30 also computes the magnitudes VAB, VAC, VAD, VBC, VBD, and VCD of the differential vectors between each pair of these motion vectors.
  • the vector arithmetic unit 30 compares the values of the differential vectors with a predetermined threshold THR, and feeds the results to the block pattern judging unit 31.
  • the block pattern judging unit 31 selects one of the motion vectors VA, VB, VC, or VD of the subblocks A, B, C, or D as each of the one or two representative vectors shown in FIG. 15 for instance.
  • the first pattern PT1 requires a single representative motion vector to represent the motion vectors VA, VB, VC, and VD of the four subblocks.
  • the pattern PT1 is the same as if motion compensation were applied to the complete block.
  • the second pattern PT2 requires two representative motion vectors to represent the sets of motion vectors VA and VB, and VC and VD, respectively.
  • the third pattern PT3 requires two representative motion vectors which represent the sets of motion vectors VA and VC, and VB and VD, respectively.
  • the fourth pattern PT4 requires two representative motion vectors, one of which represents motion vector VA and is, for instance the motion vector VA itself, the other of which represents the set of motion vectors VB, VC and VD.
  • the fifth pattern PT5 requires two representative motion vectors, one of which represents motion vector VB, and is, for instance, motion vector VB itself, the other of which represents the set of motion vectors VA, VC and VD.
  • the sixth pattern PT6 requires two representative motion vectors, one of which represents motion vector VC, and is, for instance, motion vector VC itself, the other of which represents the set of motion vectors VA, VB and VD.
  • the seventh pattern PT7 requires two representative motion vectors, one of which represents motion vector VD, and is, for instance, motion vector VD itself, the other of which represents the set of motion vectors VA, VB and VC.
  • the block pattern judging unit 31, in response to the results from the vector arithmetic unit 30, selects one of the available patterns according to Table 1 (FIG. 16).
  • the block pattern judging circuit feeds a selected pattern signal, indicating the selected pattern, into the motion compensation circuit 32, which, in response to the selected pattern signal and the representative motion vectors, performs motion compensation on the prediction pictures stored in the frame memory 29.
  • in Table 1, an O indicates that the value of the differential vector is smaller than the threshold THR, and
  • an X indicates that the value of the differential vector is larger than the threshold THR.
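  • the actual mapping from the O/X results to the patterns is given only in Table 1 (FIG. 16); the Python sketch below therefore uses an assumed set of grouping rules with the same structure, in which an O (differential vector smaller than THR) means that the two subblock vectors may share a representative vector:

      import math

      THR = 4.0          # illustrative threshold; the value used is not stated in the text

      def select_block_pattern(vA, vB, vC, vD):
          # returns one of PT1..PT7; the grouping rules below are an assumed reading of Table 1
          o = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1]) < THR   # "O" in Table 1, "X" otherwise
          ab, ac, ad = o(vA, vB), o(vA, vC), o(vA, vD)
          bc, bd, cd = o(vB, vC), o(vB, vD), o(vC, vD)
          if ab and ac and ad and bc and bd and cd:
              return "PT1"        # one representative vector for the whole block
          if ab and cd:
              return "PT2"        # {A, B} and {C, D}
          if ac and bd:
              return "PT3"        # {A, C} and {B, D}
          if bc and bd and cd:
              return "PT4"        # A alone, {B, C, D}
          if ac and ad and cd:
              return "PT5"        # B alone, {A, C, D}
          if ab and ad and bd:
              return "PT6"        # C alone, {A, B, D}
          if ab and ac and bc:
              return "PT7"        # D alone, {A, B, C}
          return "PT2"            # fallback when no grouping applies; the real rule is in Table 1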
  • the selected pattern signal is also fed into the vector encoder 33. Based on, e.g., Table 2, shown in FIG. 17, the vector encoder generates the appropriate variable-length selected pattern code to indicate the selected pattern.
  • the selected pattern code is fed into the multiplexer 25, where it is multiplexed with the code from the variable-length coder 24.
  • the subtractor 21 derives a block of prediction errors between a block of the current picture and a corresponding prediction block of a prediction picture read out of the frame memory 29.
  • the block of prediction errors is fed into the discrete cosine transform (DCT) circuit 22.
  • the DCT circuit 22 applies a discrete cosine transform to the block of prediction errors, and feeds the resulting transform coefficients into the quantizer 23.
  • the quantizer quantizes the transform coefficients, and the resulting quantized transform coefficients are then variable-length coded by the variable-length coder (VLC) 24.
  • the resulting coded transform coefficients are then fed to the output terminal via the multiplexer 25 and the output buffer 26.
  • the inverse quantizer 27 and the inverse DCT circuit 28 respectively apply inverse quantizing and an inverse discrete cosine transform to the quantized transform coefficients from the quantizer 23, and the resulting block of reconstructed prediction errors is fed to the adder 34.
  • the block of reconstructed prediction errors supplied to the adder 34 is a reconstruction of the block of prediction errors produced by the subtractor 21.
  • the motion compensation circuit 32 performs motion compensation on the prediction picture, for example, the previous picture, in response to the selected pattern signal and the representative motion vectors from the block pattern judging unit 31.
  • the prediction block which is a block of the prediction picture to which motion compensation has been applied according to the selected pattern signal and the representative motion vectors, is read out from the frame memory 29 and fed to the adder 34 and the subtractor 21.
  • the adder 34 adds the prediction block to the block of reconstructed prediction errors from the inverse DCT circuit 28, and the resulting reconstructed picture block is supplied to the frame memory 29, where it is stored as a block of another prediction picture.
  • the motion picture input signal (e.g. a digital video signal) is fed into the vector arithmetic unit 30, where the above-mentioned motion vectors VA, VB, VC, and VD of the four subblocks A, B, C, and D, respectively, are computed and detected.
  • the vector arithmetic unit additionally calculates the magnitudes VAB, VAC, VAD, VBC, VBD, and VCD of the differences between pairs of these vectors, i.e. the differential vectors.
  • the resulting vectors and differential vectors are fed into the block pattern judging unit 31, wherein the block pattern for each block is selected. More specifically, the magnitudes of the differential vectors are compared to the threshold THR to determine which of the patterns PT1 through PT7 to apply in accordance with the selection rules shown in Table 1 (FIG. 16).
  • the resulting selected pattern signal is fed into the vector encoder 33, wherein the selected pattern signal is coded using, for example, the variable-length codes shown in Table 2 (FIG. 17).
  • the coded pattern signal is fed into the multiplexer 25, where it is multiplexed with the coded transform coefficients, as described above.
  • the selected pattern signal is also fed into the motion compensator 32, where it is employed for providing motion compensation.
  • FIG. 18 is a block diagram illustrating an embodiment of an apparatus for expanding the compressed motion picture signal compressed by the motion picture signal compressor shown in FIG. 13.
  • the compressed motion picture signal SINX is fed into the demultiplexer 42 through the input buffer 41.
  • in the demultiplexer, the compressed motion picture signal is separated into coded transform coefficients and coded pattern information.
  • the coded transform coefficients are fed into the inverse variable-length coder (VLC) 43 where the variable-length coding is decoded.
  • the resulting quantized transform coefficients are inversely quantized by the inverse quantizer 44, and the resulting transform coefficients are inverse discrete cosine transformed by the inverse discrete cosine transform circuit 45.
  • the resulting block of prediction errors is, in the same way as in the local decoder in the compressor, added to the corresponding prediction block from the frame memory 49, and the result is stored as a block of a new prediction picture in the frame memory 49.
  • the coded selected pattern signal is fed into the pattern information decoder 47, where it is thereby decoded to provide a selected pattern signal and motion vectors.
  • This information is fed into the motion compensator 48, where it is used to apply motion compensation to the prediction picture in the frame memory 49. In this manner, the compressed motion picture signal is expanded to reconstruct the original motion picture signal.
  • the representative motion vector is determined by performing block matching between the current block and the prediction picture in each of the two areas into which the current block is divided.
  • the block matching finds the motion vector for which the absolute difference sum between the area and the prediction picture is a minimum.
  • let sumA (vX, vY) be the absolute difference sum of the subblock A when the motion vector is (vX, vY).
  • SumA (vX, vY) is expressed as follows: ##EQU1##
  • sumB (vX, vY), sumC (vX, vY), sumD (vX, vY) are calculated and stored, together with sumA (vX, vY).
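  • the following Python sketch shows this calculation for subblock A (subblocks B, C and D are handled identically); the pixel layout is an assumption and boundary handling of the search range is omitted:

      import numpy as np

      def absolute_difference_sum(subblock, prediction_picture, top, left, vX, vY):
          # sumA(vX, vY): sum of |current pixel - prediction pixel displaced by (vX, vY)|
          # over the 4x4 subblock whose upper-left corner in the picture is (top, left)
          reference = prediction_picture[top + vY: top + vY + 4, left + vX: left + vX + 4]
          return int(np.abs(subblock.astype(np.int32) - reference.astype(np.int32)).sum())

      # the sums for every candidate vector can be stored for later reuse, e.g.:
      # sumA = {(vX, vY): absolute_difference_sum(subA, prediction, top, left, vX, vY)
      #         for vX in range(-8, 9) for vY in range(-8, 9)}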
  • the motion vectors of subblocks B, C and D are also obtained. Then, the differences between these motion vectors are calculated, from which the block patterns are determined using the rules set out in Table 1.
  • the resulting vectors VA+B, VC+D are representative motion vectors.
  • the representative vectors are vectors VA and VB+C+D.
  • the methods of selecting the block pattern and the representative vector are not limited to the embodiment discussed above.
  • another method of selecting the block pattern could involve selecting based on the magnitude of the absolute difference sum with respect to the prediction picture.
  • the representative vectors are (vector VA+vector VB)/2 and (vector VC+vector VD)/2.
  • the representative vectors may be the vector VA and (vector VB+vector VC+vector VD)/3.
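  • both alternatives can be sketched in Python as follows, reusing the stored absolute difference sums of the previous step; which alternative is used (combined-area block matching or averaging) is a design choice left open by the text:

      def representative_by_matching(sum_tables):
          # combined-area block matching: choose the candidate vector that minimizes the
          # sum of the stored absolute difference sums of the member subblocks,
          # e.g. representative_by_matching([sumA, sumB]) for the representative vector VA+B
          candidates = sum_tables[0].keys()
          return min(candidates, key=lambda v: sum(table[v] for table in sum_tables))

      def representative_by_averaging(vectors):
          # averaging alternative, e.g. (vector VA + vector VB) / 2 or
          # (vector VB + vector VC + vector VD) / 3
          n = float(len(vectors))
          return (sum(v[0] for v in vectors) / n, sum(v[1] for v in vectors) / n)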
  • the block pattern codes are not limited to those set forth in Table 2.
  • the motion vectors of the four subblocks are expressed by the two representative motion vectors, but may alternatively be expressed by three representative motion vectors.
  • FIG. 20 is a block diagram illustrating a third embodiment of an apparatus for compressing a motion picture signal.
  • the motion picture signal divided into blocks of, for instance, 8 ⁇ 8 pixel values, is fed into the motion vector arithmetic unit 51.
  • the output of the motion vector arithmetic unit 51 is fed into the motion vector memory 52 and stored therein.
  • the output of the motion vector arithmetic unit 51 is also fed into the motion compensator 32, and into the vector encoder 53.
  • the prediction picture data read out from the frame memory 29 is fed into the motion compensator 32.
  • the output read from the motion vector memory 52 is also supplied to the vector encoder 53.
  • the information received by the vector encoder has been processed by the arithmetic operations described above, so that differential vector information and block pattern information are encoded therein.
  • the coded transform coefficients coded by the variable length coder 24 are supplied to the multiplexer 25, which multiplexes them with the differential vector information and feeds the resulting multiplexed signal to the output buffer 26.
  • the compressed motion picture signal is read out of the output buffer for recording on a disc (not shown), for instance.
  • FIG. 21 illustrates how the motion picture signal is segmented into blocks, each of which includes 8 × 8 pixel values, as the units for carrying out motion compensation. Shown in this example is the case in which the motion vectors of the three neighboring blocks XA, XB, and XC, located above, above and to the right, and to the left of the current block X, are to be compared.
  • Each block has one motion vector for indicating the motion with respect to a prediction block of a prediction picture one picture before the current block X. More specifically, VX is temporarily defined as the motion vector of the current block X, while VXA, VXB, and VXC are defined as the motion vectors of the neighboring blocks XA, XB, and XC, respectively.
  • so that VX is coded using the fewest bits, processing based on the following algorithm is executed. Namely:
  • a first step (1) is to check which of the motion vectors VXA, VXB, and VXC equals VX (alternatively, whether or not the differences fall within a range that is allowable as equal); if two or more vectors are equal to VX, they are represented by one vector; and
  • a second step (2) is executed when none of the neighboring vectors equals VX: the differential vector having the shortest code length is selected from among VX-VXA, VX-VXB, VX-VXC, and the vector VX itself.
  • there exist three possible results from the processing of step (1), and four possible results from the processing of step (2), for a total of seven possible results. These results are, as illustrated in FIG. 22, expressed by 3-bit adoption block codes.
  • in step (1), if it is determined that one of the vectors VXA, VXB, and VXC is equal to the motion vector VX (alternatively, that the difference falls within the range that is allowable as equal), only the adoption block code identifying that vector is transmitted, after being coded.
  • in step (2), if it is decided that none of the vectors VXA, VXB, and VXC is equal to the motion vector VX (alternatively, the differences exceed the allowable range), transmitted are both the adoption block code and the differential vector having the shortest code length among the four vectors VX-VXA, VX-VXB, VX-VXC, and VX, after being coded.
  • the blocks XA and XB will not exist (when the current block is located in the uppermost row), or the block XC will not exist (when the current block is in the leftmost column). In particular, when the current block X is disposed in the first row and column of the picture, the comparative neighboring blocks do not exist at all. In such cases, the motion vector VX of the current block X is included in the compressed picture signal as it is.
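  • a Python sketch of steps (1) and (2) is given below; the 3-bit adoption block codes and the variable-length codes for the vector values appear only in FIGS. 22 and 24, so a simple code-length proxy (sum of absolute components) is assumed here:

      def encode_current_vector(vX, vXA=None, vXB=None, vXC=None, tolerance=0):
          # step (1): check whether a neighbouring vector equals VX (within the allowable range);
          # if two or more neighbours match, the first one found represents them
          def equal(u, v):
              return v is not None and abs(u[0] - v[0]) <= tolerance and abs(u[1] - v[1]) <= tolerance

          for name, v in (("XA", vXA), ("XB", vXB), ("XC", vXC)):
              if equal(vX, v):
                  return {"adopted_block": name, "vector_value": None}   # only the adoption block code is sent

          # step (2): none is equal; choose among VX-VXA, VX-VXB, VX-VXC and VX itself
          candidates = {name: (vX[0] - v[0], vX[1] - v[1])
                        for name, v in (("XA", vXA), ("XB", vXB), ("XC", vXC)) if v is not None}
          candidates["X"] = vX
          best = min(candidates, key=lambda k: abs(candidates[k][0]) + abs(candidates[k][1]))
          return {"adopted_block": best, "vector_value": candidates[best]}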
  • the motion vector is computed from the current block of the motion picture signal.
  • the motion vector memory 52 is capable of storing, as shown in FIG. 23, twice as many motion vectors as the number of blocks in each row of the picture, i.e., the motion vectors for two rows.
  • the motion vector memory 52 holds the vectors of the row that is now being processed, i.e., the current row, and the row before it.
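  • a two-row store of this kind can be sketched in Python as follows; FIG. 23 shows the actual layout, so the access methods below are illustrative only:

      class MotionVectorMemory:
          # holds the motion vectors of the current row of blocks and of the row above it,
          # so the neighbours XA (above), XB (above right) and XC (left) stay available
          def __init__(self, blocks_per_row):
              self.blocks_per_row = blocks_per_row
              self.previous_row = [None] * blocks_per_row
              self.current_row = [None] * blocks_per_row

          def store(self, column, vector):
              self.current_row[column] = vector

          def advance_row(self):
              self.previous_row = self.current_row
              self.current_row = [None] * self.blocks_per_row

          def neighbours(self, column):
              vXA = self.previous_row[column]
              vXB = self.previous_row[column + 1] if column + 1 < self.blocks_per_row else None
              vXC = self.current_row[column - 1] if column > 0 else None
              return vXA, vXB, vXC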
  • the adoption block code and the vector value (or differential value) are coded for each block, and the coded result is transferred from the vector encoder 53 to the multiplexer 25.
  • variable-length coder (VLC) 24 described above codes the quantized transform coefficients using the VLC codes shown in FIG. 24, which are then fed into the above-mentioned multiplexer 25.
  • the codes from the vector encoder 53 are multiplexed therein and transmitted via the output buffer 26 to a transmission path or a medium such as a storage device or the like (not shown).
  • FIG. 25 is a block diagram demonstrating one embodiment of an apparatus for expanding a compressed motion picture signal compressed by the compressor shown in FIG. 20.
  • the compressed motion picture signal is supplied to the demultiplexer 42 via the input buffer 41.
  • the demultiplexer separates from the compressed motion picture signal the coded transform coefficients and the differential vector information (i.e., the adoption block code and the differential vector code).
  • the differential vector information separated by the demultiplexer 42 is supplied to the vector decoder 61, to which the motion vector memory 62 is connected, to decode the motion vector VX of the current block according to the adoption block code.
  • the output of the vector decoder is supplied to the motion compensator 48, to which the output of the frame memory 49 is connected.
  • the motion compensator performs motion compensation using the motion vector, and stores the result in the frame memory 49 as a prediction block.
  • the prediction block is supplied to the adder 46, where it is added to the block of reconstructed prediction errors from the inverse discrete cosine transform circuit 45, to provide a block of the reconstructed picture.
  • the output of the adder 46 is supplied to the frame memory 49, where it is stored, and is read out as a block of the reconstructed picture signal.
  • the three neighboring blocks, such as XA, XB, and XC, are compared with the current block X.
  • the current block may alternatively be compared with two neighboring blocks, such as XA and XB, as shown in FIG. 26, or with four neighboring blocks, such as XA, XB, XC, and XD, as shown in FIG. 27.
  • the adoption block code need not be a fixed length code of three bits, but also may be a variable-length code.
  • although the magnitude of the vectors is variable-length coded in the manner shown in FIG. 24, the present invention is not limited to this.

Abstract

Apparatus for compressing a motion picture signal divided into blocks, in which circuits subtract the blocks of the motion picture signal from corresponding prediction blocks of a prediction picture to provide prediction error blocks, orthogonally transform the error blocks to provide transform coefficients, quantize the transform coefficients to provide quantized transform coefficients, and code the quantized transform coefficients to provide coded transform coefficients. A local decoder expands the quantized transform coefficients to provide an additional prediction picture. A motion detector calculates a motion vector for each of plural subblocks obtained by dividing each block by at least four. A representative motion vector generator generates plural representative vectors from the motion vectors of the subblocks constituting each block. The representative vectors are fewer in number than the number of subblocks per block, and represent the motion vectors of the subblocks. In response to the representative vectors, a motion compensator applies motion compensation to the prediction picture to produce the prediction blocks. In a complementary expander, a demultiplexer separates coded transform coefficients and coded vector data, including the representative vectors, from the compressed picture block. A vector decoder detects and decodes the representative vectors in the coded vector data, decoding fewer representative vectors than the number of subblocks. A calculating circuit calculates the motion vectors of the subblocks from the representative vectors, and a circuit derives a block of the output picture signal from the coded transform coefficients and the motion vectors.

Description

This is a continuation of application Ser. No. 07/942,927 filed on Sep. 10, 1992, now abandoned.
BACKGROUND OF THE INVENTION
This invention relates to an apparatus for compressing a picture signal, and an apparatus for expanding a compressed picture signal, and, more particularly, to an apparatus that is suitable for use in applications in which the picture signal, once compressed, is to be recorded, and in which the compressed picture signal to be expanded has been reproduced from a recording medium.
It is known to compress a picture signal by dividing the picture signal into blocks of 8×8 pixels (=8 pixels×8 lines), and subjecting the blocks to processing by means of a discrete cosine transform (DCT), quantizing, and variable length coding the resulting transform coefficients, which are then recorded on a recording medium, e.g. a disc. The compressed picture signal recorded on the disc is then reproduced from the disc, variable length decoded, inverse-quantized, and inverse-discrete cosine transformed to reconstruct the original picture signal.
It is desirable to provide a recording medium that has a short access time and a large capacity, because a motion picture, for example, requires that a large quantity of information be stored. Presently, an NTSC video signal, for example, can be recorded on and reproduced from a conventional video disc. When it is desired to record the digital motion picture signal on a disc smaller than a conventional video disc, the motion picture signal must be subjected to high-efficiency compression, and the reproduced motion picture signal must be capable of being expanded efficiently.
To answer this problem, there have been proposed some methods for compressing the motion picture signal to be recorded with high efficiency. One of these methods is that proposed by the moving picture experts group (MPEG). The MPEG method detects a motion vector for each block of the motion picture signal, and generates a prediction block by applying motion compensation to a prediction picture according to the motion vector. This reduces redundancy in the motion picture signal in the time domain. In addition, the block of prediction errors between each block of the present picture and its corresponding prediction block is subject to a discrete cosine transform, and the resulting transform coefficients are quantized, to reduce redundancy in the motion picture signal in the spatial domain.
Attempting to increase the compression efficiency by enlarging the quantizing step size by which the transform coefficients are quantized results in larger quantizing errors. Larger quantizing errors make noise in the flat portions of the picture (i.e., the portions of the picture in which there is little detail) more obvious.
Further, in a conventional apparatus for compressing a motion picture signal, when the motion vector of each prescribed block is encoded, the differential vector between the motion vectors of the target block and the block to its left is encoded. Therefore, when a picture contains many imaged objects whose motions differ from one another, the quantity of prediction error information between the current picture and the prediction picture increases, which degrades the compression efficiency.
Furthermore, in this situation, different parts of a block can have motions that differ from one another, which degrades the prediction accuracy.
To remedy this problem, it has been suggested that each of the 8×8 blocks be divided into four 4×4 subblocks, that a motion vector be determined for each subblock, and that the motion of the block be compensated by using the resulting four motion vectors. However this proposal degrades the compression efficiency because of the increased number of motion vectors.
SUMMARY OF THE INVENTION
In view of the foregoing, an object of this invention is to provide an apparatus for compressing a motion picture signal and for expanding a compressed motion picture signal in which an original motion picture signal is compressed with high efficiency to provide a compressed signal that can be expanded to recover the original motion picture signal.
The foregoing object and other objects of the invention have been achieved by the provision of an apparatus for compressing a motion picture signal. The motion picture signal is divided into blocks. The apparatus comprises a circuit that subtracts the blocks of the motion picture signal from corresponding prediction blocks of a prediction picture to provide prediction error blocks; a circuit that orthogonally transforms the prediction error blocks to provide transform coefficients; a circuit that quantizes the transform coefficients to provide quantized transform coefficients; and a circuit that codes the quantized transform coefficients to provide coded transform coefficients. A local decoding circuit locally decodes the quantized transform coefficients to provide an additional prediction picture. A motion detector calculates a motion vector for each of plural subblocks obtained by dividing each block of the motion picture signal by at least four. A representative motion vector generating circuit generates plural representative motion vectors representing the motion vectors of the subblocks constituting each block. The representative motion vectors are generated from the motion vectors of the subblocks constituting the block, and are fewer in number than the number of subblocks constituting the block. Finally, the apparatus includes a motion compensator for producing the prediction blocks from the prediction picture by applying motion compensation to the prediction picture in response to the plural representative motion vectors.
The invention also provides a complementary expander for expanding a compressed motion picture signal. The compressed motion picture signal includes a compressed picture block obtained by compressing a block of a motion picture signal. The compressed picture block includes coded transform coefficients and coded vector data representing the block of the motion picture signal. The coded vector data includes plural representative motion vectors representing motion vectors of a number of subblocks obtained by dividing the block of the motion picture signal by at least four. The expander provides an output picture signal, and comprises a demultiplexer that separates the coded transform coefficients and the coded vector data from the compressed picture block. A vector decoder detects and decodes the plural representative motion vectors in the coded vector data. The vector decoder decodes fewer representative motion vectors than the number of subblocks. A calculating circuit calculates the motion vectors of the subblocks from the representative motion vectors. Finally, a circuit derives a block of the output picture signal from the coded transform coefficients and the motion vectors.
The nature, principle and utility of the invention will become more apparent from the following detailed description when read in conjunction with the accompanying drawings in which like parts are designated by like reference numerals or characters.
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying drawings:
FIG. 1 is a block diagram illustrating the construction of a first embodiment of an apparatus for compressing a picture signal according to the present invention;
FIGS. 2A and 2B are schematic views illustrating a block consisting of flat subblocks and unflat subblocks;
FIGS. 3A and 3B are schematic views for explaining folding in a block that includes flat subblocks and unflat subblocks in its lower and upper parts, respectively;
FIG. 4 is a table showing the DCT coefficients generated by the folding shown in FIGS. 3A and 3B;
FIGS. 5A and 5B are schematic views for explaining folding in a block that includes flat subblocks and unflat subblocks in its left and right parts, respectively;
FIG. 6 is a table showing the DCT coefficients generated by the folding shown in FIGS. 5A and 5B;
FIGS. 7A and 7B are schematic views for explaining folding in a block that includes diagonally-opposed flat subblocks and unflat subblocks;
FIGS. 8A and 8B are schematic views for explaining folding in a block that includes only one unflat subblock;
FIG. 9 is a table showing the DCT coefficients generated by the folding shown in FIGS. 8A and 8B;
FIGS. 10A and 10B are schematic views for explaining a zigzag scan;
FIGS. 11A and 11B are a schematic view of a block and a table of the pixel values thereof, respectively;
FIG. 12 is a block diagram illustrating an apparatus for expanding a compressed picture signal compressed by the apparatus shown in FIG. 1 for compressing a picture signal;
FIG. 13 is a block diagram illustrating the construction of a second embodiment of an apparatus for compressing a picture signal according to the present invention;
FIG. 14 is a schematic view illustrating a block divided into four subblocks;
FIG. 15 is a schematic view for explaining the method by which two representative vectors represent the motion vectors of the four subblocks shown in FIG. 14;
FIG. 16 is a table for explaining how the different patterns shown in FIG. 15 are detected in response to the differential vectors;
FIG. 17 is a table for explaining how the different patterns shown in FIG. 15 are coded;
FIG. 18 is a block diagram illustrating the construction of one embodiment of an apparatus for expanding a compressed picture signal compressed by the apparatus shown in FIG. 13 for compressing a picture signal;
FIG. 19 is a schematic view for explaining the evaluation of a representative vector;
FIG. 20 is a block diagram illustrating the construction of a third embodiment of an apparatus for compressing a picture signal according to the present invention;
FIG. 21 is a schematic view for explaining the method used in the embodiment shown in FIG. 20 in which the present picture is segmented into blocks for carrying out motion compensation;
FIG. 22 is a table for explaining the differential vector, and the adoption block code used in the embodiment shown in FIG. 20;
FIG. 23 is a schematic view for explaining the construction of the motion vector memory 52 used in the embodiment shown in FIG. 20;
FIG. 24 is a table illustrating the variable length coding adopted for the vector value in the embodiment shown in FIG. 20;
FIG. 25 is a block diagram illustrating the construction of one embodiment of an apparatus for expanding a compressed picture signal compressed by the apparatus shown in FIG. 20 for compressing a picture signal;
FIG. 26 is a schematic view for explaining another embodiment in which the present picture is segmented into blocks for carrying out motion compensation; and
FIG. 27 is a schematic view for explaining a further embodiment in which the present picture is segmented into blocks for carrying out motion compensation.
DETAILED DESCRIPTION OF THE INVENTION
Preferred embodiments of this invention will be described with reference to the accompanying drawings:
FIG. 1 shows a block diagram illustrating the construction of one embodiment of an apparatus according to the present invention for compressing a motion picture signal. The principles of the apparatus will be described first. Discrete cosine transform (DCT) processing is applied to blocks of the motion picture signal consisting of, e.g., 8×8 pixels (8 pixels×8 lines). When a discrete cosine transform is applied to a block of the motion picture signal, many of the resulting transform coefficients are zero. Therefore, the number of bits of the compressed picture signal required to represent the transform coefficients can be reduced by including in the compressed picture signal data indicating the number of transform coefficients that are zero. This enables more bits to be allocated for quantizing the non-zero transform coefficients, which, in turn, reduces the quantizing noise. The apparatus shown in FIG. 1 is constructed to reduce further the volume of the compressed picture signal required to represent the transform coefficients by increasing the number of zero transform coefficients.
FIGS. 2A and 2B show a block composed of 8×8 pixels segmented into four subblocks, each consisting of 4×4 pixels. Then, the flatness of each subblock is measured. "Flatness," as used herein, indicates that the variation in the pixel values within the subblock is small.
FIGS. 2A and 2B depict, as an example, an object, the balloon GX, in the right two subblocks, whereas the left two subblocks are devoid of any detail. More specifically, in this instance, the left two subblocks are deemed to be flat subblocks, while the right two subblocks are deemed to be unflat subblocks. A flat subblock is indicated by a logical 1, whereas an unflat subblock is indicated by a logical 0. The flatness of the block shown in FIG. 2A may therefore be indicated as shown in FIG. 2B.
The flatness of each subblock is indicated by a logical 0 or a logical 1. Consequently, one block consisting of four subblocks can have 16 (=2×2×2×2) possible patterns of flat/unflat subblocks. Hence, flatness information of one block can be indicated by a 4-bit word.
For example, in FIG. 3A, a heart-shaped object is depicted in the upper two subblocks, whereas the lower two subblocks are devoid of any detail. In this case, the flatness of the upper two subblocks is indicated by a logical 0, whereas the flatness of the lower two subblocks is indicated by a logical 1. In this instance, the unflat upper two subblocks are folded over the flat lower two subblocks about the horizontal center line L1. This results in the picture shown in FIG. 3B. When a block that is symmetrical between its upper and lower halves, i.e., about the center line L1, is discrete cosine transform processed, alternate rows of the resulting discrete cosine transform coefficients (transform coefficients) are all zero, as shown in FIG. 4. Thus, the transform coefficients in alternate (even-numbered) lines are all zero.
Further, in FIG. 5A, a heart-shaped object is shown in the right two subblocks in the block, whereas the left two subblocks are devoid of any detail. In this instance, the unflat subblocks are folded over the flat subblocks about the vertical center line L2. This results in the picture shown in FIG. 5B. When DCT processing is applied to the block shown in FIG. 5B, alternate columns of the resulting transform coefficients are all zero, as shown in FIG. 6.
In a further example, FIG. 7A shows a heart-shaped object and a star-shaped object in the left upper subblock and in the right lower subblock, respectively, of the block, whereas the right upper and left lower subblocks are devoid of detail. In this case, the picture illustrated in FIG. 7B results from folding the block about the vertical center line L2. Applying DCT processing to this block also results in transform coefficients, alternate columns of which are zero, as shown in FIG. 6.
In a yet further example, FIG. 8A shows a heart-shaped object in only the right lower subblock, whereas the remaining three subblocks are devoid of detail. In this case, the lower subblocks are folded over the upper subblocks about the horizontal center line L1, and the right subblocks are folded over the left subblocks about the vertical center line L2. The resulting block is illustrated in FIG. 8B. When DCT processing is applied to the block illustrated in FIG. 8B, alternate rows and alternate columns of the resulting transform coefficients are all zero, as illustrated in FIG. 9. In other words, the zero patterns of the transform coefficients shown in FIG. 4 and in FIG. 6 are combined.
Determining the flatness of the subblocks in a block, and, when two or more subblocks are flat, performing one or more folding operations to increase the symmetry of the block, as described above, increases the number of zero transform coefficients when the block is DCT processed. This increases the number of quantizing bits available to represent the other, non-zero, transform coefficients in the compressed picture signal. Instead of including in the compressed picture signal data for each zero transform coefficient, data are included indicating the number of zero transform coefficients resulting from the DCT transform of the block. Since the zero transform coefficients are known from the flatness information, the number of bits available to represent the non-zero transform coefficients can be increased.
The block of transform coefficients can be read using a zigzag scan along lines at 45 degrees to the block, as shown in, e.g., FIG. 10A. The resulting non-zero coefficients and data indicating the number of zero transform coefficients are included in the compressed picture signal. In accordance with the embodiment discussed above, however, rows and columns in which all of the transform coefficients are zero appear periodically, and the locations of these rows and columns are known in advance. Hence, as illustrated in FIG. 10B, the zigzag-scan can be performed by skipping the rows and/or columns in which the transform coefficients are known to be zero. This way, the number of bits required to represent the transform coefficients in the compressed picture signal can be reduced.
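The skipping idea can be expressed as a short sketch in code. This is an illustration only, not the circuit of the embodiment: the function names, the 0-based row and column indexing, and the assumption that the surviving lines are the even-indexed ones are assumptions made here for the example.

```python
import numpy as np

def zigzag_indices(rows, cols):
    """Return (row, column) pairs in zigzag order along 45-degree diagonals."""
    order = []
    for s in range(rows + cols - 1):
        diagonal = [(i, s - i) for i in range(rows) if 0 <= s - i < cols]
        order.extend(diagonal if s % 2 else diagonal[::-1])
    return order

def scan_skipping_zero_lines(coefficients, skip_rows=False, skip_cols=False):
    """Zigzag-scan an 8x8 block of DCT coefficients, omitting the alternate
    rows and/or columns that folding is known to have forced to zero
    (compare FIGS. 4, 6 and 9)."""
    block = np.asarray(coefficients)
    if skip_rows:                 # keep rows 0, 2, 4, 6 (0-based; assumed)
        block = block[0::2, :]
    if skip_cols:                 # keep columns 0, 2, 4, 6
        block = block[:, 0::2]
    return [block[i, j] for i, j in zigzag_indices(*block.shape)]
```

Only the coefficients returned by such a scan, together with the flatness information, would need to be represented in the compressed picture signal.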
In the apparatus shown in FIG. 1 for compressing a motion picture signal, the input picture signal SIN is supplied to the flat block judgment circuit 1, where it is divided into blocks, each block is segmented into subblocks, and the flatness of each subblock is judged. Then, the flat block judgment circuit 1 supplies the folding circuit 2, the switch 3, the variable length coder 6, and the signal multiplexer 7 with 4-bit flatness information S1, which indicates which of the four subblocks constituting the block is flat. Further, the flat block judgment circuit 1 calculates a representative value S2 of the flat subblocks, which it feeds to the signal multiplexer 7.
Each block of the motion picture signal is fed from the flat block judgment circuit 1 to the switch 3 either directly, or via the folding circuit 2, and thence to the discrete cosine transform (DCT) circuit 4. The transform coefficients from the DCT circuit 4 are supplied to the quantizer 5, where they are quantized. The quantized transform coefficients from the quantizer 5 are supplied to the variable length coder 6, and the output thereof is supplied to the signal multiplexer 7. In the signal multiplexer, the coded transform coefficients are multiplexed with the representative value S2 from the flat block judgment circuit 1. The multiplexed output is supplied to the output buffer 8, where it is temporarily stored. The compressed picture signal Sout is read out from the output buffer 8 for recording on a suitable recording medium (not shown), such as a disc.
Next, the operation of the circuit will be explained. The flat block judgment circuit 1 segments the input motion picture signal into blocks of 8×8 pixels (8 pixels×8 lines). Then, each block is segmented into four subblocks, each having 4×4 pixels (4 pixels×4 lines). Additionally, the flatness of each of the four subblocks is judged. To determine the flatness of a subblock, for instance, if the difference between the maximum and the minimum of the pixel values in the subblock is smaller than a preset reference value, the subblock is judged to be flat. Alternatively, the flatness can also be judged from, e.g., the dispersion within the subblock. As explained above, the flatness information for each block, indicating which of the four subblocks constituting the block are flat, is a 4-bit code.
The flat block judgment circuit 1 also computes the representative value of each of the subblocks judged to be flat, and feeds each representative value S2 to the signal multiplexer 7. The representative value can be the left upper element A00 in FIG. 11B, which corresponds to the DC component of the transform coefficients. Alternatively, the representative value of the subblock can be the mean of the pixel values in the subblock. In this case, the 16 pixel values within the subblock are added together, and the resulting sum is divided by 16 (4×4) to calculate the representative value. One representative value can be provided for each subblock; or one representative value can be provided for each block.
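The flatness test and the representative value just described can be sketched in a few lines of code. The sketch is illustrative only: the threshold value, the bit ordering of the 4-bit word, and the subblock layout are assumptions, while the max-minus-min test and the subblock mean are the alternatives the description itself mentions.

```python
import numpy as np

def flatness_and_representatives(block, threshold=8):
    """Judge the flatness of the four 4x4 subblocks of an 8x8 block and
    compute a representative value for each flat subblock.

    Returns (flatness_word, representatives): a 4-bit word with one bit per
    subblock (1 = flat, 0 = unflat, A as the most significant bit) and a
    dict mapping each flat subblock to the mean of its 16 pixel values.
    Subblock layout assumed from FIGS. 11A and 14: A upper left, B upper
    right, C lower left, D lower right."""
    block = np.asarray(block, dtype=float).reshape(8, 8)
    subblocks = {'A': block[0:4, 0:4], 'B': block[0:4, 4:8],
                 'C': block[4:8, 0:4], 'D': block[4:8, 4:8]}
    flatness_word = 0
    representatives = {}
    for bit, (name, sub) in enumerate(subblocks.items()):
        if sub.max() - sub.min() < threshold:      # max-minus-min flatness test
            flatness_word |= 1 << (3 - bit)
            representatives[name] = sub.mean()     # sum of 16 values / 16
    return flatness_word, representatives
```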
The folding circuit 2 folds each block of the motion picture signal from the flat block judgment circuit 1, in response to the flatness information supplied by the flat block judging circuit 1. For example, in the block shown in FIG. 11A, if the three subblocks A, B and C are flat, and the subblock D is unflat, the folding process shown in FIGS. 8A and 8B is performed. If the pixel values in subblocks A' through D' of the block obtained by folding are indicated by a'ij, b'ij, c'ij, d'ij, respectively, these pixel values are computed by the following formulae:
a'ij=d(3-i)(3-j)                                           (1)
b'ij=d(3-i)j                                               (2)
c'ij=di(3-j)                                               (3)
d'ij=dij                                                   (4)
where, as shown in FIG. 11B, dmn represents the pixels of the subblock D shown in FIG. 11A before the folding process is executed.
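For the case of FIGS. 8A and 8B, formulae (1) through (4) can be applied directly, as in the sketch below. It is illustrative only; the same subblock layout as above is assumed, and only the case in which subblock D is the sole unflat subblock is shown.

```python
import numpy as np

def fold_block_unflat_d(block):
    """Fold an 8x8 block whose only unflat subblock is D (lower right),
    mirroring the content of D into A', B' and C' per formulae (1)-(4)."""
    block = np.asarray(block, dtype=float).reshape(8, 8)
    d = block[4:8, 4:8]                          # dij of FIG. 11B
    folded = np.empty_like(block)
    for i in range(4):
        for j in range(4):
            folded[i, j]         = d[3 - i, 3 - j]   # (1) a'ij = d(3-i)(3-j)
            folded[i, j + 4]     = d[3 - i, j]       # (2) b'ij = d(3-i)j
            folded[i + 4, j]     = d[i, 3 - j]       # (3) c'ij = di(3-j)
            folded[i + 4, j + 4] = d[i, j]           # (4) d'ij = dij
    return folded
```

A discrete cosine transform of the folded block then has zero coefficients in alternate rows and columns, as illustrated in FIG. 9.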
The switch 3 switches between its upper and lower contacts as shown in the figure in accordance with the flatness information supplied from the flat block judgment circuit 1. Consequently, blocks of the motion picture signal before being folded, or blocks of the folded picture signal are fed to the discrete cosine transform circuit 4 as required. The discrete cosine transform circuit 4 applies discrete cosine transform processing to each block of folded or non-folded motion picture signal. The resulting transform coefficients from the discrete cosine transform circuit 4 are supplied to the quantizer 5, where they are quantized using a predetermined quantizing step size.
The quantized transform coefficients are supplied from the quantizer 5 to the variable-length coder 6, where they are variable-length coded. The variable-length coder 6, as described above with reference to FIG. 10B, reads each block of transform coefficients by performing a zigzag-scan, skipping the rows and/or columns in which all the transform coefficients are zero as a result of the folding. This way, the transform coefficients are variable-length coded. The flat block judgment circuit 1 supplies the variable-length coder 6 with the flatness information S1 to tell the variable-length coder which rows and/or columns are to be skipped. The variable-length coder 6 determines from the flatness information S1 how the folding was performed, and, hence, the rows and/or columns of zero transform coefficients resulting from transforming the folded block.
The variable-length coded coefficients from the variable-length coder 6 are supplied to the signal multiplexer 7, where they are multiplexed with the representative value of the subblock S2 supplied by the flat block judgment circuit 1. The resulting multiplexed signal is supplied to the output buffer 8, whence the compressed picture signal is subsequently read for recording on the disc (not shown).
FIG. 12 shows the construction of one embodiment of an apparatus for expanding the compressed picture signal compressed by the apparatus shown in FIG. 1 for compressing a motion picture signal. In the apparatus shown in FIG. 12, the input buffer 11 temporarily stores the compressed picture signal reproduced from the recording medium (not shown), such as a disc. The demultiplexer 12 separates the compressed picture signal received from the input buffer 11 into blocks of coded transform coefficients, representative values, and flatness information. The variable-length coding of the coded coefficients S11 from the demultiplexer 12 is reversed by the inverse variable length coder 13, and the quantizing of the resulting quantized transform coefficients is reversed by the inverse quantizer 14. The inverse discrete cosine transform circuit 15 applies an inverse discrete cosine transform to each block of transform coefficients from the inverse quantizer 14, and the resulting block of pixel values is fed to the switch 17 directly, and via the restoring circuit 16. The restoring circuit restores those subblocks of pixel values that are flat subblocks to picture blocks using the representative value S2X from the demultiplexer 12. The switch 17 selects blocks of pixel values from the output of the restoring circuit 16 or from the output of the inverse discrete cosine transform circuit 15 in response to the flatness information S1X.
The operation of the apparatus for expanding the compressed picture signal will now be described. The demultiplexer 12 separates the coded transform coefficients from the compressed picture signal read out of the input buffer 11, and supplies them to the inverse variable-length coder 13. The demultiplexer 12 also separates the representative value S2X and the flatness information S1X from the compressed picture signal. The flatness information S1X is fed to the inverse variable-length coder 13, the restoring circuit 16, and the switch 17, while the representative value S2X is fed to the restoring circuit 16.
The inverse variable-length coder 13 applies inverse variable-length coding processing to the coded transform coefficients from the demultiplexer 12. The inverse variable-length coder, in accordance with the flatness information S1X from the demultiplexer 12, inserts zeroes into the rows and/or columns of quantized transform coefficients that were skipped in the compressor as a result of folding. The inverse quantizer 14 inversely quantizes the quantized transform coefficients from the inverse variable-length coder 13, and feeds the resulting transform coefficients to the inverse discrete cosine transform circuit 15. The inverse discrete cosine transform circuit 15 applies inverse discrete cosine transform processing to each block of transform coefficients from the inverse quantizer 14, and provides corresponding blocks of pixel values.
As stated above, some of the blocks of pixel values are obtained as a consequence of folding in the compressor. For these blocks, in response to the flatness information S1X, the restoring circuit 16 replaces the flat subblock data, which was suppressed by folding in the compressor, with the representative value S2X from the demultiplexer 12. The subblocks suppressed by folding are thereby restored. The switch 17 selects either the output of the restoring circuit 16, or the output of the inverse discrete cosine transform circuit 15. An output signal corresponding to the original motion picture signal is therefore restored and fed to the output terminal.
In the system just described, the representative pixel value of the flat subblocks is included in the compressed picture signal in lieu of the coded transform coefficients of the flat subblocks. However, transmitting no representative pixel value is also possible. In this case, only the flatness information, which specifies the flat subblocks, is transmitted. In this instance, in the expander, the inverse discrete cosine transformation is executed first, to provide the respective pixel values. Then, the pixel values in the flat subblocks are obtained by smoothing in accordance with the flatness information. The smoothing method involves, e.g., replacing the respective pixel values, or clipping the pixel values to a predetermined range. If no representative pixel values are included in the compressed picture signal, it is impossible to reduce the number of transform coefficients by folding.
In the embodiment described above, the blocks of the motion picture signal are DCT processed. However, the folding process just described, which reduces the number of bits in the compressed picture signal, can be used with any transform method in which multiple zero coefficients result from transforming a symmetrical block, as in, e.g., a Hadamard transform. Further, the present invention is, as a matter of course, applicable to a still picture signal, and is not simply limited to a motion picture signal.
Next, FIG. 13 is a block diagram illustrating the second embodiment of an apparatus according to the present invention for compressing a motion picture signal.
In FIG. 13, the discrete cosine transform circuit 22 applies DCT processing to blocks of the motion picture signal SIN, or to a block of prediction errors between a block of the motion picture signal and a corresponding prediction block of a prediction picture, generated by the subtraction circuit 21. The quantizer 23 quantizes the resulting transform coefficients from the DCT circuit 22. The variable-length coder 24 applies variable-length coding to the resulting quantized transform coefficients. The quantized transform coefficients from the quantizer 23 are fed to the inverse quantizer 27, where they are inversely quantized. The inverse discrete cosine transform circuit 28 applies inverse DCT processing to the resulting transform coefficients to provide a block of reconstructed prediction errors. The block of reconstructed prediction errors is fed into the adder 34, where it is added to the prediction block. The resulting reconstructed picture block is stored in the frame memory 29 as a block of a prediction picture.
The vector arithmetic unit 30 segments the blocks of 8×8 pixels (8 pixels×8 lines) of the motion picture signal into four subblocks A, B, C and D, each having 4×4 pixels (4 pixels×4 lines). The vector arithmetic unit 30 also computes a motion vector VA, VB, VC, and VD for each subblock, using the prediction pictures stored in the frame memory 29. This is shown in FIG. 14.
The vector arithmetic unit 30 also computes the following values:
the value VAB of the differential vector between the motion vector VA of the subblock A and the motion vector VB of the subblock B,
VAB=|vector VA-vector VB|                (5)
the value VAC of the differential vector between the motion vector VA of the subblock A and the motion vector VC of the subblock C,
VAC=|vector VA-vector VC|                (6)
the value VAD of the differential vector between the motion vector VA of the subblock A and the motion vector VD of the subblock D,
VAD=|vector VA-vector VD|                (7)
the value VBC of a differential vector between the motion vector VB of the subblock B and the motion vector VC of the subblock C,
VBC=|vector VB-vector VC|                (8)
the value VBD of a differential vector between the motion vector VB of the subblock B and the motion vector VD of the subblock D, and
VBD=|vector VB-vector VD|                (9)
the value VCD of a differential vector between the motion vector VC of the subblock C and the motion vector VD of the subblock D.
VCD=|vector VC-vector VD|                (10)
The vector arithmetic unit 30 then compares the values of the differential vectors with a predetermined threshold THR, and feeds the results to the block pattern judging unit 31. The block pattern judging unit 31 selects one of the motion vectors VA, VB, VC, or VD of the subblocks A, B, C, or D as each of the one or two representative vectors shown in FIG. 15 for instance.
In FIG. 15, the first pattern PT1 requires a single representative motion vector to represent the motion vectors VA, VB, VC, and VD of the four subblocks. The pattern PT1 is the same as if motion compensation were applied to the complete block.
The second pattern PT2 requires two representative motion vectors to represent the sets of motion vectors VA and VB, and VC and VD, respectively. The third pattern PT3 requires two representative motion vectors which represent the sets of motion vectors VA and VC, and VB and VD, respectively.
The fourth pattern PT4 requires two representative motion vectors, one of which represents motion vector VA and is, for instance, the motion vector VA itself, the other of which represents the set of motion vectors VB, VC and VD. The fifth pattern PT5 requires two representative motion vectors, one of which represents motion vector VB, and is, for instance, motion vector VB itself, the other of which represents the set of motion vectors VA, VC and VD. The sixth pattern PT6 requires two representative motion vectors, one of which represents motion vector VC, and is, for instance, motion vector VC itself, the other of which represents the set of motion vectors VA, VB and VD. The seventh pattern PT7 requires two representative motion vectors, one of which represents motion vector VD, and is, for instance, motion vector VD itself, the other of which represents the set of motion vectors VA, VB and VC.
The block pattern judging unit 31, in response to the results from the vector arithmetic unit 30, selects one of the available patterns according to Table 1 (FIG. 16). The block pattern judging circuit feeds a selected pattern signal, indicating the selected pattern, into the motion compensation circuit 32, which, in response to the selected pattern signal and the representative motion vectors, performs motion compensation on the prediction pictures stored in the frame memory 29.
Note that in Table 1 of FIG. 16, an O indicates that the value of the differential vector is smaller than the threshold THR, and an X indicates that the value of the differential vector is larger than the threshold THR.
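Since Table 1 itself is given only in FIG. 16, the exact selection rules cannot be reproduced here; the sketch below shows one plausible way the six O/X comparison results might map onto the patterns PT1 through PT7 of FIG. 15, and every grouping rule in it should be read as an assumption rather than as the table of the embodiment.

```python
import numpy as np

def select_block_pattern(va, vb, vc, vd, thr):
    """Compare the six differential-vector values of formulae (5)-(10) with
    the threshold THR and pick one of the patterns PT1-PT7 of FIG. 15.
    The grouping logic is an assumed reading of FIG. 16."""
    vectors = {'A': np.asarray(va), 'B': np.asarray(vb),
               'C': np.asarray(vc), 'D': np.asarray(vd)}
    pairs = (('A', 'B'), ('A', 'C'), ('A', 'D'),
             ('B', 'C'), ('B', 'D'), ('C', 'D'))
    close = {pair: np.linalg.norm(vectors[pair[0]] - vectors[pair[1]]) < thr
             for pair in pairs}
    if all(close.values()):
        return 'PT1'                              # one vector for the block
    if close[('A', 'B')] and close[('C', 'D')]:
        return 'PT2'                              # {A, B} and {C, D}
    if close[('A', 'C')] and close[('B', 'D')]:
        return 'PT3'                              # {A, C} and {B, D}
    # PT4-PT7: one subblock stands alone, the other three move together
    trios = {'PT4': ('B', 'C', 'D'), 'PT5': ('A', 'C', 'D'),
             'PT6': ('A', 'B', 'D'), 'PT7': ('A', 'B', 'C')}
    for pattern, trio in trios.items():
        trio_pairs = [(p, q) for p in trio for q in trio if p < q]
        if all(close[(p, q)] for p, q in trio_pairs):
            return pattern
    return None    # no pattern fits; FIG. 16 presumably resolves this case
```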
The selected pattern signal is also fed into the vector encoder 33. Based on, e.g., Table 2, shown in FIG. 17, the vector encoder generates the appropriate variable-length selected pattern code to indicate the selected pattern. The selected pattern code is fed into the multiplexer 25, where it is multiplexed with the code from the variable-length coder 24.
Next, the operation of the apparatus described above will be described. To reduce redundancy in the time domain, the subtractor 21 derives a block of prediction errors between a block of the current picture and a corresponding prediction block of a prediction picture read out of the frame memory 29. The block of prediction errors is fed into the discrete cosine transform (DCT) circuit 22. The DCT circuit 22 applies a discrete cosine transform to the block of prediction errors, and feeds the resulting transform coefficients into the quantizer 23. The quantizer quantizes the transform coefficients, and the resulting quantized transform coefficients are then variable-length coded by the variable-length coder (VLC) 24. The resulting coded transform coefficients are then fed to the output terminal via the multiplexer 25 and the output buffer 26.
In addition, the inverse quantizer 27 and the inverse DCT circuit 28 respectively apply inverse quantizing and an inverse discrete cosine transform to the quantized transform coefficients from the quantizer 23, and the resulting block of reconstructed prediction errors is fed to the adder 34. The block of reconstructed prediction errors supplied to the adder 34 is a reconstruction of the block of prediction errors produced by the subtractor 21.
The motion compensation circuit 32 performs motion compensation on the prediction picture, for example, the previous picture, in response to the selected pattern signal and the representative motion vectors from the block pattern judging unit 31. The prediction block, which is a block of the prediction picture to which motion compensation has been applied according to the selected pattern signal and the representative motion vectors, is read out from the frame memory 29 and fed to the adder 34 and the subtractor 21. The adder 34 adds the prediction block to the block of reconstructed prediction errors from the inverse DCT circuit 28, and the resulting reconstructed picture block is supplied to the frame memory 29, where it is stored as a block of another prediction picture.
To generate the prediction block, the motion picture input signal, (e.g. a digital video signal) is fed into the vector arithmetic unit 30, where the above-mentioned motion vectors VA, VB, VC, and VD of the four subblocks A, B, C, and D, respectively, are computed and detected. The vector arithmetic unit additionally calculates the magnitudes VAB, VAC, VAD, VBC, VBD, and VCD of the differences between pairs of these vectors, i.e. the differential vectors. The resulting vectors and differential vectors are fed into the block pattern judging unit 31, wherein the block pattern for each block is selected. More specifically, the magnitudes of the differential vectors are compared to the threshold THR to determine which of the patterns PT1 through PT7 to apply in accordance with the selection rules shown in Table 1 (FIG. 16).
The resulting selected pattern signal is fed into the vector encoder 33, wherein the selected pattern signal is coded using, for example, the variable-length codes shown in Table 2 (FIG. 17). The coded pattern signal is fed into the multiplexer 25, where it is multiplexed with the coded transform coefficients, as described above. In addition, as described above, the selected pattern signal is also fed into the motion compensator 32, where it is employed for providing motion compensation.
Next, FIG. 18 is a block diagram illustrating an embodiment of an apparatus for expanding the compressed motion picture signal compressed by the motion picture signal compressor shown in FIG. 13. The compressed motion picture signal SINX is fed into the demultiplexer 42 through the input buffer 41. In the demultiplexer, the compressed motion picture signal is separated into coded transform coefficients and coded pattern information. The coded transform coefficients are fed into the inverse variable-length coder (VLC) 43 where the variable-length coding is decoded. The resulting quantized transform coefficients are inversely quantized by the inverse quantizer 44, and the resulting transform coefficients are inverse discrete cosine transformed by the inverse discrete cosine transform circuit 45.
The resulting blocks of reconstructed prediction errors are, in the same way as in the local decoder of the compressor, added to the corresponding prediction blocks from the frame memory 49, and the sums are stored in the frame memory 49 as blocks of a new prediction picture.
The coded selected pattern signal is fed into the pattern information decoder 47, where it is decoded to provide a selected pattern signal and motion vectors. This information is fed into the motion compensator 48, where it is used to apply motion compensation to the prediction picture in the frame memory 49. In this manner, the compressed motion picture signal is expanded to reconstruct the original motion picture signal.
Referring now to FIG. 19, a practical example of a method for calculating a representative motion vector will be described. The representative motion vector is determined by performing block matching between the prediction picture and each of the two areas into which the current block is divided. Block matching finds the displacement for which the absolute difference sum between the area and the prediction picture is a minimum. The respective symbols are defined as follows:
PO (x, y): pixel value of coordinates in the current picture
PREF (x, y): pixel value of coordinates of the prediction block in the prediction picture
(xA, yA): coordinates of left upper corner of subblock A
(vX, vY): motion vector
L: number of pixels of one side of the subblock
vLIMIT: search range of motion vector
Let sumA (vX, vY) be the absolute difference sum of the subblock A when the motion vector is (vX, vY). SumA (vX, vY) is expressed as follows:
sumA (vX, vY)=Σ(i=0 to L-1) Σ(j=0 to L-1) |PO (xA+j, yA+i)-PREF (xA+j+vX, yA+i+vY)|      (11)
The value of (vX, vY) for which sumA (vX, vY) is a minimum in the range -vLIMIT≦vX≦vLIMIT, -vLIMIT≦vY≦vLIMIT is set as the motion vector VA=(vAX, vAY) of the subblock A.
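A direct transcription of formula (11), and of the search it implies, is sketched below. The function and parameter names, the picture[y][x] indexing convention, and the treatment of the picture border (the search is assumed to stay inside the prediction picture) are assumptions made for the example.

```python
def sad(current, prediction, xA, yA, vX, vY, L=4):
    """sumA(vX, vY) of formula (11): absolute difference sum between the LxL
    area of the current picture with upper-left corner (xA, yA) and the area
    of the prediction picture displaced by the motion vector (vX, vY)."""
    total = 0
    for i in range(L):
        for j in range(L):
            po = int(current[yA + i][xA + j])
            pref = int(prediction[yA + i + vY][xA + j + vX])
            total += abs(po - pref)
    return total

def subblock_motion_vector(current, prediction, xA, yA, L=4, vLIMIT=7):
    """Exhaustive block matching: return the (vX, vY) within the search range
    -vLIMIT <= vX, vY <= vLIMIT that minimises the sum, as the motion vector."""
    best_vector, best_sum = None, None
    for vY in range(-vLIMIT, vLIMIT + 1):
        for vX in range(-vLIMIT, vLIMIT + 1):
            s = sad(current, prediction, xA, yA, vX, vY, L)
            if best_sum is None or s < best_sum:
                best_vector, best_sum = (vX, vY), s
    return best_vector, best_sum
```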
Similarly, sumB (vX, vY), sumC (vX, vY), and sumD (vX, vY) are calculated and stored, together with sumA (vX, vY). The motion vectors of subblocks B, C and D are also obtained. Then, the differences between these motion vectors are calculated, from which the block patterns are determined using the rules set out in Table 1.
For instance, when the pattern PT2 is selected,
sumA+B (vX, vY)=sumA (vX, vY)+sumB (vX, vY)                (12)
The value of (vX, vY) for which sumA+B is a minimum in the range of -vLIMIT≦vX≦vLIMIT, -vLIMIT≦vY≦vLIMIT is set as the vector VA+B=(v(A+B)X, v(A+B)Y). The vector VC+D=(v(C+D)X, v(C+D)Y), which gives a minimum value of sumC+D (vX, vY)=sumC (vX, vY)+sumD (vX, vY), is calculated in a similar manner. The resulting vectors VA+B, VC+D are representative motion vectors.
In the case of the block pattern PT4,
sumB+C+D (vX, vY)=sumB (vX, vY)+sumC (vX, vY)+sumD (vX,vY) (13)
and the vector VB+C+D is calculated therefrom. The representative vectors are vectors VA and VB+C+D.
In the case of the block pattern PT1,
sumA+B+C+D (vX, vY)=sumA (vX, vY)+sumB (vX, vY)+sumC (vX, vY)+sumD (vX, vY)(14)
and the representative vector VA+B+C+D is calculated therefrom.
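Because the per-subblock sums sumA through sumD are stored for every candidate (vX, vY), the combined sums of formulae (12) through (14) are simply element-wise additions of the stored tables, and the representative vector is read off at the minimum. A minimal sketch, assuming the tables are indexed by (vY + vLIMIT, vX + vLIMIT):

```python
import numpy as np

def representative_vector(sad_tables, group, vLIMIT=7):
    """Sum the stored SAD tables of the grouped subblocks (e.g. ('A', 'B')
    for pattern PT2, or ('B', 'C', 'D') for PT4) and return the (vX, vY)
    at which the combined sum is a minimum."""
    combined = sum(np.asarray(sad_tables[name]) for name in group)
    iy, ix = np.unravel_index(np.argmin(combined), combined.shape)
    return ix - vLIMIT, iy - vLIMIT
```

For pattern PT2, this would be applied once to ('A', 'B') and once to ('C', 'D'); for PT1, once to all four subblocks.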
Note that the methods of selecting the block pattern and the representative vector are not limited to those of the embodiment discussed above. For example, another method of selecting the block pattern could involve selecting based on the magnitude of the absolute difference sum with respect to the prediction picture. Further, as a method of selecting the representative motion vector, in the case of, e.g., the block pattern PT2, the representative vectors may be (vector VA+vector VB)/2 and (vector VC+vector VD)/2. In the case of the pattern PT4, the representative vectors may be the vector VA and (vector VB+vector VC+vector VD)/3.
Moreover, the block pattern codes are not limited to those set forth in Table 2. In addition, in the embodiment described above, the motion vectors of the four subblocks are expressed by two representative motion vectors, but they may alternatively be expressed by three representative motion vectors.
FIG. 20 is a block diagram illustrating a third embodiment of an apparatus for compressing a motion picture signal. In FIG. 20, parts corresponding to those in the embodiment shown in FIG. 13 are designated by like reference numerals or characters. The motion picture signal, divided into blocks of, for instance, 8×8 pixel values, is fed into the motion vector arithmetic unit 51. The output of the motion vector arithmetic unit 51 is fed into the motion vector memory 52 and stored therein. The output of the motion vector arithmetic unit 51 is also fed into the motion compensator 32, and into the vector encoder 53. The prediction picture data read out from the frame memory 29 is fed into the motion compensator 32. The output read from the motion vector memory 52 is also supplied to the vector encoder 53. The vector encoder processes the information it receives according to the arithmetic operations described below, so that the differential vector information and the adoption block code are encoded therein.
The coded transform coefficients coded by the variable length coder 24 are supplied to the multiplexer 25, which multiplexes them with the differential vector information and feeds the resulting multiplexed signal to the output buffer 26. The compressed motion picture signal is read out of the output buffer for recording on a disc (not shown), for instance.
The operation of the circuit shown in FIG. 20 will be described with reference to FIG. 21 through FIG. 24. FIG. 21 illustrates how the motion picture signal is segmented into blocks, each of which includes 8×8 pixel values, as the units for carrying out motion compensation. Shown in this example is the case in which the motion vectors of the three neighboring blocks XA, XB, and XC, located above, above and to the right, and to the left of the current block X, respectively, are to be compared with the motion vector of the current block X.
Each block has one motion vector indicating its motion with respect to a prediction block of the prediction picture, which is one picture before the current picture. More specifically, VX is defined as the motion vector of the current block X, while VXA, VXB, and VXC are defined as the motion vectors of the neighboring blocks XA, XB, and XC, respectively. To code VX using the fewest bits, processing based on the following algorithm is executed:
(1) The first step is to check which, if any, of the motion vectors VXA, VXB, and VXC equals VX (or, alternatively, whether the differences fall within a range small enough to be treated as equal). If two or more vectors are equal to VX, they are represented by one vector; and
(2) Next, if none of the vectors VXA, VXB, and VXC is judged equal to VX (or, alternatively, if the differences exceed the allowable range), the vector that can be coded with the shortest code among the four vectors VX-VXA, VX-VXB, VX-VXC, and VX is selected (in the simplest case, the vector having the smallest magnitude among the four is selected).
There exist three possible results from the processing of step (1), and four possible results from the processing of step (2), for a total of seven possible results. These results are, as illustrated in FIG. 22, expressed by 3-bit adoption block codes.
In the case of step (1), if it is determined that one of the vectors VXA, VXB, and VXC is equal to the motion vector VX (or, alternatively, that the difference falls within the range that is treated as equal), only the corresponding adoption block code is transmitted. In the case of step (2), if it is decided that none of the vectors VXA, VXB, and VXC is equal to the motion vector VX (or, alternatively, that the differences exceed the allowable range), the differential vector having the shortest code length among the four vectors VX-VXA, VX-VXB, VX-VXC, and VX is transmitted together with the coded adoption block code.
Note that, if the current block X is located in the outermost row or column of the picture, as can be seen from FIG. 21, the blocks XA and XB will not exist (when the current block is located in the uppermost row), or the block XC will not exist (when the current block is in the leftmost column). When the current block X is disposed in the first row and first column of the picture, the comparative neighboring blocks do not exist at all. In such cases, the motion vector VX of the current block X is included in the compressed picture signal as it is.
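Steps (1) and (2), together with the border cases just noted, can be summarized in the sketch below. The labels returned in place of the 3-bit adoption block codes are placeholders for the codes actually listed in FIG. 22, and the shortest-code test is replaced by the smallest-magnitude test that the description itself offers as the simple alternative.

```python
def code_motion_vector(vx, neighbours, tolerance=0):
    """Decide how to code the motion vector VX of the current block X.

    vx         -- (x, y) motion vector of the current block X
    neighbours -- dict with keys among 'XA', 'XB', 'XC'; keys are simply
                  omitted for blocks that do not exist at the picture border
    Returns (label, vector_to_transmit): the name of a matching neighbour
    with no vector, or one of 'dXA', 'dXB', 'dXC', 'VX' together with the
    differential vector (or VX itself) to be variable-length coded."""
    def difference(a, b):
        return (a[0] - b[0], a[1] - b[1])

    def magnitude(v):
        return abs(v[0]) + abs(v[1])   # stand-in for the shortest-code test

    # Step (1): a neighbouring vector equal (or close enough) to VX
    for name, v in neighbours.items():
        if magnitude(difference(vx, v)) <= tolerance:
            return name, None          # only the adoption block code is sent

    # Step (2): choose the cheapest of VX-VXA, VX-VXB, VX-VXC and VX itself
    candidates = {'d' + name: difference(vx, v)
                  for name, v in neighbours.items()}
    candidates['VX'] = vx              # with no neighbours, VX is sent as is
    label = min(candidates, key=lambda k: magnitude(candidates[k]))
    return label, candidates[label]
```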
In the motion vector arithmetic unit, the motion vector is computed from the current block of the motion picture signal. The motion vector memory 52 is capable of storing, as shown in FIG. 23, twice as many motion vectors as the number of blocks in each row of the picture, i.e., the motion vectors for two rows. The motion vector memory 52 holds the vectors of the row that is now being processed, i.e., the current row, and the row before it. The adoption block code and the vector value (or differential value) are coded for each block, and the coded result is transferred from the vector encoder 53 to the multiplexer 25.
Further, the variable-length coder (VLC) 24 described above variable-length codes the quantized transform coefficients, which are then fed into the above-mentioned multiplexer 25. The vector values are variable-length coded by the vector encoder 53 using the VLC codes shown in FIG. 24, and the resulting codes are multiplexed therein and transmitted via the output buffer 26 to a transmission path or a medium such as a storage device or the like (not shown).
FIG. 25 is a block diagram demonstrating one embodiment of an apparatus for expanding a compressed motion picture signal compressed by the compressor shown in FIG. 20. In FIG. 25, parts corresponding to those of FIG. 18 are designated by the same reference numerals or characters. The compressed motion picture signal is supplied to the demultiplexer 42 via the input buffer 41. The demultiplexer separates from the compressed motion picture signal the coded transform coefficients and the differential vector information (i.e., the adoption block code and the differential vector code).
The differential vector information separated by the demultiplexer 42 is supplied to the vector decoder 61, to which the motion vector memory 62 is connected, to decode the motion vector VX of the current block according to the adoption block code. The output of the vector decoder is supplied to the motion compensator 48, to which the output of the frame memory 49 is connected. The motion compensator performs motion compensation using the motion vector, and stores the result in the frame memory 49 as a prediction block. The prediction block is supplied to the adder 46, where it is added to the block of reconstructed prediction errors from the inverse discrete cosine transform circuit 45, to provide a block of the reconstructed picture. The output of the adder 46 is supplied to the frame memory 49, where it is stored, and is read out as a block of the reconstructed picture signal.
In the above-mentioned compressor, the three neighboring blocks, such as XA, XB, and XC, are compared with the current block X. The current block may alternatively be compared with two neighboring blocks, such as XA and XB, as shown in FIG. 26, or with four neighboring blocks, such as XA, XB, XC, and XD, as shown in FIG. 27.
In the embodiment illustrated in FIG. 22, the adoption block code need not be a fixed-length code of three bits, but may also be a variable-length code. Although the magnitude of the vectors is variable-length coded in the manner shown in FIG. 24, the present invention is not limited to this.
While the invention has been described in connection with the preferred embodiments thereof, it will be obvious to those skilled in the art that various changes and modifications may be made therein without departing from the invention, and it is aimed, therefore, to cover in the appended claims all such changes and modifications as fall within the true spirit and scope of the invention.

Claims (19)

What is claimed is:
1. Apparatus for compressing a motion picture signal, the motion picture signal being divided into blocks including a current block, the apparatus comprising:
motion detecting means for segmenting the current block into subblocks numbering at least four, and for calculating, from a prediction picture and each of the subblocks constituting the current block, a motion vector for each of the subblocks constituting the current block;
subtracting means for subtracting the current block from a prediction block of the prediction picture to provide a prediction error block;
means for orthogonally transforming the prediction error block to provide transform coefficients;
means for quantizing the transform coefficients to provide quantized transform coefficients;
means for coding the quantized transform coefficients to provide coded quantized transform coefficients;
local decoding means for locally decoding the quantized transform coefficients to provide a block of an additional prediction picture;
representative motion vector generating means, operating when a difference between motion vectors of at least two of the subblocks constituting the current block is below a predetermined threshold, for generating, from the motion vectors of the subblocks constituting the current block, at least one or more representative motion vectors, a single one of said at least one or more representative motion vectors representing said motion vectors below the predetermined threshold, said at least one or more representative motion vectors collectively representing the motion vectors of all the subblocks constituting the current block, the representative motion vector generating means generating fewer representative motion vectors than a number of subblocks constituting the current block; and
motion compensating means for producing the prediction block from the prediction picture, the prediction block being constituted of a prediction subblock corresponding to each of the subblocks constituting the current block, the motion compensating means producing each prediction subblock by applying motion compensation to the prediction picture in response to a motion vector derived for the prediction subblock from the at least one or more representative motion vectors.
2. The apparatus according to claim 1, wherein the representative motion vector generating means includes:
means for calculating a difference between each of plural pairs of the motion vectors of the subblocks constituting the block to provide a difference vector for each of the plural pairs of motion vectors; and
means for calculating the at least one or more representative motion vectors for the current block using the difference vectors.
3. The apparatus according to claim 2, further comprising multiplexing means for multiplexing the coded quantized transform coefficients and the representative motion vectors.
4. The apparatus according to claim 2, wherein the means for calculating the at least one or more representative motion vectors for the current block using the difference vectors includes:
means for comparing the difference vectors with a predetermined threshold to provide a comparison result;
means for selecting, as a selected pattern for the current block, in response to the comparison result, one of plural predetermined patterns, each of the plural predetermined patterns indicating an arrangement within the current block of the subblocks whose motion vectors are all collectively represented by the at least one or more representative motion vectors; and
means for calculating the at least one or more representative motion vectors for the current block from the motion vectors of the subblocks indicated by the selected pattern for the current block.
5. The apparatus according to claim 4, wherein the representative vector generating means additionally comprises pattern indicating means for generating a selected pattern signal indicating the selected pattern.
6. The apparatus according to claim 1, 2, 4, 5, or 3, wherein the local decoding means includes:
inverse quantizing means for inversely quantizing the quantized transform coefficients to provide transform coefficients; and
inverse orthogonal transform means for inversely orthogonally transforming the transform coefficients.
7. Apparatus for compressing a motion picture signal, the motion picture signal being divided into blocks including a current block, the apparatus comprising:
motion detecting means for segmenting the current block into subblocks numbering at least four, and for calculating, from a prediction picture and each of the subblocks constituting the current block, a motion vector for each of the subblocks constituting the current block;
means for subtracting the current block from a prediction block of the prediction picture to provide a prediction error block;
means for orthogonally transforming the prediction error block to provide transform coefficients;
means for quantizing the transform coefficients to provide quantized transform coefficients;
means for coding the quantized transform coefficients to provide coded quantized transform coefficients;
local decoding means for locally decoding the quantized transform coefficients to provide a block of an additional prediction picture;
representative motion vector generating means, operating when a difference between motion vectors of at least two of the subblocks constituting the current block is below a predetermined threshold, for generating, for the current block from the motion vectors of the subblocks constituting the current block, at least one or more representative motion vectors, a single one of said at least one or more representative motion vectors representing said motion vectors below the predetermined threshold, said at least one or more representative motion vectors collectively representing the motion vectors of all the subblocks constituting the current block, the representative motion vector generating means generating fewer representative motion vectors than a number of subblocks constituting the block, the representative motion vector generating means including:
means for selecting, as a selected pattern for the current block, one of plural predetermined patterns, each of the plural predetermined patterns indicating an arrangement within the current block of the subblocks whose motion vectors are all collectively represented by the at least one or more representative motion vectors, and
means for adopting ones of the motion vectors of the subblocks indicated by the selected pattern as the at least one or more representative motion vectors for the current block; and
motion compensating means for producing the prediction block from the prediction picture, the prediction block being constituted of a prediction subblock corresponding to each of the subblocks constituting the current block, the motion compensating means producing each prediction subblock by applying motion compensation to the prediction picture in response to a motion vector derived for the prediction subblock from the at least one or more representative motion vectors.
8. The apparatus according to claim 7, wherein:
the representative motion vector generating means additionally includes means for generating a selected pattern signal indicating the selected pattern for the current block; and
the apparatus additionally comprises means for multiplexing the coded quantized transform coefficients, the at least one or more representative motion vectors, and the selected pattern signal.
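Claim 8 adds multiplexing of the coded coefficients, the representative motion vectors, and the selected pattern signal into one stream. The toy serialization below is only a sketch; the field widths, ordering, and one-byte pattern identifier are assumptions, since the claim specifies no particular syntax.

import struct

def multiplex_block(coded_coeffs: bytes, rep_vectors, pattern_id: int) -> bytes:
    # Header: a one-byte pattern identifier and a one-byte count of representative vectors.
    payload = struct.pack("BB", pattern_id, len(rep_vectors))
    for vx, vy in rep_vectors:
        # Each representative vector as two signed bytes (small integer components assumed).
        payload += struct.pack("bb", int(vx), int(vy))
    # The coded quantized transform coefficients follow the vector data.
    return payload + coded_coeffs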
9. Apparatus for compressing a motion picture signal, the motion picture signal being divided into blocks including a current block, the apparatus comprising:
motion detecting means for segmenting the current block into subblocks numbering at least four, and for calculating, from a prediction picture and each of the subblocks constituting the current block, a motion vector for each of the subblocks constituting the current block;
means for subtracting the current block from a prediction block of the prediction picture to provide a prediction error block;
means for orthogonally transforming the prediction error block to provide transform coefficients;
means for quantizing the transform coefficients to provide quantized transform coefficients;
means for coding the quantized transform coefficients to provide coded quantized transform coefficients;
local decoding means for locally decoding the quantized transform coefficients to provide a block of an additional prediction picture;
representative motion vector generating means, operating when a difference between motion vectors of at least two of the subblocks constituting the current block is below a predetermined threshold, for generating, for the current block from the motion vectors of the subblocks constituting the current block, at least one or more representative motion vectors, a single one of said at least one or more representative motion vectors representing said motion vectors below the predetermined threshold, said at least one or more representative motion vectors collectively representing the motion vectors of all the subblocks constituting the current block, the subblocks whose motion vectors are represented by each of the at least one or more representative motion vectors collectively forming a pattern within the current block;
pattern indicating means for generating a selected pattern signal indicating the pattern of the subblocks whose motion vectors are collectively represented by the at least one or more representative motion vectors; and
motion compensating means for producing the prediction block from the prediction picture, the prediction block being constituted of a prediction subblock corresponding to each of the subblocks constituting the current block, the motion compensating means producing each prediction subblock by applying motion compensation to the prediction picture in response to a motion vector derived for the prediction subblock from the at least one or more representative motion vectors.
10. The apparatus according to claim 9, additionally comprising multiplexing means for multiplexing the coded quantized transform coefficients, the at least one or more representative motion vectors, and the selected pattern signal.
11. The apparatus according to claim 9, wherein the representative motion vector generating means additionally includes:
means for calculating a difference between each of plural pairs of the motion vectors of the subblocks constituting the block to provide a difference vector for each of the plural pairs of motion vectors; and
representative motion vector calculating means for calculating the at least one or more representative motion vectors for the current block using the difference vectors.
12. The apparatus according to claim 11, wherein:
the representative motion vector calculating means includes means for comparing the difference vectors with a predetermined threshold to provide a comparison result;
the pattern indicating means includes selecting means, operating in response to the comparison result, for selecting one of plural predetermined patterns, each of the predetermined patterns indicating an arrangement within the current block of the subblocks whose motion vectors are all collectively represented by the at least one or more representative motion vectors, as the selected pattern for the current block, and for providing the selected pattern signal; and
the representative motion vector calculating means additionally includes means for calculating the at least one or more representative motion vectors for the current block from the motion vectors of the subblocks indicated by the selected pattern for the current block.
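For claims 11 and 12, which derive the selected pattern from difference vectors compared against a threshold, a small sketch follows. The pairs examined and the mapping from comparison flags to patterns reuse the illustrative 2x2 layout above and are assumptions rather than the patent's own rules.

import numpy as np

# Pairs of subblocks whose difference vectors are examined (2x2 indexing as above).
PAIRS = [(0, 1), (2, 3), (0, 2), (1, 3)]

def comparison_result(mvs, threshold):
    # One flag per pair: True when the magnitude of the difference vector is below the threshold.
    return tuple(
        np.linalg.norm(np.asarray(mvs[a]) - np.asarray(mvs[b])) < threshold
        for a, b in PAIRS
    )

def pattern_from_flags(flags):
    # Map the comparison result to one of the predetermined patterns (illustrative mapping).
    top, bottom, left, right = flags
    if top and bottom and left and right:
        return "all"
    if top and bottom:
        return "top_bottom"
    if left and right:
        return "left_right"
    return "separate"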
13. An apparatus for expanding a compressed motion picture signal including a compressed picture block obtained by compressing a block of a motion picture signal, the compressed picture block including coded transform coefficients and coded vector data representing the block of the motion picture signal, the coded vector data including at least one representative motion vector collectively representing motion vectors of all of a number of subblocks obtained by dividing the block of the motion picture signal by at least four, the apparatus providing an output picture signal, and comprising:
demultiplexing means for separating the coded transform coefficients and the coded vector data from the compressed picture block;
vector decoding means for detecting and for decoding the plural representative motion vectors in the coded vector data, the vector decoding means decoding fewer representative motion vectors than the number of subblocks;
calculating means for calculating the motion vectors of all of the subblocks from the representative motion vectors, the calculating means calculating, from a single representative motion vector, the motion vectors for at least two of the subblocks whose difference between the motion vectors is below a predetermined threshold; and
deriving means for deriving a block of the output picture signal from the coded transform coefficients and the motion vectors.
14. The apparatus according to claim 13, wherein:
the apparatus additionally comprises decoding means for deriving a prediction error block from the coded transform coefficients; and
the deriving means includes:
motion compensation means for applying motion compensation to a prediction picture to provide a prediction block, the prediction block being constituted of a prediction subblock corresponding to each of the subblocks constituting the current block, the motion compensating means producing each prediction subblock by applying motion compensation to the prediction picture in response to a respective one of the motion vectors calculated by the calculating means, and
means for producing the block of the output picture signal by summing the prediction block and the prediction error block.
15. The decoding apparatus according to claim 14, wherein the decoding means includes:
inverse variable length coding means for applying inverse variable length coding to the coded transform coefficients to provide quantized transform coefficients;
inverse quantizing means for inverse quantizing the quantized transform coefficients to provide transform coefficients; and
inverse orthogonal transform means for inversely orthogonally transforming the transform coefficients to provide the prediction error block.
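Claims 13 to 15 describe the expanding side: the coded coefficients are decoded back into a prediction error block and summed with a motion-compensated prediction block built subblock by subblock. The sketch below assumes a 16x16 block of four 8x8 subblocks, one already-expanded motion vector per subblock, and displaced regions that remain inside the prediction picture; these sizes are illustrative only.

import numpy as np

def reconstruct_block(pred_picture, error_block, subblock_mvs, block_pos, sb=8):
    # subblock_mvs: one (dy, dx) vector per subblock, ordered
    # top-left, top-right, bottom-left, bottom-right.
    by, bx = block_pos
    prediction = np.zeros_like(error_block, dtype=np.float64)
    for i, (dy, dx) in enumerate(subblock_mvs):
        ry, rx = divmod(i, 2)                       # subblock row/column inside the block
        sy, sx = by + ry * sb + dy, bx + rx * sb + dx
        # Fetch the displaced subblock from the prediction picture (motion compensation).
        prediction[ry*sb:(ry+1)*sb, rx*sb:(rx+1)*sb] = pred_picture[sy:sy+sb, sx:sx+sb]
    # Sum the prediction block and the decoded prediction error block.
    return prediction + error_block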
16. An apparatus for expanding a compressed motion picture signal including a compressed picture block obtained by compressing a block of a motion picture signal, the compressed picture block including coded transform coefficients and coded vector data representing the block of the motion picture signal, the coded vector data including at least one representative motion vector for the block, the at least one representative motion vector representing motion vectors of all of a number of subblocks obtained by dividing the block of the motion picture signal by at least four, the coded vector data additionally including coded pattern information for the block, the coded pattern information indicating an arrangement within the block of the subblocks whose motion vectors are all collectively represented by the at least one representative motion vector, the apparatus comprising:
demultiplexing means for separating the coded transform coefficients and the coded vector data from the compressed picture block;
vector decoding means for decoding the coded vector data to provide the pattern information and the at least one representative motion vector, the vector decoding means decoding fewer representative motion vectors than the number of subblocks, and for calculating the motion vectors of each of the subblocks constituting the block from the representative motion vectors in response to the pattern information such that the motion vectors for at least two of the subblocks whose difference between the motion vectors is below a predetermined threshold are calculated from a single representative motion vector; and
deriving means for deriving a block of the output picture signal from the coded transform coefficients and the motion vectors.
17. The apparatus according to claim 16, additionally including means for applying inverse variable length decoding to the coded vector data.
18. The apparatus according to claim 16, wherein:
the apparatus additionally comprises decoding means for deriving a prediction error block from the coded transform coefficients; and
the deriving means includes:
motion compensation means for applying motion compensation to a prediction picture to provide a prediction block, the prediction block being constituted of a prediction subblock corresponding to each of the subblocks constituting the current block, the motion compensating means producing each prediction subblock by applying motion compensation to the prediction picture in response to a respective one of the motion vectors calculated by the calculating means, and
means for producing the block of the output picture signal by summing the prediction block and the prediction error block.
19. The apparatus according to claim 18, wherein the decoding means includes:
inverse variable length coding means for applying inverse variable length coding to the coded transform coefficients to provide quantized transform coefficients;
inverse quantizing means for inverse quantizing the quantized transform coefficients to provide transform coefficients; and
inverse orthogonal transform means for inversely orthogonally transforming the transform coefficients to provide the prediction error block.
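Claims 16 to 19 add the use of the decoded pattern information to expand the representative motion vectors back into one vector for every subblock. A minimal sketch, assuming the same illustrative 2x2 pattern set used in the encoder sketches above:

# Same illustrative 2x2 pattern set as in the encoder sketches above.
PATTERNS = {
    "all":        [[0, 1, 2, 3]],
    "top_bottom": [[0, 1], [2, 3]],
    "left_right": [[0, 2], [1, 3]],
    "separate":   [[0], [1], [2], [3]],
}

def expand_vectors(rep_vectors, pattern):
    # Assign each group's single representative vector to all subblocks in that group, so
    # subblocks whose vectors differed by less than the threshold share one decoded vector.
    mvs = [None] * 4
    for rep, group in zip(rep_vectors, PATTERNS[pattern]):
        for idx in group:
            mvs[idx] = rep
    return mvs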
US08/289,999 1991-09-20 1994-08-12 Picture signal encoding and/or decoding apparatus Expired - Lifetime US5512952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/289,999 US5512952A (en) 1991-09-20 1994-08-12 Picture signal encoding and/or decoding apparatus

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
JP3-270393 1991-09-20
JP27039391 1991-09-20
JP27028691 1991-09-20
JP3-270286 1991-09-20
JP3-277312 1991-09-27
JP27731291 1991-09-27
US94292792A 1992-09-10 1992-09-10
US08/289,999 US5512952A (en) 1991-09-20 1994-08-12 Picture signal encoding and/or decoding apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US94292792A Continuation 1991-09-20 1992-09-10

Publications (1)

Publication Number Publication Date
US5512952A true US5512952A (en) 1996-04-30

Family

ID=27335791

Family Applications (3)

Application Number Title Priority Date Filing Date
US08/286,562 Expired - Lifetime US5510841A (en) 1991-09-20 1994-08-05 Encoding apparatus in which flat subblocks are identified in blocks of a motion signal prior to coding the blocks and complementary decoding apparatus
US08/289,999 Expired - Lifetime US5512952A (en) 1991-09-20 1994-08-12 Picture signal encoding and/or decoding apparatus
US08/456,963 Expired - Lifetime US6064435A (en) 1991-09-20 1995-06-01 Motion picture signal compressing apparatus which encodes the motion vector of the block of a motion picture signal picture as a motion vector difference

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US08/286,562 Expired - Lifetime US5510841A (en) 1991-09-20 1994-08-05 Encoding apparatus in which flat subblocks are identified in blocks of a motion signal prior to coding the blocks and complementary decoding apparatus

Family Applications After (1)

Application Number Title Priority Date Filing Date
US08/456,963 Expired - Lifetime US6064435A (en) 1991-09-20 1995-06-01 Motion picture signal compressing apparatus which encodes the motion vector of the block of a motion picture signal picture as a motion vector difference

Country Status (2)

Country Link
US (3) US5510841A (en)
EP (1) EP0533195A2 (en)

Cited By (69)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5774206A (en) * 1995-05-10 1998-06-30 Cagent Technologies, Inc. Process for controlling an MPEG decoder
US5812791A (en) * 1995-05-10 1998-09-22 Cagent Technologies, Inc. Multiple sequence MPEG decoder
US5825928A (en) * 1994-11-30 1998-10-20 Samsung Electronics Co., Ltd. Decoding apparatus for error concealment considering motion directions of moving images
US5828789A (en) * 1996-02-19 1998-10-27 Fuji Xerox Co., Ltd. Image coding system and image decoding system
US5903673A (en) * 1997-03-14 1999-05-11 Microsoft Corporation Digital video signal encoder and encoding method
US6067322A (en) * 1997-06-04 2000-05-23 Microsoft Corporation Half pixel motion estimation in motion video signal encoding
US6115420A (en) * 1997-03-14 2000-09-05 Microsoft Corporation Digital video signal encoder and encoding method
US6118817A (en) * 1997-03-14 2000-09-12 Microsoft Corporation Digital video signal encoder and encoding method having adjustable quantization
US6271885B2 (en) * 1998-06-24 2001-08-07 Victor Company Of Japan, Ltd. Apparatus and method of motion-compensated predictive coding
US6349152B1 (en) 1996-03-28 2002-02-19 Microsoft Corporation Table-based compression with embedded coding
US6404923B1 (en) 1996-03-29 2002-06-11 Microsoft Corporation Table-based low-level image classification and compression system
US20020158989A1 (en) * 2001-04-25 2002-10-31 Nec Corporation Image encoding device, image encoding method to be used therein and storage medium storing program
US6571016B1 (en) 1997-05-05 2003-05-27 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US6584226B1 (en) 1997-03-14 2003-06-24 Microsoft Corporation Method and apparatus for implementing motion estimation in video compression
US6639945B2 (en) 1997-03-14 2003-10-28 Microsoft Corporation Method and apparatus for implementing motion detection in video compression
US6750900B1 (en) 1998-07-23 2004-06-15 Eastman Kodak Company Method and apparatus for computing motion tracking parameters
US20040184538A1 (en) * 2002-04-15 2004-09-23 Kiyofumi Abe Image encoding method and image decoding method
US20040228410A1 (en) * 2003-05-12 2004-11-18 Eric Ameres Video compression method
US8611415B1 (en) 2010-11-15 2013-12-17 Google Inc. System and method for coding using improved motion estimation
US8693547B2 (en) 2011-04-06 2014-04-08 Google Inc. Apparatus and method for coding using motion vector segmentation
US8705620B1 (en) 2011-04-28 2014-04-22 Google Inc. Method and apparatus for encoding anchor frame by encoding features using layers
US8780996B2 (en) 2011-04-07 2014-07-15 Google, Inc. System and method for encoding and decoding video data
US8780971B1 (en) 2011-04-07 2014-07-15 Google, Inc. System and method of encoding using selectable loop filters
US8781004B1 (en) 2011-04-07 2014-07-15 Google Inc. System and method for encoding video using variable loop filter
US8804819B1 (en) 2011-04-19 2014-08-12 Google Inc. Method and apparatus for encoding video using data frequency
US8885706B2 (en) 2011-09-16 2014-11-11 Google Inc. Apparatus and methodology for a video codec system with noise reduction capability
US8891626B1 (en) 2011-04-05 2014-11-18 Google Inc. Center of motion for encoding motion fields
US8897591B2 (en) 2008-09-11 2014-11-25 Google Inc. Method and apparatus for video coding using adaptive loop filter
US8908767B1 (en) 2012-02-09 2014-12-09 Google Inc. Temporal motion vector prediction
US8989256B2 (en) 2011-05-25 2015-03-24 Google Inc. Method and apparatus for using segmentation-based coding of prediction information
US9014265B1 (en) 2011-12-29 2015-04-21 Google Inc. Video coding using edge detection and block partitioning for intra prediction
US9014266B1 (en) 2012-06-05 2015-04-21 Google Inc. Decimated sliding windows for multi-reference prediction in video coding
US9094689B2 (en) 2011-07-01 2015-07-28 Google Technology Holdings LLC Motion vector prediction design simplification
US9094681B1 (en) 2012-02-28 2015-07-28 Google Inc. Adaptive segmentation
US9131073B1 (en) 2012-03-02 2015-09-08 Google Inc. Motion estimation aided noise reduction
US9154799B2 (en) 2011-04-07 2015-10-06 Google Inc. Encoding and decoding motion via image segmentation
US9153017B1 (en) 2014-08-15 2015-10-06 Google Inc. System and method for optimized chroma subsampling
US9172970B1 (en) 2012-05-29 2015-10-27 Google Inc. Inter frame candidate selection for a video encoder
US9185428B2 (en) 2011-11-04 2015-11-10 Google Technology Holdings LLC Motion vector scaling for non-uniform motion vector grid
US9210424B1 (en) 2013-02-28 2015-12-08 Google Inc. Adaptive prediction block size in video coding
US9210432B2 (en) 2012-10-08 2015-12-08 Google Inc. Lossless inter-frame video coding
US9225979B1 (en) 2013-01-30 2015-12-29 Google Inc. Remote access encoding
US9247257B1 (en) 2011-11-30 2016-01-26 Google Inc. Segmentation based entropy encoding and decoding
US9262670B2 (en) 2012-02-10 2016-02-16 Google Inc. Adaptive region of interest
US9288484B1 (en) 2012-08-30 2016-03-15 Google Inc. Sparse coding dictionary priming
US9286653B2 (en) 2014-08-06 2016-03-15 Google Inc. System and method for increasing the bit depth of images
US9300906B2 (en) 2013-03-29 2016-03-29 Google Inc. Pull frame interpolation
US9313493B1 (en) 2013-06-27 2016-04-12 Google Inc. Advanced motion estimation
US9344729B1 (en) 2012-07-11 2016-05-17 Google Inc. Selective prediction signal filtering
US9369732B2 (en) 2012-10-08 2016-06-14 Google Inc. Lossless intra-prediction video coding
US9374596B2 (en) 2008-09-11 2016-06-21 Google Inc. System and method for video encoding using constructed reference frame
US9380298B1 (en) 2012-08-10 2016-06-28 Google Inc. Object-based intra-prediction
US9392280B1 (en) 2011-04-07 2016-07-12 Google Inc. Apparatus and method for using an alternate reference frame to decode a video frame
US9392272B1 (en) 2014-06-02 2016-07-12 Google Inc. Video coding using adaptive source variance based partitioning
US9407915B2 (en) 2012-10-08 2016-08-02 Google Inc. Lossless video coding with sub-frame level optimal quantization values
US9426459B2 (en) 2012-04-23 2016-08-23 Google Inc. Managing multi-reference picture buffers and identifiers to facilitate video data coding
US9485515B2 (en) 2013-08-23 2016-11-01 Google Inc. Video coding using reference motion vectors
US9503746B2 (en) 2012-10-08 2016-11-22 Google Inc. Determine reference motion vectors
US9532059B2 (en) 2010-10-05 2016-12-27 Google Technology Holdings LLC Method and apparatus for spatial scalability for video coding
US9578324B1 (en) 2014-06-27 2017-02-21 Google Inc. Video coding using statistical-based spatially differentiated partitioning
US9609341B1 (en) 2012-04-23 2017-03-28 Google Inc. Video data encoding and decoding using reference picture lists
DE60015566C5 (en) * 1999-08-11 2017-08-10 Nokia Technologies Oy METHOD AND DEVICE FOR COMPRESSING A MOTION VECTOR FIELD
US9749638B1 (en) 2011-04-28 2017-08-29 Google Inc. Method and apparatus for encoding video with dynamic quality improvement
US9756346B2 (en) 2012-10-08 2017-09-05 Google Inc. Edge-selective intra coding
US9756331B1 (en) 2013-06-17 2017-09-05 Google Inc. Advance coded reference prediction
US9807416B2 (en) 2015-09-21 2017-10-31 Google Inc. Low-latency two-pass video coding
US9924161B2 (en) 2008-09-11 2018-03-20 Google Llc System and method for video coding using adaptive segmentation
US10102613B2 (en) 2014-09-25 2018-10-16 Google Llc Frequency-domain denoising
US11317101B2 (en) 2012-06-12 2022-04-26 Google Inc. Inter frame candidate selection for a video encoder

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR0181032B1 (en) * 1995-03-20 1999-05-01 배순훈 Object-based encoding and apparatus using an interleaving technique
KR960036647A (en) * 1995-03-20 1996-10-28 배순훈 Bit Plan Compression Transmission Device Using Scanning
US5774597A (en) * 1995-09-05 1998-06-30 Ge Medical Systems, Inc. Image compression and decompression using overlapped cosine transforms
KR0181069B1 (en) * 1995-11-08 1999-05-01 배순훈 Motion estimation apparatus
US6160845A (en) 1996-12-26 2000-12-12 Sony Corporation Picture encoding device, picture encoding method, picture decoding device, picture decoding method, and recording medium
WO1998030028A1 (en) * 1996-12-26 1998-07-09 Sony Corporation Picture coding device, picture coding method, picture decoding device, picture decoding method, and recording medium
US7194138B1 (en) * 1998-11-04 2007-03-20 International Business Machines Corporation Reduced-error processing of transformed digital data
GB2352905B (en) * 1999-07-30 2003-10-29 Sony Uk Ltd Data compression
US7646814B2 (en) * 2003-12-18 2010-01-12 Lsi Corporation Low complexity transcoding between videostreams using different entropy coding
JP4501676B2 (en) * 2004-12-22 2010-07-14 日本電気株式会社 Video compression encoding method, video compression encoding apparatus, and program
JP4501675B2 (en) * 2004-12-22 2010-07-14 日本電気株式会社 Video compression encoding method, video compression encoding apparatus, and program
JP5422168B2 (en) 2008-09-29 2014-02-19 株式会社日立製作所 Video encoding method and video decoding method
US9832460B2 (en) * 2011-03-09 2017-11-28 Canon Kabushiki Kaisha Image coding apparatus, method for coding image, program therefor, image decoding apparatus, method for decoding image, and program therefor

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4689673A (en) * 1985-06-10 1987-08-25 Nec Corporation Movement compensation predictive encoder for a moving picture signal with a reduced amount of information
US4777530A (en) * 1984-11-07 1988-10-11 Sony Corporation Apparatus for detecting a motion of a picture of a television signal
US4796087A (en) * 1986-05-29 1989-01-03 Jacques Guichard Process for coding by transformation for the transmission of picture signals
US4862259A (en) * 1987-06-09 1989-08-29 Sony Corp. Motion vector reduction in television images
US4901149A (en) * 1987-08-10 1990-02-13 U.S. Philips Corporation Method and apparatus for providing an enhanced television signal
US5012336A (en) * 1989-04-27 1991-04-30 Sony Corporation Motion dependent video signal processing
US5019901A (en) * 1989-03-31 1991-05-28 Matsushita Electric Industrial Co., Ltd. Image motion vector sensor for sensing image displacement amount
US5021879A (en) * 1987-05-06 1991-06-04 U.S. Philips Corporation System for transmitting video pictures
US5021881A (en) * 1989-04-27 1991-06-04 Sony Corporation Motion dependent video signal processing
US5113255A (en) * 1989-05-11 1992-05-12 Matsushita Electric Industrial Co., Ltd. Moving image signal encoding apparatus and decoding apparatus
US5126841A (en) * 1989-10-13 1992-06-30 Matsushita Electric Industrial Co., Ltd. Motion compensated prediction interframe coding system
US5128756A (en) * 1990-12-11 1992-07-07 At&T Bell Laboratories High definition television coding arrangement with graceful degradation
US5142360A (en) * 1990-03-06 1992-08-25 Victor Company Of Japan, Ltd. Motion vector detection circuit used in hierarchical processing of moving picture signal
US5142361A (en) * 1990-06-21 1992-08-25 Graphics Communication Technologies, Ltd. Motion vector detecting apparatus for video telephone/teleconference systems
US5157742A (en) * 1990-02-28 1992-10-20 Victor Company Of Japan, Ltd. Motion image data compression system
US5162907A (en) * 1990-09-28 1992-11-10 Sony Broadcast & Communications Limited Motion dependent video signal processing
US5196933A (en) * 1990-03-23 1993-03-23 Etat Francais, Ministere Des Ptt Encoding and transmission method with at least two levels of quality of digital pictures belonging to a sequence of pictures, and corresponding devices
US5210605A (en) * 1991-06-11 1993-05-11 Trustees Of Princeton University Method and apparatus for determining motion vectors for image sequences

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4816914A (en) * 1987-01-07 1989-03-28 Pictel Corporation Method and apparatus for efficiently encoding and decoding image sequences
JPS6417608A (en) * 1987-07-10 1989-01-20 Yoshitaka Nakagawa Book stand
JPH02200082A (en) * 1989-01-30 1990-08-08 Hitachi Ltd Image encoder
DE3929280A1 (en) * 1989-09-04 1991-03-07 Philips Patentverwaltung Video image block coder with comparator - uses specified line function magnitude, related to block and image spacing, system parameter, and number of bits

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4777530A (en) * 1984-11-07 1988-10-11 Sony Corporation Apparatus for detecting a motion of a picture of a television signal
US4689673A (en) * 1985-06-10 1987-08-25 Nec Corporation Movement compensation predictive encoder for a moving picture signal with a reduced amount of information
US4796087A (en) * 1986-05-29 1989-01-03 Jacques Guichard Process for coding by transformation for the transmission of picture signals
US5021879A (en) * 1987-05-06 1991-06-04 U.S. Philips Corporation System for transmitting video pictures
US4862259A (en) * 1987-06-09 1989-08-29 Sony Corp. Motion vector reduction in television images
US4901149A (en) * 1987-08-10 1990-02-13 U.S. Philips Corporation Method and apparatus for providing an enhanced television signal
US5019901A (en) * 1989-03-31 1991-05-28 Matsushita Electric Industrial Co., Ltd. Image motion vector sensor for sensing image displacement amount
US5021881A (en) * 1989-04-27 1991-06-04 Sony Corporation Motion dependent video signal processing
US5012336A (en) * 1989-04-27 1991-04-30 Sony Corporation Motion dependent video signal processing
US5113255A (en) * 1989-05-11 1992-05-12 Matsushita Electric Industrial Co., Ltd. Moving image signal encoding apparatus and decoding apparatus
US5126841A (en) * 1989-10-13 1992-06-30 Matsushita Electric Industrial Co., Ltd. Motion compensated prediction interframe coding system
US5157742A (en) * 1990-02-28 1992-10-20 Victor Company Of Japan, Ltd. Motion image data compression system
US5142360A (en) * 1990-03-06 1992-08-25 Victor Company Of Japan, Ltd. Motion vector detection circuit used in hierarchical processing of moving picture signal
US5196933A (en) * 1990-03-23 1993-03-23 Etat Francais, Ministere Des Ptt Encoding and transmission method with at least two levels of quality of digital pictures belonging to a sequence of pictures, and corresponding devices
US5142361A (en) * 1990-06-21 1992-08-25 Graphics Communication Technologies, Ltd. Motion vector detecting apparatus for video telephone/teleconference systems
US5162907A (en) * 1990-09-28 1992-11-10 Sony Broadcast & Communications Limited Motion dependent video signal processing
US5128756A (en) * 1990-12-11 1992-07-07 At&T Bell Laboratories High definition television coding arrangement with graceful degradation
US5210605A (en) * 1991-06-11 1993-05-11 Trustees Of Princeton University Method and apparatus for determining motion vectors for image sequences

Cited By (113)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5825928A (en) * 1994-11-30 1998-10-20 Samsung Electronics Co., Ltd. Decoding apparatus for error concealment considering motion directions of moving images
US5774206A (en) * 1995-05-10 1998-06-30 Cagent Technologies, Inc. Process for controlling an MPEG decoder
US5812791A (en) * 1995-05-10 1998-09-22 Cagent Technologies, Inc. Multiple sequence MPEG decoder
US5828789A (en) * 1996-02-19 1998-10-27 Fuji Xerox Co., Ltd. Image coding system and image decoding system
US20030185452A1 (en) * 1996-03-28 2003-10-02 Wang Albert S. Intra compression of pixel blocks using predicted mean
US6349152B1 (en) 1996-03-28 2002-02-19 Microsoft Corporation Table-based compression with embedded coding
US7162091B2 (en) 1996-03-28 2007-01-09 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US6404923B1 (en) 1996-03-29 2002-06-11 Microsoft Corporation Table-based low-level image classification and compression system
US6639945B2 (en) 1997-03-14 2003-10-28 Microsoft Corporation Method and apparatus for implementing motion detection in video compression
US6584226B1 (en) 1997-03-14 2003-06-24 Microsoft Corporation Method and apparatus for implementing motion estimation in video compression
US7154951B2 (en) 1997-03-14 2006-12-26 Microsoft Corporation Motion video signal encoder and encoding method
US6118817A (en) * 1997-03-14 2000-09-12 Microsoft Corporation Digital video signal encoder and encoding method having adjustable quantization
US6115420A (en) * 1997-03-14 2000-09-05 Microsoft Corporation Digital video signal encoder and encoding method
US7072396B2 (en) 1997-03-14 2006-07-04 Microsoft Corporation Motion video signal encoder and encoding method
US6317459B1 (en) 1997-03-14 2001-11-13 Microsoft Corporation Digital video signal encoder and encoding method
US5903673A (en) * 1997-03-14 1999-05-11 Microsoft Corporation Digital video signal encoder and encoding method
US7139313B2 (en) 1997-03-14 2006-11-21 Microsoft Corporation Digital video signal encoder and encoding method
US20050220188A1 (en) * 1997-03-14 2005-10-06 Microsoft Corporation Digital video signal encoder and encoding method
US6707852B1 (en) 1997-03-14 2004-03-16 Microsoft Corporation Digital video signal encoder and encoding method
US6937657B2 (en) 1997-03-14 2005-08-30 Microsoft Corporation Motion video signal encoder and encoding method
US20040184533A1 (en) * 1997-03-14 2004-09-23 Microsoft Corporation Motion video signal encoder and encoding method
US20040184534A1 (en) * 1997-03-14 2004-09-23 Microsoft Corporation Motion video signal encoder and encoding method
US20040184535A1 (en) * 1997-03-14 2004-09-23 Microsoft Corporation Motion video signal encoder and encoding method
US6571016B1 (en) 1997-05-05 2003-05-27 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US20050259877A1 (en) * 1997-05-05 2005-11-24 Wang Albert S Intra compression of pixel blocks using predicted mean
US7181072B2 (en) 1997-05-05 2007-02-20 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US6067322A (en) * 1997-06-04 2000-05-23 Microsoft Corporation Half pixel motion estimation in motion video signal encoding
US6473461B1 (en) 1997-06-04 2002-10-29 Microsoft Corporation Half-pixel motion estimation in motion video signal encoding
US6271885B2 (en) * 1998-06-24 2001-08-07 Victor Company Of Japan, Ltd. Apparatus and method of motion-compensated predictive coding
US6750900B1 (en) 1998-07-23 2004-06-15 Eastman Kodak Company Method and apparatus for computing motion tracking parameters
DE60015566C5 (en) * 1999-08-11 2017-08-10 Nokia Technologies Oy METHOD AND DEVICE FOR COMPRESSING A MOTION VECTOR FIELD
US20020158989A1 (en) * 2001-04-25 2002-10-31 Nec Corporation Image encoding device, image encoding method to be used therein and storage medium storing program
US8526748B2 (en) 2002-04-15 2013-09-03 Panasonic Corporation Picture coding method and picture decoding method
US7697770B2 (en) 2002-04-15 2010-04-13 Panasonic Corporation Picture coding method and picture decoding method
US20060233451A1 (en) * 2002-04-15 2006-10-19 Kiyofumi Abe Picture coding method and picture decoding method
US20060269154A1 (en) * 2002-04-15 2006-11-30 Kiyofumi Abe Picture coding method and picture decoding method
US20060233449A1 (en) * 2002-04-15 2006-10-19 Kiyofumi Abe Picture coding method and picture decoding method
US20060233450A1 (en) * 2002-04-15 2006-10-19 Kiyofumi Abe Picture coding method and picture decoding method
US7095896B2 (en) * 2002-04-15 2006-08-22 Matsushita Electric Industrial Co., Ltd. Image encoding method and image decoding method
US7184598B2 (en) 2002-04-15 2007-02-27 Matsushita Electric Industrial Co., Ltd. Picture coding method and picture decoding method
US7305134B2 (en) 2002-04-15 2007-12-04 Matsushita Electric Industrial Co., Ltd. Picture coding method and picture decoding method
US7308144B2 (en) 2002-04-15 2007-12-11 Matsushita Electric Industrial Co., Ltd. Picture coding method and picture decoding method
US7308149B2 (en) 2002-04-15 2007-12-11 Matsushita Electric Industrial Co., Ltd. Picture coding method and picture decoding method
US7308143B2 (en) 2002-04-15 2007-12-11 Matsushita Electric Industrial Co., Ltd. Picture coding method and picture decoding method
US20080056362A1 (en) * 2002-04-15 2008-03-06 Kiyofumi Abe Picture coding method and picture decoding method
US20080063055A1 (en) * 2002-04-15 2008-03-13 Kiyofumi Abe Picture coding method and picture decoding method
US20080063056A1 (en) * 2002-04-15 2008-03-13 Kiyofumi Abe Picture coding method and picture decoding method
US20080117969A1 (en) * 2002-04-15 2008-05-22 Kiyofumi Abe Picture coding method and picture decoding method
US20080175315A1 (en) * 2002-04-15 2008-07-24 Kiyofumi Abe Picture coding method and picture decoding method
US20090034616A1 (en) * 2002-04-15 2009-02-05 Kiyofumi Abe Picture coding method and picture decoding method
US7693340B2 (en) 2002-04-15 2010-04-06 Panasonic Corporation Picture coding method and picture decoding method
US8867855B2 (en) 2002-04-15 2014-10-21 Panasonic Intellectual Property Corporation Of America Picture coding method and picture decoding method
US7769238B2 (en) 2002-04-15 2010-08-03 Panasonic Corporation Picture coding method and picture decoding method
US8139878B2 (en) 2002-04-15 2012-03-20 Panasonic Corporation Picture coding method and picture decoding method
US8265403B2 (en) 2002-04-15 2012-09-11 Panasonic Corporation Picture coding method and picture decoding method
US8290286B2 (en) 2002-04-15 2012-10-16 Panasonic Corporation Picture coding method and picture decoding method
US20060239575A1 (en) * 2002-04-15 2006-10-26 Kiyofumi Abe Picture coding method and picture decoding method
US20040184538A1 (en) * 2002-04-15 2004-09-23 Kiyofumi Abe Image encoding method and image decoding method
US10616576B2 (en) 2003-05-12 2020-04-07 Google Llc Error recovery using alternate reference frame
US20040228410A1 (en) * 2003-05-12 2004-11-18 Eric Ameres Video compression method
US8824553B2 (en) 2003-05-12 2014-09-02 Google Inc. Video compression method
US8942290B2 (en) 2003-05-12 2015-01-27 Google Inc. Dynamic coefficient reordering
US9924161B2 (en) 2008-09-11 2018-03-20 Google Llc System and method for video coding using adaptive segmentation
US9374596B2 (en) 2008-09-11 2016-06-21 Google Inc. System and method for video encoding using constructed reference frame
US8897591B2 (en) 2008-09-11 2014-11-25 Google Inc. Method and apparatus for video coding using adaptive loop filter
US9532059B2 (en) 2010-10-05 2016-12-27 Google Technology Holdings LLC Method and apparatus for spatial scalability for video coding
US8611415B1 (en) 2010-11-15 2013-12-17 Google Inc. System and method for coding using improved motion estimation
US8891626B1 (en) 2011-04-05 2014-11-18 Google Inc. Center of motion for encoding motion fields
US8693547B2 (en) 2011-04-06 2014-04-08 Google Inc. Apparatus and method for coding using motion vector segmentation
US9392280B1 (en) 2011-04-07 2016-07-12 Google Inc. Apparatus and method for using an alternate reference frame to decode a video frame
US8780996B2 (en) 2011-04-07 2014-07-15 Google, Inc. System and method for encoding and decoding video data
US8780971B1 (en) 2011-04-07 2014-07-15 Google, Inc. System and method of encoding using selectable loop filters
US9154799B2 (en) 2011-04-07 2015-10-06 Google Inc. Encoding and decoding motion via image segmentation
US8781004B1 (en) 2011-04-07 2014-07-15 Google Inc. System and method for encoding video using variable loop filter
US8804819B1 (en) 2011-04-19 2014-08-12 Google Inc. Method and apparatus for encoding video using data frequency
US8705620B1 (en) 2011-04-28 2014-04-22 Google Inc. Method and apparatus for encoding anchor frame by encoding features using layers
US9749638B1 (en) 2011-04-28 2017-08-29 Google Inc. Method and apparatus for encoding video with dynamic quality improvement
US8989256B2 (en) 2011-05-25 2015-03-24 Google Inc. Method and apparatus for using segmentation-based coding of prediction information
US9094689B2 (en) 2011-07-01 2015-07-28 Google Technology Holdings LLC Motion vector prediction design simplification
US8885706B2 (en) 2011-09-16 2014-11-11 Google Inc. Apparatus and methodology for a video codec system with noise reduction capability
US9185428B2 (en) 2011-11-04 2015-11-10 Google Technology Holdings LLC Motion vector scaling for non-uniform motion vector grid
US9247257B1 (en) 2011-11-30 2016-01-26 Google Inc. Segmentation based entropy encoding and decoding
US9014265B1 (en) 2011-12-29 2015-04-21 Google Inc. Video coding using edge detection and block partitioning for intra prediction
US8908767B1 (en) 2012-02-09 2014-12-09 Google Inc. Temporal motion vector prediction
US9262670B2 (en) 2012-02-10 2016-02-16 Google Inc. Adaptive region of interest
US9094681B1 (en) 2012-02-28 2015-07-28 Google Inc. Adaptive segmentation
US9131073B1 (en) 2012-03-02 2015-09-08 Google Inc. Motion estimation aided noise reduction
US9609341B1 (en) 2012-04-23 2017-03-28 Google Inc. Video data encoding and decoding using reference picture lists
US9426459B2 (en) 2012-04-23 2016-08-23 Google Inc. Managing multi-reference picture buffers and identifiers to facilitate video data coding
US9172970B1 (en) 2012-05-29 2015-10-27 Google Inc. Inter frame candidate selection for a video encoder
US9014266B1 (en) 2012-06-05 2015-04-21 Google Inc. Decimated sliding windows for multi-reference prediction in video coding
US11317101B2 (en) 2012-06-12 2022-04-26 Google Inc. Inter frame candidate selection for a video encoder
US9344729B1 (en) 2012-07-11 2016-05-17 Google Inc. Selective prediction signal filtering
US9380298B1 (en) 2012-08-10 2016-06-28 Google Inc. Object-based intra-prediction
US9288484B1 (en) 2012-08-30 2016-03-15 Google Inc. Sparse coding dictionary priming
US9756346B2 (en) 2012-10-08 2017-09-05 Google Inc. Edge-selective intra coding
US9369732B2 (en) 2012-10-08 2016-06-14 Google Inc. Lossless intra-prediction video coding
US9407915B2 (en) 2012-10-08 2016-08-02 Google Inc. Lossless video coding with sub-frame level optimal quantization values
US9210432B2 (en) 2012-10-08 2015-12-08 Google Inc. Lossless inter-frame video coding
US9503746B2 (en) 2012-10-08 2016-11-22 Google Inc. Determine reference motion vectors
US9225979B1 (en) 2013-01-30 2015-12-29 Google Inc. Remote access encoding
US9210424B1 (en) 2013-02-28 2015-12-08 Google Inc. Adaptive prediction block size in video coding
US9300906B2 (en) 2013-03-29 2016-03-29 Google Inc. Pull frame interpolation
US9756331B1 (en) 2013-06-17 2017-09-05 Google Inc. Advance coded reference prediction
US9313493B1 (en) 2013-06-27 2016-04-12 Google Inc. Advanced motion estimation
US9485515B2 (en) 2013-08-23 2016-11-01 Google Inc. Video coding using reference motion vectors
US10986361B2 (en) 2013-08-23 2021-04-20 Google Llc Video coding using reference motion vectors
US9392272B1 (en) 2014-06-02 2016-07-12 Google Inc. Video coding using adaptive source variance based partitioning
US9578324B1 (en) 2014-06-27 2017-02-21 Google Inc. Video coding using statistical-based spatially differentiated partitioning
US9286653B2 (en) 2014-08-06 2016-03-15 Google Inc. System and method for increasing the bit depth of images
US9153017B1 (en) 2014-08-15 2015-10-06 Google Inc. System and method for optimized chroma subsampling
US10102613B2 (en) 2014-09-25 2018-10-16 Google Llc Frequency-domain denoising
US9807416B2 (en) 2015-09-21 2017-10-31 Google Inc. Low-latency two-pass video coding

Also Published As

Publication number Publication date
EP0533195A3 (en) 1994-03-30
US5510841A (en) 1996-04-23
US6064435A (en) 2000-05-16
EP0533195A2 (en) 1993-03-24

Similar Documents

Publication Publication Date Title
US5512952A (en) Picture signal encoding and/or decoding apparatus
US4805017A (en) System for coding and transmitting motion image signals
US5272529A (en) Adaptive hierarchical subband vector quantization encoder
US5859932A (en) Vector quantization coding apparatus and decoding apparatus
US6025881A (en) Video decoder and decoding method which decodes a code string having a plurality of components which are arranged in a descending order of importance
JP3135061B2 (en) Image decoding method
US6771826B2 (en) Digital image encoding and decoding method and digital image encoding and decoding device using the same
EP0542261B1 (en) Method of performing high efficiency coding of image signal and system therefor
EP0565219A2 (en) Image compression and decompression
US5944851A (en) Error concealment method and apparatus
US5428395A (en) Encoding and decoding method and apparatus thereof using a variable picture partitioning technique
US6219383B1 (en) Method and apparatus for selectively detecting motion vectors of a wavelet transformed video signal
US5781239A (en) System and method for performing an optimized inverse discrete cosine transform with improved efficiency
EP0736843B1 (en) A motion video compression system with adaptive quantisation
US5689312A (en) Block matching motion estimation method
WO1995004432A1 (en) Coding image data
JPH0410788A (en) Method for controlling coding variable of image signal
JPH05227522A (en) Picture encoder and decoder
JP3171951B2 (en) Image encoding / decoding device and image encoding / decoding method
JPH08186817A (en) Moving image compressor and its method
AU696445B2 (en) Coding image data
CA2488796C (en) Digital image encoding and decoding method and digital image encoding and decoding device using the same
JPH0514873A (en) Image encoder
JPH04367183A (en) Picture encoder
JPH10191339A (en) Image signal compensating method and image signal decoder

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12