CA2154885C - Efficient transcoding device and method - Google Patents
Efficient transcoding device and method
- Publication number
- CA2154885C (application CA2154885A)
- Authority
- CA
- Canada
- Prior art keywords
- vector
- quantization error
- operably coupled
- quantized
- adder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B14/00—Transmission systems not characterised by the medium used for transmission
- H04B14/02—Transmission systems not characterised by the medium used for transmission characterised by the use of pulse modulation
- H04B14/06—Transmission systems not characterised by the medium used for transmission characterised by the use of pulse modulation using differential modulation, e.g. delta modulation
- H04B14/066—Transmission systems not characterised by the medium used for transmission characterised by the use of pulse modulation using differential modulation, e.g. delta modulation using differential modulation with several bits [NDPCM]
- H04B14/068—Transmission systems not characterised by the medium used for transmission characterised by the use of pulse modulation using differential modulation, e.g. delta modulation using differential modulation with several bits [NDPCM] with adaptive feedback
Abstract
The present invention provides a transcoder (400) for transcoding of digitally encoded bit streams produced by predictive encoders. The transcoder of this invention is less complex than previously known transcoders in terms of both computational and storage requirements. This efficiency is achieved by combining decoding and re-encoding steps into one step. This one-step transcoding is accomplished by modifying the originally quantized vectors and then requantizing and noiselessly encoding these vectors into a bit stream in accordance with a predetermined objective (1200).
Description
EFFICIENT TRANSCODING DEVICE AND METHOD
Field of the Invention
This invention relates generally to transcoding of digitally encoded signals, and more particularly to transcoding of signals that are digitally encoded by predictive coders.
Background
A predictive waveform encoder is a device for compressing the amount of information in a waveform (e.g., speech, image or video) by removing the statistical redundancy among its neighboring samples using prediction methods. Several ITU-T Recommendations for speech coding (ITU-T stands for the Telecommunication Standardization Sector of the International Telecommunication Union; ITU-T was formerly known as CCITT, or the International Telegraph and Telephone Consultative Committee) have adopted predictive coding techniques (for example, differential pulse-code modulation, or DPCM, is used in Recommendation G.721). In these predictive speech coders, an original speech sample is predicted based on past speech samples, and the prediction error (the difference between the original and the predicted samples), instead of the original sample, is quantized, and then digitally encoded by a noiseless coder to a bit stream. Since the energy of the prediction error is, on average, much smaller than that of the original speech signal, a high compression ratio can generally be obtained.
Predictive coding methods have also been used for image and video compression. In these applications, the spatial correlation among neighboring pixels in an image and, in the case of video, the temporal correlation between successive images can be exploited.
Typical predictive coders perform the prediction based on a replica of the reconstructed waveform. This ensures that the quantization error does not accumulate during reconstruction. Although the prediction accuracy is reduced (for coarse quantization), overall compression performance is generally improved.
State-of-the-art digital video coding systems utilize transform coding for spatial compression and a form of predictive coding known as motion-compensated prediction (MCP) for temporal compression. Video compression techniques that have recently been adopted in international standards (e.g., the MPEG standard developed by the International Standards Organization's Motion Picture Experts Group (ISO's MPEG) and ITU-T's H.261), or others that are under consideration for future standards, all employ a so-called block-matching MCP technique.
In this method, each image in a video sequence is partitioned into NxN blocks, called macro blocks (MBs), where N is a predetermined integer. For each MB, a replica of the previously decoded image is searched to find an NxN window that best resembles that MB, and the pixels in that window are used as a prediction for that MB. The prediction error is then encoded using a combination of transform coding and scalar quantization followed by variable-length noiseless encoding.
Transcoding will be required in many applications of compressed digital video. For example, in some instances, it may be desirable to change the rate of a digital video bit stream in the network. Alternatively, when constant bit-rate (CBR) video traffic is to be carried over a cell-relay or Asynchronous Transfer Mode (ATM) network, it may be desirable to convert the CBR stream into a variable bit-rate (VBR) stream to save bandwidth through statistical multiplexing. Transcoding may also be required for conversion between two video compression formats. For example, it may be necessary to convert an MPEG-encoded video bit stream into an H.261 bit stream, or vice versa. Another important application of transcoding is multipoint video conferencing; here, transcoding may be needed to implement video mixing for continuous presence multipoint bridging.
FIG. 1, numeral 100, is a block diagram schematic of a predictive waveform encoder as is known in the art. A sequence of vectors consisting of groups of samples r_i taken from an original waveform is processed to generate a sequence of quantized vectors Y_i, where i = 0, 1, ... is a time index indicating the order in which the input vectors are processed. The dimensionality L of the input vectors is arbitrary. In typical speech applications L = 1, whereas in many video compression applications, L > 1.
The encoder operates iteratively such that: (1) a predictor unit (102) generates a prediction of the input vector r_i, represented by the vector p_i, based on one or more past reconstructed vectors z_j, j < i, using a predetermined linear prediction operator P_i; (2) the vector p_i is subtracted from r_i at a first combiner (104) to obtain the prediction error vector e_i = r_i - p_i, wherein the predictor P_i is typically chosen to minimize the average energy of the prediction error e_i; (3) the prediction error vector e_i is transformed by a transformation unit (106) according to E_i = A_i[e_i], where A_i[ ] represents a linear transformation; (4) the vector E_i is quantized using a quantizer Q_i (108) to obtain the quantized vector Y_i = E_i + D_i, where D_i is a quantization error vector, and the quantized vector Y_i is encoded into a binary word using a noiseless encoding method (e.g., a Huffman code), and then it is transmitted or stored; (5) the quantized vector Y_i is then inverse transformed at Inverse Transformation Unit A_i^-1 (110) to find the vector y_i = A_i^-1[Y_i], where A_i^-1[ ] is an inverse transformation (i.e., A_i^-1[A_i[x]] = x); and (6) the vector p_i is added
by a second combiner (112) to y_i to obtain the reconstructed vector z_i = y_i + p_i, which is stored for use in later iterations.
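The six-step loop above can be sketched in a minimal scalar form. This is an illustrative sketch only, assuming L = 1, A_i = I, a first-order predictor p_i = z_{i-1}, and a uniform quantizer; none of these specific choices are mandated by the description.

```python
def quantize(x, step):
    """Illustrative uniform scalar quantizer Q_i: nearest multiple of `step`."""
    return step * round(x / step)

def predictive_encode(samples, step):
    """Scalar DPCM encoder sketch (L = 1, A_i = I, predictor p_i = z_{i-1})."""
    z_prev = 0.0                  # last reconstructed sample z_{i-1}
    quantized = []
    for r in samples:
        p = z_prev                # step (1): predict from past reconstruction
        e = r - p                 # step (2): prediction error e_i = r_i - p_i
        Y = quantize(e, step)     # steps (3)-(4): with A_i = I, Y_i = e_i + D_i
        quantized.append(Y)       # Y_i would then be noiselessly encoded
        z_prev = Y + p            # steps (5)-(6): reconstruction z_i = y_i + p_i
    return quantized

Ys = predictive_encode([0.9, 1.7, 2.2, 1.1], 0.5)
```

Because the prediction is formed from the reconstructed (not original) samples, the quantization error of each step does not accumulate into later reconstructions, matching the property noted above.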
In most applications, the transformation A_i is fixed a priori, i.e., is predetermined, whereas Q_i and P_i are varied using preselected adaptation algorithms. In some applications, the transformation A_i is not used; then A_i = I, where I is an LxL identity matrix. In so-called forward adaptation, the parameters of Q_i, P_i and A_i are passed to the decoder as side information, while in so-called backward adaptation, Q_i, P_i and A_i are determined at the decoder from previously received information, so no side information needs to be sent.
Given the information on Q_i, P_i and A_i, a decoder can reconstruct the vector z_i. The decoder (200) first recovers the quantized vectors {Y_i} from the received bit stream by decoding the noiseless source code and then obtains z_i. As shown in Fig. 2, numeral 200, (1) the quantized vector Y_i is first inverse transformed using the inverse transformation unit A_i^-1 (202) to obtain y_i = A_i^-1[Y_i]; (2) a predictor (206) obtains the prediction p_i of the input vector r_i from one or more past reconstructed vectors z_j, j < i, using the prediction operator P_i, as in the encoder; and (3) a combiner (204), operably coupled to the predictor (206) and to the transformation unit (A_i^-1) (202), adds the vector p_i to y_i to obtain the reconstructed vector z_i.
The reconstructed vector z_i can be represented as z_i = r_i + d_i, where d_i = A_i^-1[D_i] is an inverse-transformed version of the quantization error vector D_i. In other words, z_i differs from the original vector r_i only by d_i = A_i^-1[D_i]. To obtain good performance, the transformation A_i is chosen such that the error A_i^-1[D_i], or an appropriately weighted version of it, is kept small.
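Under the same illustrative scalar assumptions (L = 1, A_i = I, p_i = z_{i-1}), the decoder reduces to accumulating the quantized prediction errors; the input stream below is a hypothetical quantized sequence, not data from the patent.

```python
def predictive_decode(quantized):
    """Scalar DPCM decoder sketch (L = 1, A_i = I, predictor p_i = z_{i-1}):
    each z_i = y_i + p_i, mirroring the encoder's reconstruction loop."""
    z_prev = 0.0
    reconstructed = []
    for Y in quantized:           # Y_i recovered from the noiseless code
        z = Y + z_prev            # z_i = y_i + p_i (A_i = I, so y_i = Y_i)
        reconstructed.append(z)
        z_prev = z
    return reconstructed

# Hypothetical quantized stream; each output z_i differs from the original
# r_i only by the inverse-transformed quantization error d_i.
zs = predictive_decode([1.0, 0.5, 0.5, -1.0])
```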
A transcoder first recovers the sequence of quantized vectors {Y_i} from the received bit stream by decoding the noiseless source code, converts {Y_i} into a sequence of transcoded vectors {Y_i'}, and then generates a new bit stream representing {Y_i'} using the noiseless source code. The transcoder has full knowledge of the operators Q_i, A_i and P_i used at the original encoder and decoder, either a priori or through received side information.
In prior art "decode and re-encode" transcoding, a quantized vector Y_i is first decoded using the decoder of Fig. 2 to obtain the reconstructed vector z_i = r_i + d_i, and then z_i is re-encoded using an encoder, possibly with a different quantizer Q_i', a different predictor P_i' or even a different transformation A_i', to obtain the transcoded vector Y_i'. The transcoded vector can be decoded by the decoder of Fig. 2 using Q_i', P_i' and A_i'. The final reconstructed vector z_i' can then be represented as z_i' = r_i + d_i + d_i', where d_i' = (A_i')^-1[D_i'] is a transformed version of the quantization error introduced by the transcoder.
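A minimal sketch of this prior-art transcoder, under the same illustrative scalar assumptions (L = 1, A_i = I, p_i = z_{i-1}, uniform quantizers, and Q_i' simply coarser than Q_i): it runs a complete decoder followed by a complete encoder per stream.

```python
def quantize(x, step):
    """Illustrative uniform scalar quantizer: nearest multiple of `step`."""
    return step * round(x / step)

def encode(samples, step):
    """Scalar DPCM encoder (p_i = z_{i-1}, A_i = I)."""
    z, out = 0.0, []
    for r in samples:
        Y = quantize(r - z, step)
        out.append(Y)
        z += Y                    # reconstruction z_i = Y_i + z_{i-1}
    return out

def decode(Ys):
    """Matching scalar DPCM decoder."""
    z, out = 0.0, []
    for Y in Ys:
        z += Y
        out.append(z)
    return out

def transcode_decode_reencode(Ys, coarser_step):
    """Prior-art transcoder: fully decode to z_i = r_i + d_i, then fully
    re-encode with a coarser quantizer Q_i' (same P_i, A_i = I here)."""
    return encode(decode(Ys), coarser_step)

Ys2 = transcode_decode_reencode(encode([0.9, 1.7, 2.2, 1.1], 0.25), 1.0)
```

The full decode/re-encode pair is what makes this approach expensive: both a complete predictor loop and (in the general transform case) two transformations must run for every vector.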
Although conceptually straightforward, the implementation of the decode and re-encode method can be quite costly because of its high computational and storage requirements. Thus, there is a need for an efficient transcoding device and method that can be implemented with low complexity.
Brief Descriptions of the Drawings
FIG. 1 is a block diagram schematic of a predictive waveform encoder as is known in the art.
FIG. 2 is a block diagram schematic of a decoder that typically operates in conjunction with an encoder of FIG. 1, as is known in the art.
FIG. 3, numeral 300, is a general block diagram schematic of a system for transcoding.
FIG. 4 is a block diagram schematic of a device in accordance with the present invention.
FIG. 5 is a block diagram schematic of the device of FIG. 4 shown with greater particularity.
FIG. 6 is an alternative block diagram schematic of the device of FIG. 4 shown with greater particularity.
FIG. 7 shows a typical configuration of an H.261 video encoder, as is known in the art.
FIG. 8 illustrates the partition of an image into increasingly smaller blocks in the H.261 standard.
FIG. 9 shows the indexing of the coefficients in the transform domain for the H.261 standard.
FIG. 10 shows a block diagram schematic of one embodiment of the one-step transcoder of the present invention.
FIG. 11 is a block diagram schematic of an H.261 decoder as is known in the art.
FIG. 12 is a flow chart of one embodiment of steps in accordance with the method of the present invention.
Detailed Description of a Preferred Embodiment
FIG. 3, numeral 300, is a general block diagram schematic of a system for transcoding as is known in the art. The input of the transcoder (304) is a bit stream generated by a predictive waveform encoder (302) such as the H.261 video encoder. Its output is another bit stream which can be decoded by a predictive waveform decoder (306). The transcoder (304) modifies the bit stream according to a predetermined objective.
The transcoder device of the present invention, referred to herein as a "one-step transcoder," achieves the performance of the "decode and re-encode" transcoder with only two transformations and one prediction operation, provided that the predictor P_i' and the transformation A_i' used in the transcoder are the same as the operators P_i and A_i used in the original encoder, thus decreasing complexity in the transcoding operation. In addition, the one-step transcoder reduces the storage requirements.
In the one-step transcoder, shown in a block diagram schematic in Fig. 4, numeral 400, a modified version S_i of the quantization error vector is subtracted from the received quantized vector Y_i, and the difference vector E_i' is re-quantized to obtain the transcoded vector Y_i'. A quantization error calculator then computes the inverse-transformed quantization error vector d_i' = A_i^-1[D_i'], where D_i' = Y_i' - E_i' is the quantization error vector and A_i^-1 is an inverse transformation. A modifying circuitry determines the modified quantization error vector S_i based on past vectors d_j', j < i.
The present invention includes a device for transcoding a sequence of quantized vectors Y_i generated by a predictive waveform encoder, utilizing modified quantization error vectors S_i. The device (400) includes an adder (402), a quantizer (404), a quantization error vector calculator (QEVC) (406), and modifying circuitry (408). The adder (402) is operably coupled to receive at least a first quantized vector Y_i and at least a first modified quantization error vector S_i and is utilized for generating a difference vector E_i' = Y_i - S_i between the quantized vector Y_i and the modified quantization error vector S_i. The quantizer (404) is operably coupled to the adder (402) and is used for quantizing the difference vector E_i' to obtain a transcoded vector Y_i'. The quantization error vector calculator (QEVC) (406) is operably coupled to receive at least two of the vectors Y_i, E_i' and Y_i' and is utilized for generating the inverse-transformed error vector d_i' = A_i^-1[D_i'], where D_i' is a quantization error vector and A_i^-1 is an inverse transformation. The modifying circuitry (408) operably couples the QEVC (406) to the adder (402) and is used for generating the modified quantization error vector S_i based on the past values of the vector d_i'.
In an alternate embodiment, the transcoding device of the present invention may be selected to include an adder (402), a generalized quantizer (410), and modifying circuitry (408). In this implementation, the adder (402) is operably coupled to receive at least a first quantized vector Y_i and at least a first modified quantization error vector S_i and is used for generating a difference vector E_i' = Y_i - S_i between the quantized vector Y_i and the modified quantization error vector S_i; the generalized quantizer (410) is operably coupled to the adder (402) and is used for quantizing the difference vector E_i' to obtain a transcoded vector Y_i', and for receiving at least one of the vectors Y_i and E_i' and generating an inverse-transformed error vector d_i' = A_i^-1[D_i'], where D_i' is a quantization error vector and A_i^-1 is an inverse transformation; and the modifying circuitry (408) is operably coupled to the generalized quantizer (410) and is used for generating the modified quantization error vector S_i based on past values of the vector d_i'.
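The claimed structure can be sketched in the same illustrative scalar setting (L = 1, A_i = I so S_i = s_i, and with the assumed first-order predictor p_i = z_{i-1} the modified error reduces to the previous d_{i-1}'); the comments map each step to the reference numerals above. The input stream and step sizes are hypothetical.

```python
def quantize(x, step):
    """Illustrative uniform scalar quantizer Q_i': nearest multiple of `step`."""
    return step * round(x / step)

def transcode_one_step(Ys, step2):
    """One-step transcoder sketch (L = 1, A_i = I, predictor p_i = z_{i-1})."""
    s, out = 0.0, []                 # s: modified quantization error S_i
    for Y in Ys:
        E = Y - s                    # adder (402): E_i' = Y_i - S_i
        Yp = quantize(E, step2)      # quantizer (404): Y_i' = Q_i'(E_i')
        s = Yp - E                   # QEVC (406): d_i' = D_i' = Y_i' - E_i'
        out.append(Yp)               # modifier (408): next S_i = P(d_i') = d_i'
    return out

Yp = transcode_one_step([1.0, 0.75, 0.5, -1.25], 1.0)
```

Note that no full decoder state (reconstructed waveform) is kept; only the running error term is stored, which is the storage saving the text describes.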
One embodiment of the one-step transcoder device of FIG. 4 is shown with greater particularity in Fig. 5, numeral 500. The device includes a first adder (502), a quantizer (504), a quantization error vector calculator (QEVC) (506) that includes a second adder (510) coupled to an inverse transformation unit (512), and modifying circuitry (508) that includes a predictor (514) coupled to a transformation unit (516). The first adder (502) and the quantizer (504) are coupled as shown for the adder (402) and the quantizer (404), respectively, of FIG. 4. The second adder (510) is operably coupled to receive vectors E_i' and Y_i' and is utilized for generating the quantization error vector D_i'. The inverse transformation unit (512) is operably coupled to the second adder (510) and is utilized for generating the inverse-transformed error vector d_i' = A_i^-1[D_i'], where D_i' is a quantization error vector and A_i^-1 is an inverse transformation. The predictor (514) of the modifying circuitry (508) is operably coupled to the inverse transformation unit (512) of the QEVC (506) and generates the predicted quantization error vector s_i. The transformation unit (516) is operably coupled to the predictor (514) and is used for transforming the predicted quantization error vector s_i to the vector S_i as described more fully below, and for providing the modified quantization error vector S_i to the first adder (502) based on past values of the vector d_i'.
Past inverse-transformed quantization error vectors dj' = Aj^-1[Dj'], j < i, are passed through the prediction operator Pi to obtain the predicted quantization error vector si. The vector si is transformed again to obtain the modified quantization error vector Si = Ai[si]. Then the vector Si is subtracted from the received quantized vector Yi to obtain the error vector Ei' = Yi - Si.

The error vector Ei' is quantized using the quantizer Qi' (504) to obtain the transcoded vector Yi' = Ei' + Di', where Di' is the quantization error vector introduced by the transcoder. The inverse-transformed quantization error vector di' is obtained by first subtracting the vector Ei' from Yi' (using the second adder (510)) to obtain the quantization error vector Di' = Yi' - Ei', and then transforming Di' (using the inverse transformation unit (512)) to obtain di' = Ai^-1[Di'].
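A single pass through this loop can be illustrated numerically. The sketch below assumes the scalar case with Ai equal to the identity (so Si = si) and a uniform mid-tread quantizer standing in for Qi'; the numbers and the quantizer are illustrative assumptions, not the H.261 quantizer described later.

```python
def quantize(x, step):
    # Uniform mid-tread quantizer (an illustrative stand-in for Qi').
    return step * round(x / step)

Y, s = 14.0, 0.5      # received quantized value Yi and predicted error si
E = Y - s             # Ei' = Yi - Si          (first adder 502)
Yq = quantize(E, 5)   # Yi' = Qi'(Ei')         (quantizer 504)
D = Yq - E            # Di' = Yi' - Ei'        (second adder 510)
d = D                 # di' = Ai^-1[Di'], identity transform here
assert (E, Yq, d) == (13.5, 15, 1.5)
```

The value d then feeds the predictor to form si for the next iteration.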
The one-step transcoder generates the same transcoded sequence {Yi'} as the decode and re-encode transcoder. This can be shown by proving that the signal at the input of the quantizer is the same in both cases:
First consider the decode and re-encode transcoder. In this case, the input to the quantizer in the re-encoder can be written as

Bi = Ai[zi] - Ai[pi]
   = Zi - Ai[rpi + dpi + dpi'],

where Zi = Ai[zi] is a transformed version of the decoder output zi (in the transcoder) and rpi, dpi and dpi' represent the outputs of the predictor Pi at time i if excited individually by the sequences {ri}, {di} and {di'}, respectively.
Similarly, the input of the quantizer in the one-step transcoder can be written as

Ci = Yi - Ai[dpi']
   = Zi - Ai[rpi + dpi] - Ai[dpi'].

Since Ai is a linear operator, it follows that Bi = Ci.
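The equivalence can be checked numerically in the scalar case (dimensionality L = 1 with Ai the identity, as the generalizations later in this document permit). The sketch below assumes a previous-sample predictor and uniform mid-tread quantizers; these are illustrative choices, not the H.261 quantizer. With a previous-sample predictor, the predicted quantization error si reduces to the previous di'.

```python
import random

def quantize(x, step):
    # Uniform mid-tread quantizer (illustrative assumption).
    return step * round(x / step)

def dpcm_encode(samples, step):
    # Predictive encoder: predict each sample by the previous
    # reconstruction and quantize the prediction error.
    z, out = 0, []
    for r in samples:
        Y = quantize(r - z, step)     # quantized prediction error Yi
        z = z + Y                     # reconstruction zi
        out.append(Y)
    return out

def decode_reencode(Y_seq, step2):
    # Brute-force transcoder: fully decode, then re-encode with step2.
    z, z2, out = 0, 0, []
    for Y in Y_seq:
        z = z + Y                     # decoder reconstruction zi
        Yq = quantize(z - z2, step2)  # Bi = zi - z2(i-1), re-quantized
        z2 = z2 + Yq                  # re-encoder reconstruction
        out.append(Yq)
    return out

def one_step(Y_seq, step2):
    # One-step transcoder: Ei' = Yi - si with si = previous di';
    # no full decoding is performed.
    d, out = 0, []
    for Y in Y_seq:
        E = Y - d                     # Ei' = Yi - Si (identity transform)
        Yq = quantize(E, step2)       # Yi'
        d = Yq - E                    # di' = Di' = Yi' - Ei'
        out.append(Yq)
    return out

random.seed(1)
r = [random.randint(-100, 100) for _ in range(200)]
Y = dpcm_encode(r, 2)
assert decode_reencode(Y, 5) == one_step(Y, 5)
```

The assertion holds for any input sequence and any step sizes, reflecting Bi = Ci; only the initial states (all zero here) must agree between the two structures.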
The present invention relies on the fact that the quantized vector Yi can be re-quantized without any error accumulation in the reconstruction, provided that a modified quantization error vector Si is subtracted from Yi before re-quantization. This compensates for the transcoder quantization error added by the prediction loop in the reconstruction.
The one-step transcoder of the present invention can also be implemented in other ways. One alternative structure is shown in FIG. 6.
FIG. 6, numeral 600, sets forth a block diagram schematic of an implementation of a device in accordance with the present invention wherein the difference between the input Yi and the output Yi' is fed through the feedback loop. The quantization error vector calculator (606) includes a second adder (610) that is operably coupled to receive Yi and Yi' for obtaining said difference Xi, an inverse transformation unit (612) that is operably coupled to the second adder (610) for providing vector xi, and a third adder (614) for receiving and combining xi and si; and modifying circuitry (608) that includes a predictor (616) that is operably coupled to the third adder (614), for utilizing di' to provide the predicted quantization error vector si, and a transformation unit (618) that is operably coupled to the predictor (616), for generating the modified quantization error vector Si. The difference in the implementation of FIG. 6 lies in the way the inverse-transformed quantization error vector di' is generated:
The input vector Yi is subtracted from the transcoded vector Yi' to obtain the vector Xi = Yi' - Yi, which is then inverse transformed to find xi = Ai^-1[Xi]. The predicted quantization error vector si is added to xi to determine di' = xi + si.
That the inverse-transformed quantization error vector di' is the same in both structures can be shown by noting that the vector xi in FIG. 6 can be written as

xi = Ai^-1[Yi' - Ei'] - si.

Therefore, the implementations of FIGs. 5 and 6 provide the same performance.
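The bookkeeping behind this identity can be checked with scalar values (A taken as the identity, and all numbers illustrative): the FIG. 6 path Xi = Yi' - Yi followed by di' = xi + si yields the same di' as the FIG. 5 path Di' = Yi' - Ei'.

```python
Y, s = 17.0, 1.5   # received Yi and predicted quantization error si
E = Y - s          # Ei' = Yi - Si (shared by both structures)
Yq = 15.0          # some re-quantized value Yi' (illustrative)
d_fig5 = Yq - E    # FIG. 5: Di' from the second adder
x = Yq - Y         # FIG. 6: Xi = Yi' - Yi, and xi = Xi for identity A
d_fig6 = x + s     # di' = xi + si
assert d_fig5 == d_fig6 == -0.5
```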
The one-step transcoder may also be utilized to change the rate of a bit stream generated by a video encoder operating according to the ITU-T Recommendation H.261. First, a typical configuration of an H.261 encoder is shown in FIG. 7, numeral 700.
The input to the encoder consists of a sequence of images scanned progressively at a nominal image rate of about 30 images per second. Each image consists of a luminance component Y and two color difference components CB and CR, sampled according to one of two formats, CIF (802) and QCIF (804), wherein:
CIF: 352 x 288 (Y), 176 x 144 (CR), 176 x 144 (CB)
QCIF: 176 x 144 (Y), 88 x 72 (CR), 88 x 72 (CB).
Each image is partitioned into increasingly smaller segments as illustrated in FIG. 8, numeral 800. CIF images are divided into 12 Groups Of Blocks (GOB's) and QCIF images are divided into three GOB's. Each GOB (806) consists of 33 macro blocks (MB's), and a MB consists of four luminance blocks (808) and two color difference blocks (810, 812), where each block has 64 pixels (814) arranged on an 8x8 grid. Each pixel is represented by an integer between 1 and 254.
MB's are the fundamental coding elements in H.261. The six 8x8 blocks in a MB are numbered from 1 to 6 as shown in FIG. 8.
Let ri,k,m,n represent the pixel in position (m,n) in the k'th block of the i'th MB, where i = 0, 1,..., k = 1,..., 6 and m, n = 0, 1,..., 7. Then the input vector ri for the i'th MB can be represented as:

ri = [ri,1,0,0, ..., ri,1,7,7, ri,2,0,0, ..., ri,2,7,7, ri,3,0,0, ..., ri,3,7,7, ri,4,0,0, ..., ri,4,7,7, ri,5,0,0, ..., ri,5,7,7, ri,6,0,0, ..., ri,6,7,7].
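The layout of this 384-element vector can be expressed as an index map. The helper below is hypothetical and only illustrates the ordering described above (blocks 1 through 6 in sequence, each scanned as an 8x8 raster):

```python
def mb_index(k, m, n):
    # Flat position of pixel r_{i,k,m,n} inside the MB vector:
    # blocks k = 1..6 in order, each an 8x8 raster (row m, column n).
    return (k - 1) * 64 + m * 8 + n

assert mb_index(1, 0, 0) == 0     # first pixel of block 1
assert mb_index(2, 0, 0) == 64    # block 2 starts after 64 pixels
assert mb_index(6, 7, 7) == 383   # last pixel of block 6
```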
The operation of the encoder in FIG. 7, numeral 700, for the i'th MB is described as follows: First, a motion estimation unit (702) utilizes an algorithm to determine a motion vector mi = (mi1, mi2). Typically, the algorithm searches the luminance pixels in the previous reconstructed image (stored in a frame buffer (704)) to find a 16x16 window Wi for which the "distance" between the pixels in that window and the corresponding pixels in the current MB is minimum. The motion vector mi represents the spatial offset between the window Wi and the current (i'th) MB.
The pixels that lie in the window Wi form the motion-compensated vector ui = [ui,1,0,0, ..., ui,6,7,7] that is stored in the motion compensation unit (706). Thus, the motion estimation unit (702) is operably coupled to the frame buffer (704), and provides the motion vector to the motion compensation unit (706), which is also operably coupled to the frame buffer (704). The mode unit (710) that is operably coupled to receive the input vector ri determines the encoding mode (Inter/Intra). The prediction pi = [pi,1,0,0, ..., pi,6,7,7] of the input vector ri is obtained from ui based on the encoding mode of the current MB:
a. In the intra mode, set pi = 0.

b. In the inter (predictive) mode:

b1. If the loop filter (708) is "out," set pi = ui.
b2. If the loop filter (708) is "in," filter the elements of ui (block-by-block) using a separable, two-dimensional, 3-tap FIR filter, and set the output of the loop filter equal to pi.

In each case, the loop filter (708) is operably coupled to the motion compensation unit (706) to operate as set forth above.
At the first adder (712), the vector pi is subtracted from the input ri to obtain the prediction error vector ei = [ei,1,0,0, ..., ei,6,7,7] = ri - pi. The vector ei is transformed to find Ei = [Ei,1,0,0, ..., Ei,6,7,7] = A[ei], where A[ ] here represents the Discrete Cosine Transform (DCT) at a DCT unit (714) that is operably coupled to the first adder (712). The DCT is applied independently to each 8x8 block in the MB to obtain the transform coefficients Ei,k,s,t according to:

Ei,k,s,t = 0.25 C(s) C(t) Σ(0 ≤ m,n ≤ 7) ei,k,m,n cos[π(2m+1)s/16] cos[π(2n+1)t/16],

where C(s) = 1/√2 for s = 0, and 1 otherwise, and C(t) = 1/√2 for t = 0, and 1 otherwise. Here s and t are the transform domain variables. Note that the same transformation A[ ] is used in every MB. FIG. 9, numeral 900, shows how the indices t (902) and s (904), both in the range 0 to 7, are used in the transform domain.
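The forward transform above, and the matching inverse used later in this description, can be sketched directly from the formula. The code is a pure-Python, unoptimized rendering (real codecs use fast DCT algorithms); the round-trip check relies on the transform being orthonormal.

```python
import math
import random

def C(u):
    # Normalization factor from the text: 1/sqrt(2) at u = 0, else 1.
    return 1 / math.sqrt(2) if u == 0 else 1.0

def dct8x8(e):
    # E[s][t] per the formula above, applied to one 8x8 block e[m][n].
    return [[0.25 * C(s) * C(t) * sum(
                e[m][n] * math.cos(math.pi * (2*m + 1) * s / 16)
                        * math.cos(math.pi * (2*n + 1) * t / 16)
                for m in range(8) for n in range(8))
             for t in range(8)] for s in range(8)]

def idct8x8(E):
    # Matching inverse transform (the operation of units such as 724).
    return [[0.25 * sum(
                C(s) * C(t) * E[s][t]
                * math.cos(math.pi * (2*m + 1) * s / 16)
                * math.cos(math.pi * (2*n + 1) * t / 16)
                for s in range(8) for t in range(8))
             for n in range(8)] for m in range(8)]

random.seed(0)
block = [[random.randint(1, 254) for _ in range(8)] for _ in range(8)]
rec = idct8x8(dct8x8(block))
assert all(abs(rec[m][n] - block[m][n]) < 1e-6
           for m in range(8) for n in range(8))
```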
A quantizer (716) is operably coupled to receive the transform coefficients Ei,k,s,t and quantizes the coefficients using a scalar quantizer which is uniform with step size Δi except for a dead-zone around 0. The reconstruction values of the quantizer are {0, ±ai, ±(ai + Δi), ±(ai + 2Δi), ...}, with -2048 ≤ ±(ai + 126Δi) < 2048, where Δi = 2, 4,..., 62 and ai = 3Δi/2 when Δi/2 is odd and ai = 3Δi/2 - 1 otherwise. The same step size Δi is utilized for all transform coefficients in the MB, except in the intra mode, where the sample Ei,k,0,0 is quantized using a uniform scalar quantizer of step size 8 with no dead-zone (also, since Ei,k,0,0 ≥ 0, only the positive reconstruction values are needed in this case). The decision regions of the quantizer are selected to improve the image quality as much as possible.
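A generic quantizer of this family can be sketched as follows. The dead-zone width and the rounding of inputs to reconstruction levels are assumptions made purely for illustration; as noted above, H.261's actual decision regions are tuned for image quality, so only the set of reconstruction levels and the clamp follow the text.

```python
def deadzone_quantize(x, step, a):
    # Map x to a value in {0, ±a, ±(a + step), ..., ±(a + 126*step)}.
    # Schematic only: the dead-zone width and rounding are assumptions.
    if 2 * abs(x) < a:                # dead-zone around 0 (assumed width)
        return 0
    k = round((abs(x) - a) / step)    # nearest level index (assumed rule)
    k = max(0, min(k, 126))           # clamp to the 126-step range above
    return (1 if x > 0 else -1) * (a + k * step)
```

For example, with step = 4 and a = 6, inputs near zero map to 0 while larger inputs land on the nearest level of the ladder 6, 10, 14, ...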
The output of the quantizer (716) is the quantized transform vector Yi = [Yi,1,0,0, ..., Yi,6,7,7] = Ei + Di, where Di = [Di,1,0,0, ..., Di,6,7,7] is the quantization error vector.
The quantized vector Yi is input into an inverse DCT unit (724) and is further transformed to generate the vector yi = [yi,1,0,0, ..., yi,6,7,7] = A^-1[Yi], where A^-1[ ] is an inverse DCT. The pixels yi,k,m,n are determined according to:

yi,k,m,n = 0.25 Σ(0 ≤ s,t ≤ 7) C(s) C(t) Yi,k,s,t cos[π(2m+1)s/16] cos[π(2n+1)t/16].
The vector pi is input to a second adder (726) and is added to yi to obtain the reconstructed vector zi = [zi,1,0,0, ..., zi,6,7,7] = yi + pi, and the pixels zi,k,m,n are stored in the frame buffer (704).
The quantized transform coefficients Yi,k,s,t are typically encoded into a CBR bit stream, for example, by utilizing a variable-length encoder (720) with an output buffer (722), and then transmitted (or stored). First the coefficients in each block are converted from the 8x8 matrix format into a serial format using what is known as 'zig-zag scanning' (see FIG. 9), and then the coefficients in each block are represented by a sequence of (Run, Level) values, where "Run" represents the number of zeroes before the next non-zero value "Level." These (Run, Level) values are then encoded using a binary variable-length code. The output of the variable-length encoder is typically buffered (722) in order to generate a CBR bit stream, and the quantization step size is adjusted by a quantizer control unit (718) to prevent buffer overflows (or underflows). The quantizer control unit (718) is operably coupled to the buffer (722) and provides an adjusting signal to the quantizer (716).
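The two serialization steps can be sketched as follows. The alternating-diagonal convention for the zig-zag scan is assumed to match FIG. 9; the (Run, Level) conversion follows the definition above.

```python
def zigzag_order(n=8):
    # Diagonal scan order: traverse each anti-diagonal d = r + c,
    # alternating direction, as in the zig-zag scan of FIG. 9.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def run_level(coeffs):
    # (Run, Level) pairs: Run counts zeroes before each non-zero Level.
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs

assert zigzag_order()[:6] == [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
assert run_level([5, 0, 0, -3, 0, 1, 0]) == [(0, 5), (2, -3), (1, 1)]
```

Trailing zeroes produce no pair; in H.261 they are handled by an end-of-block code rather than by the run-level list itself.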
In addition to the quantized transform coefficients Yi,k,s,t, the encoder also transmits side information to allow the decoder to correctly reconstruct the coded signal. Side information includes the source format (CIF/QCIF), quantizer step size Δi, inter/intra decision, motion vector mi (in inter mode only) and the loop filter in/out (when a motion vector is present).
The step size Δi can be kept fixed for an entire GOB. In that case, only one step size value per GOB needs to be transmitted as side information. It is also possible to change Δi inside the GOB. This provides a finer adjustment of the step size at the expense of a larger overhead.
The H.261 encoder also transmits side information to allow the encoder to skip a block or a MB. For example, when all the coefficients Yi,k,s,t in an 8x8 block are zero, the encoder does not code that block at all. Similarly, when there is little motion, or when the motion estimation is nearly perfect, all coefficients in a MB may be zero. In that case, the encoder may skip the entire MB. When a block or a MB is skipped in the encoder, the decoder simply substitutes zeroes for the missing coefficients.
The maximum image rate in H.261 is approximately 30 images/sec, but the Recommendation allows the encoder to regularly skip 0, 1, 2 or 3 images to achieve effective image rates of 30, 15, 10 and 7.5 images/sec. The encoder may also skip an image occasionally. This can be used, for example, immediately after encoding an image in the intra mode. Since the intra mode typically generates a large number of bits, skipping an image can help reduce the buffering delay.
The rate of a bit stream generated by an H.261 encoder can be changed using the transcoder of the present invention. A transcoder first decodes the received bit stream using a decoder for the variable-length code to obtain the sequence of (Run, Level) values, and then recovers the sequence of quantized vectors Yi = [Yi,1,0,0, ..., Yi,6,7,7] generated by the encoder. The decoder also recovers all side information. If a block or MB is not coded, the decoder inserts zeroes for the corresponding missing coefficients.
Typical subsequent operations of the one-step transcoder are described below (see FIG. 10, numeral 1000). A variable-length decoder (1002) outputs vector Yi to a first adder (1004) and provides framing information, inter/intra information, information about which blocks are coded (coded block pattern or CBP), step size Δi, motion vectors mi and loop filter information to various elements of the transcoder along a feedforward path.
The first adder (1004) combines vector Yi and a modified quantization error vector Si to provide vector Ei' to a quantizer (1006) and to a second adder (1014). The quantizer (1006) is operably coupled to the first adder (1004) and provides a quantized output vector Yi' to a variable-length encoder (1010) and to the second adder (1014). The variable-length encoder (1010) is operably coupled to the quantizer (1006) and to receive information from the variable-length decoder (1002) and generates the output bits. The buffer (1012) is operably coupled to the variable-length encoder (1010) and provides a means for storing output bits before transmission and also provides an input to a quantizer control (1008). The quantizer control (1008) is operably coupled to the buffer (1012) and provides a control signal to the quantizer (1006) as described more fully above. The second adder (1014) is operably coupled to receive the vectors Ei' and Yi' and provides Di' = Yi' - Ei'. An inverse DCT unit (1016) is operably coupled to the second adder (1014) and provides an inverse-transformed vector di'. The frame buffer (1018) is operably coupled to the inverse DCT unit (1016) and provides output vector zi. A motion compensation unit (1020) is operably coupled to the frame buffer and to receive a motion vector from the variable-length decoder (1002) and is utilized to provide an output vector wi. The loop filter (1022) is operably coupled to the motion compensation unit (1020), receives in/out loop filter information from the variable-length decoder (1002), and outputs the predicted quantization error vector si. A DCT unit (1024) is operably coupled to the loop filter (1022) and outputs vector Si.
The above-cited vectors are further described as follows:
(1) No motion estimation is performed in the transcoder. Instead, the motion vector mi = (mi1, mi2) received from the encoder is used to determine the 16x16 window Wi, and the pixels in the transformed quantization error buffer (see below) that lie in that window are used to form the vector wi = [wi,1,0,0, ..., wi,6,7,7]. Again, the motion vector mi represents the spatial offset between the window Wi and the current (i'th) MB.
The predicted quantization error vector si = [si,1,0,0, ..., si,6,7,7] is obtained from wi based on the encoding mode of the current MB:

a. In the intra mode, set si = 0.

b. In the inter (predictive) mode:

b1. If the loop filter is out, set si = wi.
b2. If the loop filter is in, filter the elements of wi (block-by-block) using a separable, two-dimensional 3-tap FIR filter, and set the output of the filter equal to si.
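Rules a, b1 and b2 can be sketched as a selection function. The names and the `fir` argument are hypothetical placeholders (the actual 3-tap separable filter is defined by the Recommendation, not here):

```python
def predicted_error(w, mode, loop_filter_in, fir):
    # Select s_i per rules a, b1, b2 above. `fir` stands in for the
    # separable, two-dimensional 3-tap FIR loop filter.
    if mode == "intra":
        return [0.0] * len(w)                        # rule a
    return fir(w) if loop_filter_in else list(w)     # rules b2 / b1

identity = lambda w: list(w)  # trivial stand-in filter for illustration
assert predicted_error([1.0, 2.0], "intra", False, identity) == [0.0, 0.0]
assert predicted_error([1.0, 2.0], "inter", False, identity) == [1.0, 2.0]
```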
(2) The vector si is transformed again to obtain the modified quantization error vector Si = A[si], where A[ ] represents the DCT operation, according to:

Si,k,s,t = 0.25 C(s) C(t) Σ(0 ≤ m,n ≤ 7) si,k,m,n cos[π(2m+1)s/16] cos[π(2n+1)t/16].
(3) The modified quantization error vector Si = [Si,1,0,0, ..., Si,6,7,7] is subtracted from the coded vector Yi to obtain the error vector Ei' = [E'i,1,0,0, ..., E'i,6,7,7] = Yi - Si.
(4) The coefficients E'i,k,s,t are (re-)quantized using an H.261 quantizer as in the encoder, possibly with a different step size Δi', except for the term E'i,k,0,0 in the intra mode, where the same step size Δi' = 8 is used. The result is the transcoded transform vector Yi' = [Y'i,1,0,0, ..., Y'i,6,7,7] = Ei' + Di', where Di' = [D'i,1,0,0, ..., D'i,6,7,7] is the quantization noise vector for the transcoder.
(5) The error vector Ei' is subtracted from Yi' to obtain the quantization error vector Di', and then Di' is inverse transformed using an inverse DCT to obtain di' = A^-1[Di']:

d'i,k,m,n = 0.25 Σ(0 ≤ s,t ≤ 7) C(s) C(t) D'i,k,s,t cos[π(2m+1)s/16] cos[π(2n+1)t/16].

The pixels d'i,k,m,n are stored in a transformed quantization error frame buffer for future iterations.
In the following example, the transcoder uses the motion vector mi, the inter/intra decision and the loop filter decision received from the H.261 encoder without any modifications. The source format (CIF/QCIF) is also not modified. This greatly simplifies the implementation of the transcoder.
The transcoder converts the transcoded sequence Yi' into a bit stream using a noiseless source encoder, as in the H.261 encoder.
The bit stream generated by the transcoder is typically decoded by an H.261 decoder, as shown in FIG. 11, numeral 1100. Here, after recovering the sequence Yi' from the received bit stream as usual, the decoder reconstructs the signal zi' = [z'i,1,0,0, ..., z'i,6,7,7] = ri + di + di', as follows:
(1) The received vector Yi', typically output by a variable-length decoder (1102), is first transformed using an inverse DCT unit (1104) that is operably coupled to the variable-length decoder (1102), wherein the inverse DCT unit (1104) determines an inverse DCT to obtain the vector yi' = [y'i,1,0,0, ..., y'i,6,7,7]:

y'i,k,m,n = 0.25 Σ(0 ≤ s,t ≤ 7) C(s) C(t) Y'i,k,s,t cos[π(2m+1)s/16] cos[π(2n+1)t/16].
An adder (1106) is operably coupled to the inverse DCT unit (1104) and to a loop filter (1108) to combine yi' and pi' and generate the output vector zi'. Vector zi' is an output of the H.261 decoder and is also utilized as a feedback vector to a frame buffer (1112).
(2) A motion compensation unit (1110) is operably coupled to receive the output of the frame buffer (1112) and a motion vector mi = (mi1, mi2) from the encoder that is used to determine a 16x16 window Wi. The pixels in the reconstruction buffer that lie in that window are used to form the vector ui' = [u'i,1,0,0, ..., u'i,6,7,7].
A loop filter (1108) is operably coupled to the motion compensation unit (1110) for providing vector pi' to the adder (1106). The prediction value pi' = [p'i,1,0,0, ..., p'i,6,7,7] is obtained from ui' based on the encoding mode of the current MB:

a. In the intra mode, set pi' = 0.

b. In the predictive (inter) mode:

b1. If the loop filter (1108) is out, set pi' = ui'.
b2. If the loop filter (1108) is in, filter the elements of ui' (block-by-block) using a separable, two-dimensional 3-tap FIR filter, and set the output of the filter equal to pi'.
(3) The vector pi' is added to yi' to obtain the reconstructed vector zi' = yi' + pi'.
The overall quantization error is then the sum of the quantization errors di and di' introduced by the encoder and the transcoder, respectively.
FIG. 12, numeral 1200, is a flow chart of one embodiment of the steps in accordance with the method of the present invention. The method provides for transcoding a sequence of quantized vectors Yi generated by a predictive waveform encoder utilizing modified quantization error vectors Si. The vectors cited below are described with greater particularity above.
The method comprises the steps of: (1) generating a difference vector Ei' = Yi - Si between at least a first quantized vector Yi and a modified quantization error vector Si (1202); (2) quantizing the difference vector Ei' to obtain a transcoded vector Yi' (1204); (3) receiving at least two of the vectors Yi, Ei' and Yi' and generating the inverse-transformed error vector di' = Ai^-1[Di'], where Di' is a quantization error vector and Ai^-1 is an inverse transformation (1206); and (4) generating the modified quantization error vector Si based on past values of the vector di' (1208).
As described for the device of the present invention, the transformation Ai^-1 may be selected to be constant, the quantized vector Yi may have dimensionality L = 1, and the inverse transformation Ai^-1[ ] may be an identity matrix.
The step of generating the modified quantization error vector Si based on past values of the vector di' may be selected to include: (1) generating a predicted quantization error vector si, and (2) utilizing si to generate a modified quantization error vector Si such that Si = Ai[si], where Ai is a predetermined transformation. In addition, this step may include one of (1)-(2): (1) utilizing received vectors Yi' and Ei' for determining a quantization error Di' = Yi' - Ei', and utilizing an inverse transformation Ai^-1 to generate the vector di' according to di' = Ai^-1[Di']; and (2) receiving and utilizing vectors Yi and Yi' for determining an error Xi = Yi' - Yi, utilizing the inverse transformation Ai^-1 for generating a vector xi according to xi = Ai^-1[Xi], and adding xi and si to obtain the vector di'.
The quantized vector Yi may be generated by a predictive digital video encoder and may represent quantized transform coefficients of a macro block (MB), a fundamental coding element in H.261. A motion-compensated prediction (MCP) may be utilized for prediction. The transformation A[ ] is a Discrete Cosine Transform (DCT).
In one embodiment, the video encoder may be selected to be an H.261 encoder.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
1 S = Zi - A;[rp; + dp; + dp;'], where Z; = A;[z;] is a transformed version of the decoder output z;
(in the transcoder) and rp;, dp; and dp;' represent the outputs of the predictor P; at time i if excited individually by the sequences 2 0 {r;}, {d;} and {d;'}, respectively.
Similarly, the input of the quantizer in the one-step transcoder can be written as 2 5 C; = Y; - A;[dp;'], = Z; - A;[rp; + dpi] - Ai[dpi'].
Since A; is a linear operator, it follows that B; = C;.
3 0 The present invention relies on the fact that the quantized vector Y; can be re-quantized without any error accumulation in the reconstruction, provided that a modified quantization error vector S; is added to Y; before re-quantization. This compensates for the transcoder quantization error added by the prediction loop 3 5 in . the reconstruction.
_ _ _ 254885 . 11 The one-step transcoder of the present invention can also be implemented in other ways. One alternative structure is shown in Fig. 6.
FIG. 6, numeral 600, sets forth a block diagram schematic of an implementation of a device in accordance with the present invention wherein the difference between the input Y; and the output Y;' is fed through the feedback loop. The quantization error vector calculator (606) includes a second adder (610) that is operably coupled to receive Y; and Y;' for obtaining said difference {X;}, an inverse transformation unit (612) that is operably coupled to the second adder (610) for providing vector x;, and a third adder (614) for receiving and combining x; and s;, and modifying circuitry (608) that includes a predictor (616) that is operably coupled to the third adder (614), for utilizing d;' to provide the predicted quantization error vector s;, and a tranformation unit (618) that is operably coupled to the predictor (616), for generating modified quantization error vector S;. The difference 2 0 in the implementation of FIG. 6 lies in the way the transformed quantization error vector d;' is generated:
The input vector Y; is subtracted from the transcoded vector Y;' to obtain the vector X; = Y;' - Y;, which is then transformed to 2 S find x; = A;-~ [X;']. The predicted quantization error vector s; is added to x; to determine d;' _, x; + s;.
That the inverse-transformed quantization vector d;' is the same in both structures can be shown by noting that the vector x;
3 0 in FIG. 6 can be written as x; = A;-~ [Y;' - E;'] - s;' .
Therefore, the implementations of FIGs. 5 and 6 provide the same 3 S performance.
_ ~ 215 4~~
The one-step transcoder may -apse be utilized to change the rate of a bit stream generated by a video encoder operating according to the ITU-T Recommendation H.261. First, a typical configuration of an H.261 encoder is shown in FIG. 7, numeral 700.
The input to the encoder consists of a sequence of images scanned progressively at a nominal image rate of about 30 images per second. Each image consists of a luminance component Y and 1 0 two color difference components CB and CR, sampled according to one of two formats, CIF (802) and QCIF (804), wherein:
CIF: 352 x 288 (Y), 176 x 144 (CR), 176 x 144 (CB) QCIF: 176 x 144 (Y), 88 x 72 (CR), 88 x 72 (CB).
Each image is partitioned into increasingly smaller segments as illustrated in FIG. 8, numeral 800. CIF images are divided into 12 Group Of Blocks (GOB's) and QCIF images are divided into three GOB's. Each GOB (806) consists of 33 macro blocks (MB's), and a 2 0 MB consists of four luminance blocks (808) and two color difference blocks {810, 812), where each block has 64 pixels (814) arranged on an 8x8 grid. Each pixel is represented by an integer between 1 and 254.
2 5 MB's are the fundamental coding elements in H.26. The six 8x8 blocks in a MB are numbered from 1 to 6 as shown in FIG. 8.
Let r;,k,m,n represent the pixel in position (m,n) in the k'th block of the i'th MB, where i = 0, 1,....., k = 1,..., 6 and m, n = 0, 1,...,7.
Then the input vector r; for the i'th MB can be represented as:
r; _ [r;,1 ,0,0,...., r;,1,7,7, r;,2,0,0,...., r;,2,7,7, r;,8,0,0,...., r;,3,7,7 r;,4,p,0,...., r;,4,7,7, r;,~,p,0~...., r;,5,7,7, r;,g,0,0~...., r;,g,7,7] .
The operation of the encoder in FIG. 7, numeral 700, for the 3 S i'th MB is described as follows: First, a motion estimation unit - 21548$5 _ .~ _ (702) utilizes an algorithm to determine a motion vector m; _ (m;~ , m;2). Typically, the algorithm searches the luminance pixels in the previous reconstructed image (stored in a frame buffer (704)) to find a 16x16 window W; for which the "distance"
S between the pixels in that window and the corresponding pixels in the current MB is minimum. The motion vector m; represents the spatial offset between the window W; and the current (i'th) MB.
The pixels that lie in the window W; form the motion-1 0 compensated vector u; _ [u;,~ ,o,o,...., u;,s,7,7] that is stored in the motion compensation unit (706). Thus, the motion estimation unit {702) is operably coupled to the frame buffer (704), and provides a motion compensated vector to the motion compensation unit (706), which is also operably coupled to the 15 frame buffer (704). The mode unit (710) that is operably coupled to receive the input vector r; determines the encoding mode {Inter/lntra?). The prediction p; _ [p;,1,0,0,...., p;,6,7,7] of the input vector r; is obtained from u; based on the encoding mode of the current MB:
a. In the intra mode, set p; = 0.
b. In the inter (predictive) mode:
2 5 b1. If the loop filter (708) is "out," set p; = u;, b2. If the loop filter (708) is "in," filter the elements of u; {block-by-block) using a separable, two-dimensional, 3-tap -FIR filter, and set the output of the loop filter equal to Pi.
3 0 In each case, the loop filter (708) is operably coupled to the motion compensation unit (706) to operate as set forth above.
At the first adder (712), the vector p; is subtracted from the 35 input r; to obtain the prediction error vector e; _ [e;,~,o,o,...., ei,s,7,7] = ri - pi. The vector e; is transformed to find E; _ [Ei,~ ,o,o~...., Ei,s,7,~] = A[ei]~ where A[ ] here represents the Discrete Cosine Transform (DCT) at a DCT unit (714) that is operably coupled to the first adder (712). The DCT is applied independently to each 8x8 block in the MB to obtain the transform coefficients E;,k,s,t according to:
Ei,k,s,t = 0.25 C(s) C(t) ~~ o<m,n<7 ri,k,m,n cos[n(2m+1 )s/16]
cos[n(2n+1 )t/16], where C(s) = 1 hI2 for s = 0, and 1 otherwise, and C(t) = 1 hI2 for t - 0, and 1 otherwise. Here s and t are the transform domain variables. Note that the same transformation A[ ] is used in every MB. FIG. 9, numeral 900, shows how the indices t (902) and s 1 5 _ (904), both in the range 0 to 7, are used in the transform domain.
A quantizer (716) is operably coupled to receive the transform coefficients E;,k,s,t and quantizes the coefficients using a scalar quantizer which is uniform with step size o; except for a 2 0 dead-zone around 0. The reconstruction values of the quantizer are {0, ~a;, ~(a; + e;), ~(a; + 20;),...., -2048 <_ ~(a; + 126O;) < 2048a where e; = 2, 4,..., 62 and a; = 3O;/2 when O; is odd and a; = 30;/2 -1, otherwise. The same step size 0; is utilized for all transform coefficients in the MB, except in the intra mode, the sample 2 5 E;,k,o,o is quantized using a uniform scalar quantizer of step size e; = 8 with no dead-zone (also, since E;,k,o,o ? 0, only the positive reconstruction values are needed in this case). The decision regions of the quantizer are selected to improve the image quality as much as possible.
The output of the quantizer (716) is the quantized transform vector Y; _ [Yi,~,o,o~...., Yi,s,7,~1 = Ei + D;, where D; _ [D;,~,o,o,-..., Di,s,7,7] is the quantization error vector.
= 2154885 The quantized vector Y; is input into an inverse DCT unit (724) and is further transformed to generate the vector y; _ [Yi,~ ,o,o~....) yi,s,7,7] = A'~ [Yi]~ where A-~ [ ] is an inverse DCT. The pixels y;,k,m,n are determined according to:
Yi,k,m,n _= 0.25 ~~ o<_s,t<7 C(s) C(t) Y;,k,s,t cos[~(2m+1 )s/16]
cos[n(2n+1 )t/16].
The vector p; is input to a second adder (726) and is added to 1 0 y; to obtain the reconstructed vector z; _ [z;,~,o,o,...., zi,s,~,7] = Yi + p; , and the pixels z;,k,m,n are stored in the frame buffer (704).
The quantized transform coefficients Y;,k,s,t are typically encoded into a CBR bit stream, for example, by utilizing a variable-length encoder (720) with an output buffer (722) and then transmitted (or stored). First the coefficients in each block are converted from the 8x8 matrix format into a serial format using what is known as 'zig-zag scanning' (see FIG. 9), and then the coefficients in each block are represented by a sequence of 2 0 (Run, Level) values where "Run" represents the number of zeroes before the next non-zero value "Level." These (Run, Level) values are then encoded using a binary variable-length code. The output of the variable-length encoder is typically buffered (720) in order to generate a CBR bit stream, and the quantization step size is 2 5 adjusted by a quantizer control unit (718) to prevent buffer overflows (or underflows). The quantizer control unit (718) is operably coupled to the buffer (722) and provides an adjusting ' signal to the quantizer (716).
In addition to the quantized transform coefficients Y i,k,s,t, the encoder also transmits side information to allow the decoder to correctly reconstruct the coded signal. Side information includes the source format (CIF/QCIF), the quantizer step size Δ i, the inter/intra decision, the motion vector m i (in inter mode only) and the loop filter in/out decision (when a motion vector is present).
The step size Δ i can be kept fixed for an entire GOB. In that case, only one step size value per GOB needs to be transmitted as side information. It is also possible to change Δ i inside the GOB. This provides a finer adjustment of the step size at the expense of a larger overhead.
The H.261 encoder also transmits side information to allow the encoder to skip a block or a MB. For example, when all the coefficients Y i,k,s,t in an 8x8 block are zero, the encoder does not code these blocks at all. Similarly, when there is little motion, or when the motion estimation is nearly perfect, all coefficients in a MB may be zero. In that case, the encoder may skip the entire MB. When a block or a MB is skipped in the encoder, the decoder simply substitutes zeroes for the missing coefficients.
The maximum image rate in H.261 is approximately 30 images/sec, but the Recommendation allows the encoder to regularly skip 0, 1, 2 or 3 images to achieve effective image rates of 30, 15, 10 and 7.5 images/sec. The encoder may also skip an image occasionally. This can be used, for example, immediately after encoding an image in the intra mode. Since the intra mode typically generates a large number of bits, skipping an image can help reduce the buffering delay.
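The effective rates quoted above follow from dividing the maximum rate by one plus the number of regularly skipped images; a one-line check (function name illustrative):

```python
def effective_rate(skipped, base_rate=30.0):
    """Effective image rate when the encoder regularly skips `skipped`
    images after each coded image: base_rate / (skipped + 1)."""
    return base_rate / (skipped + 1)
```

Skipping 0, 1, 2 or 3 images gives 30, 15, 10 and 7.5 images/sec, as stated.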
The rate of a bit stream generated by an H.261 encoder can be changed using the transcoder of the present invention. A transcoder first decodes the received bit stream using a decoder for the variable-length code to obtain the sequence of (Run, Level) values, and then recovers the sequence of quantized vectors Y i = [Y i,1,0,0, ..., Y i,6,7,7] generated by the encoder. The decoder also recovers all side information. If a block or MB is not coded, the decoder inserts zeroes for the corresponding missing coefficients.
Typical subsequent operations of the one-step transcoder are described below (see FIG. 10, numeral 1000). A variable-length decoder (1002) outputs the vector Y i to a first adder (1004) and provides framing information, inter/intra information, information about which blocks are coded (the coded block pattern, or CBP), step size Δ i, motion vectors m i and loop filter information to various elements of the transcoder along a feedforward path.
The first adder (1004) combines the vector Y i and a modified quantization error vector S i to provide the vector E i' to a quantizer (1006) and to a second adder (1014). The quantizer (1006) is operably coupled to the first adder (1004) and provides a quantized output vector Y i' to a variable-length encoder (1010) and to the second adder (1014). The variable-length encoder (1010) is operably coupled to the quantizer (1006) and to receive information from the variable-length decoder (1002), and generates the output bits. The buffer (1012) is operably coupled to the variable-length encoder (1010), provides a means for storing output bits before transmission, and also provides an input to a quantizer control (1008). The quantizer control (1008) is operably coupled to the buffer (1012) and provides a control signal to the quantizer (1006) as described more fully above. The second adder (1014) is operably coupled to receive the vectors E i' and Y i' and provides D i' = Y i' - E i'. An inverse DCT unit (1016) is operably coupled to the second adder (1014) and provides an inverse transform vector d i'. The frame buffer (1018) is operably coupled to the inverse DCT unit (1016) and provides an output vector z i. A motion compensation unit (1020) is operably coupled to the frame buffer and to receive a motion vector from the variable-length decoder (1002), and is utilized to provide an output vector w i. The loop filter (1022) is operably coupled to the motion compensation unit (1020), receives in/out loop filter information from the variable-length decoder (1002), and outputs the predicted quantization error vector s i. A DCT unit (1024) is operably coupled to the loop filter (1022) and outputs the vector S i.
The above-cited vectors are further described as follows:
(1) No motion estimation is performed in the transcoder. Instead, the motion vector m i = (m i1, m i2) received from the encoder is used to determine the 16x16 window W i, and the pixels in the transformed quantization error buffer (see below) that lie in that window are used to form the vector w i = [w i,1,0,0, ..., w i,6,7,7]. Again, the motion vector m i represents the spatial offset between the window W i and the current (i'th) MB.
The predicted quantization error vector s i = [s i,1,0,0, ..., s i,6,7,7] is obtained from w i based on the encoding mode of the current MB:
a. In the intra mode, set s i = 0.
b. In the inter (predictive) mode:
b1. If the loop filter is out, set s i = w i.
b2. If the loop filter is in, filter the elements of w i (block-by-block) using a separable, two-dimensional 3-tap FIR filter, and set the output of the filter equal to s i.
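A sketch of the block-by-block separable 3-tap filtering in step b2. The tap weights (1/4, 1/2, 1/4), the rounding, and the choice to leave block-edge samples unfiltered are assumptions (they match the common H.261 loop filter, but are not stated in the text above):

```python
def loop_filter_1d(row):
    """Apply a 3-tap FIR filter (assumed taps 1/4, 1/2, 1/4) to one
    8-sample line, leaving the first and last samples unfiltered."""
    out = list(row)
    for j in range(1, len(row) - 1):
        out[j] = (row[j - 1] + 2 * row[j] + row[j + 1] + 2) // 4  # rounded
    return out

def loop_filter_8x8(block):
    """Separable 2-D filter: filter every row, then every column."""
    rows = [loop_filter_1d(r) for r in block]
    cols = [loop_filter_1d(c) for c in map(list, zip(*rows))]  # transpose
    return list(map(list, zip(*cols)))                         # transpose back
```

Because the taps sum to one, a constant block passes through unchanged, which is a quick sanity check on the implementation.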
(2) The vector s i is transformed again to obtain the modified quantization error vector S i = A[s i], where A[ ] represents the DCT operation, according to:

S i,k,s,t = 0.25 C(s) C(t) ΣΣ 0≤m,n≤7 s i,k,m,n cos[π(2m+1)s/16] cos[π(2n+1)t/16].
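The forward DCT in step (2) can be written directly from the formula; as with the inverse transform, the normalization C(0) = 1/√2, C(u) = 1 otherwise is an assumption:

```python
import math

def dct_8x8(s):
    """8x8 forward DCT per S[u][v] = 0.25 * C(u) C(v)
    * sum_{m,n} s[m][n] cos(pi*(2m+1)*u/16) cos(pi*(2n+1)*v/16),
    with the assumed normalization C(0) = 1/sqrt(2), C(u) = 1 otherwise."""
    C = lambda u: 1 / math.sqrt(2) if u == 0 else 1.0
    S = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            S[u][v] = 0.25 * C(u) * C(v) * sum(
                s[m][n]
                * math.cos(math.pi * (2 * m + 1) * u / 16)
                * math.cos(math.pi * (2 * n + 1) * v / 16)
                for m in range(8) for n in range(8))
    return S
```

A constant block of 1.0 transforms to a single DC coefficient of 8.0, the mirror image of the inverse-DCT example earlier.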
(3) The modified quantization error vector S i = [S i,1,0,0, ..., S i,6,7,7] is subtracted from the coded vector Y i to obtain the error vector E i' = [E' i,1,0,0, ..., E' i,6,7,7] = Y i - S i.
(4) The coefficients E' i,k,s,t are (re)quantized using an H.261 quantizer as in the encoder, possibly with a different step size Δ i', except for the term E' i,k,0,0 in the intra mode, for which the same step size is used: Δ i = Δ i' = 8. The result is the transcoded transform vector Y i' = [Y' i,1,0,0, ..., Y' i,6,7,7] = E i' + D i', where D i' = [D' i,1,0,0, ..., D' i,6,7,7] is the quantization noise vector for the transcoder.
(5) The error vector E i' is subtracted from Y i' to obtain the quantization error vector D i', and then D i' is inverse transformed using an inverse DCT to obtain d i' = A-1[D i']:

d' i,k,m,n = 0.25 ΣΣ 0≤s,t≤7 C(s) C(t) D' i,k,s,t cos[π(2m+1)s/16] cos[π(2n+1)t/16].
The pixels d' i,k,m,n are stored in a reconstructed quantization error frame buffer for future iterations.
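Steps (3)-(5) can be condensed into a short DCT-domain sketch. Here a simple mid-tread rounding quantizer stands in for the H.261 quantizer, and the vectors are flat lists of coefficients; all names are illustrative:

```python
def requantize_step(Y, S, step):
    """One-step transcoder core, steps (3)-(5), in the DCT domain.

    Y    : coded coefficient vector from the encoder
    S    : modified quantization error vector (DCT of the prediction s_i)
    step : new step size Delta_i' for requantization
    Returns (Y', D') where Y' is the transcoded vector and D' = Y' - E'
    is the transcoder quantization error, which would then be inverse
    transformed and stored for prediction.  The simple rounding
    quantizer below is a stand-in for the H.261 quantizer."""
    E = [y - s for y, s in zip(Y, S)]            # step (3): E' = Y - S
    Yp = [step * round(e / step) for e in E]     # step (4): requantize E'
    Dp = [yp - e for yp, e in zip(Yp, E)]        # step (5): D' = Y' - E'
    return Yp, Dp
```

For instance, with Y = [13, -7], S = [1, 0] and step size 8, the error vector is E' = [12, -7], the requantized vector is Y' = [16, -8], and the transcoder error is D' = [4, -1].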
In the following example, the transcoder uses the motion vector m i, the inter/intra decision and the loop filter decision received from the H.261 encoder without any modifications. The source format (CIF/QCIF) is also not modified. This greatly simplifies the implementation of the transcoder.
The transcoder converts the transcoded sequence Y;' into a bit stream using a noiseless source encoder as in the H.261 encoder.
The bit stream generated by the transcoder is typically decoded by an H.261 decoder, as shown in FIG. 11, numeral 1100. Here, after recovering the sequence Y i' from the received bit stream as usual, the decoder reconstructs the signal z i' = [z' i,1,0,0, ..., z' i,6,7,7] = r i + d i + d i', as follows:
(1) The received vector Y i', typically output by a variable-length decoder (1102), is first transformed using an inverse DCT unit (1104) that is operably coupled to the variable-length decoder (1102), wherein the inverse DCT unit (1104) determines an inverse DCT to obtain the vector y i' = [y' i,1,0,0, ..., y' i,6,7,7]:

y' i,k,m,n = 0.25 ΣΣ 0≤s,t≤7 C(s) C(t) Y' i,k,s,t cos[π(2m+1)s/16] cos[π(2n+1)t/16].
An adder (1106) is operably coupled to the inverse DCT unit (1104) and to a loop filter (1108) to combine y i' and p i' and generate the output vector z i'. Vector z i' is an output of the H.261 decoder and is also utilized as a feedback vector to a frame buffer (1112).
(2) A motion compensation unit (1110) is operably coupled to receive the output of the frame buffer (1112) and a motion vector m i = (m i1, m i2) from the encoder that is used to determine a 16x16 window W i. The pixels in the reconstruction buffer that lie in that window are used to form the vector u i' = [u' i,1,0,0, ..., u' i,6,7,7].
A loop filter (1108) is operably coupled to the motion compensation unit (1110) for providing the vector p i' to the adder (1106). The prediction value p i' = [p' i,1,0,0, ..., p' i,6,7,7] is obtained from u i' based on the encoding mode of the current MB:
a. In the intra mode, set p i' = 0.
b. In the predictive (inter) mode:
b1. If the loop filter (1108) is out, set p i' = u i'.

b2. If the loop filter (1108) is in, filter the elements of u i' (block-by-block) using a separable, two-dimensional 3-tap FIR filter, and set the output of the filter equal to p i'.
(3) The vector p i' is added to y i' to obtain the reconstructed vector z i' = y i' + p i'.
The overall quantization error is then the sum of the quantization errors d i and d i' introduced by the encoder and the transcoder, respectively.
FIG. 12, numeral 1200, is a flow chart of one embodiment of the steps in accordance with the method of the present invention. The method provides for transcoding a sequence of quantized vectors Y i generated by a predictive waveform encoder utilizing modified quantization error vectors S i. The vectors cited below are described with greater particularity above.
The method comprises the steps of: (1) generating a difference vector E i' = Y i - S i between at least a first quantized vector Y i and a modified quantization error vector S i (1202); (2) quantizing the difference vector E i' to obtain a transcoded vector Y i' (1204); (3) receiving at least two of the vectors Y i, E i' and Y i' and generating the inverse-transformed error vector d i' = A i-1[D i'], where D i' is a quantization error vector and A i-1 is an inverse transformation (1206); and (4) generating the modified quantization error vector S i based on past values of the vector d i' (1208).
As described for the device of the present invention, the transformation A i-1 may be selected to be constant, the quantized vector Y i may have dimensionality L = 1, and the inverse transformation A i-1[ ] may be an identity matrix.
The step of generating the modified quantization error vector S i based on past values of the vector d i' may be selected to include: (1) generating a predicted quantization error vector s i, and (2) utilizing s i to generate a modified quantization error vector S i such that S i = A i[s i], where A i is a predetermined transformation. In addition, this step may include one of (1)-(2):
(1) utilizing received vectors Y i' and E i' for determining a quantization error D i' = Y i' - E i', and utilizing an inverse transformation A i-1 to generate the vector d i' according to d i' = A i-1[D i'], and (2) receiving and utilizing vectors Y i and Y i' for determining an error X i' = Y i' - Y i, utilizing the inverse transformation A i-1 for generating a vector x i according to x i = A i-1[X i'], and adding x i and s i to obtain the vector d i'.
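When A i-1 is linear, the two alternatives compute the same d i', since D i' = Y i' - E i' = (Y i' - Y i) + S i = X i' + S i, so A i-1[D i'] = A i-1[X i'] + s i. A scalar sketch of both calculators (taking L = 1 and the identity inverse transformation, as the method permits; function names are illustrative):

```python
# Scalar (L = 1) sketches of the two quantization-error-vector
# calculators, with the identity inverse transformation A^-1.

def qevc_from_error(Yp, Ep):
    """Variant (1): d' = A^-1[D'] with D' = Y' - E' (A^-1 = identity here)."""
    return Yp - Ep

def qevc_from_quantized(Yp, Y, s):
    """Variant (2): x = A^-1[Y' - Y], then d' = x + s."""
    return (Yp - Y) + s
```

For example, with Y = 10, predicted error s = 2 (so S = 2 and E' = 8) and a requantized value Y' = 9, variant (1) gives 9 - 8 = 1 and variant (2) gives (9 - 10) + 2 = 1: the same transcoder error.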
The quantized vector Y i may be generated by a predictive digital video encoder and represents quantized transform coefficients of a macro block (MB), a fundamental coding element in H.261. A motion-compensated prediction (MCP) may be utilized for prediction. Transformation A[ ] is a Discrete Cosine Transform (DCT).
In one embodiment, the video encoder may be selected to be an H.261 encoder.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (36)
THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A device for transcoding a sequence of quantized vectors Y i generated by a predictive waveform encoder utilizing modified quantization error vectors S i, comprising:
(A) an adder, operably coupled to receive at least a first quantized vector Y i and at least a first modified quantization error vector S i, for generating a difference vector E i' = Y i - S i between the quantized vector Y i and the modified quantization error vector S i;
(B) a quantizer, operably coupled to the adder, for quantizing the difference vector E i' to obtain a transcoded vector Y i ';
(C) a quantization error vector calculator (QEVC), operably coupled to receive at least two of the vectors Y i, E i' and Y i', for generating an inverse-transformed error vector d i' = A i-1[D i'], where D i' is a quantization error vector and A i-1 is an inverse transformation; and (D) modifying circuitry, operably coupled to the QEVC, for generating a modified quantization error vector S i based on past values of the vector d i', where S i is input to the adder.
2. The device of claim 1, wherein said inverse transformation A i-1 is constant.
3. The device of claim 2, where the inverse transformation A i-1 is linear.
4. The device of claim 3, wherein said quantized vector Y i has dimensionality L=1.
5. The device of claim 4, wherein said inverse transformation A i-1 is an identity matrix.
6. The device of claim 3, wherein said modifying circuitry includes:
a predictor, operably coupled to the quantization error vector calculator, for generating a predicted quantization error vector s i, and a transformation unit, operably coupled to the predictor, for utilizing a forward transformation A i[ ] for generating the modified quantization error vector S i.
7. The device of claim 6, wherein said quantization error vector calculator includes:
a second adder, operably coupled to receive Y i' and E i', for determining the quantization error D i' = Y i' - E i', and an inverse transformation unit, operably coupled to receive D i', for utilizing an inverse transformation A i-1 to generate the vector d i' according to d i' = A i-1[D i'].
8. The device of claim 6, wherein said quantization error vector calculator includes:
a second adder, operably coupled to receive Y i and Y i', for determining an error X i' = Y i' - Y i, an inverse transformation A i-1 unit, operably coupled to the second adder, for generating a vector x i according to x i = A i-1[X i'], and a third adder, operably coupled to receive the vector x i from the inverse transformation unit and to receive a feedback signal s i from the predictor, for adding x i and s i to obtain the vector d i'.
9. The device of claim 6 wherein the quantized vector Y i is generated by a predictive digital video encoder.
10. The device of claim 6 wherein said quantized vector Y i represents quantized transform coefficients of a macro block (MB).
11. The device of claim 9, wherein said predictor uses motion-compensated prediction (MCP) and transformation A[ ] is a Discrete Cosine Transform (DCT) and transformation A-1 is an inverse Discrete Cosine Transform (IDCT).
12. The device of claim 11, wherein said video encoder is an H.261 encoder.
13. A method for transcoding a sequence of quantized vectors Y i generated by a predictive waveform encoder utilizing modified quantization error vectors S i, comprising the steps of:
(A) generating a difference vector E i' = Y i - S i between at least a first quantized vector Y i and a modified quantization error vector S i;
(B) quantizing the difference vector E i ' to obtain a transcoded vector Y i ';
(C) receiving at least two of the vectors Y i, E i ' and Y i ' and generating an inverse-transformed error vector d i'=A i-1 [D i'], where D i' is a quantization error vector and A i-1 is an inverse transformation; and (D) generating the modified quantization error vector S i based on past values of the vector d i'.
14. The method of claim 13, wherein said inverse transformation A i-1 is constant.
15. The method of claim 14, where the inverse transformation A i-1 is linear.
16. The method of claim 15 wherein said quantized vector Y i has dimensionality L=1.
17. The method of claim 16, wherein said inverse transformation A i-1 is an identity matrix.
18. The method of claim 15 wherein the step of generating the modified quantization error vector S i based on past values of the vector d i' includes:
generating a predicted quantization error vector s i, and utilizing s i to generate the modified quantization error vector S i such that S i =A i [s i] where A i is a predetermined transformation.
19. The method of claim 18, further including:
utilizing received vectors Y i' and E i', for determining a quantization error D i' = Y i' - E i', and utilizing an inverse transformation A i-1 to generate the vector d i' according to d i' = A i-1[D i'].
20. The method of claim 18, further including:
receiving and utilizing vectors Y i and Y i' for determining an error X i'=Y i '-Y i, utilizing inverse transformation A i-1 for generating a vector x i according to x i =A i-1 [X i'], and adding x i and s i to obtain the vector d i'.
21. The method of claim 18 wherein the quantized vector Y i is generated by a predictive digital video encoder.
22. The method of claim 18 wherein said quantized vector Y i represents quantized transform coefficients of a macro block (MB), a fundamental coding element in H.261.
23. The method of claim 22, further including utilizing motion-compensated prediction (MCP), and wherein transformation A[] is a Discrete Cosine Transform (DCT) and transformation A-1 is an inverse Discrete Cosine Transform (IDCT).
24. The method of claim 21, wherein said video encoder is an H.261 encoder.
25. A transcoding device for converting a sequence of quantized vectors Y i generated by a predictive waveform encoder utilizing modified quantization error vectors S i, comprising:
(A) an adder, operably coupled to receive at least a first quantized vector Y i and at least a first modified quantization error vector S i, for generating a difference vector E i' = Y i - S i between the quantized vector Y i and the modified quantization error vector S i;
(B) a generalized quantizer, operably coupled to the adder and having a quantization error vector calculator, for quantizing the difference vector E i' to obtain a transcoded vector Y i', and receiving at least one of the vectors Y i and E i', for generating an inverse-transformed error vector d i' = A i-1[D i'], where D i' is a quantization error vector and A i-1 is an inverse transformation; and (C) modifying circuitry, operably coupling the quantization error vector calculator to the adder, for generating the modified quantization error vector S i based on past values of the vector d i'.
26. The device of claim 25, wherein said inverse transformation A i-1 is constant.
27. The device of claim 26, where the inverse transformation A i-1 is linear.
28. The device of claim 27, wherein said quantized vector Y i has dimensionality L=1.
29. The device of claim 28, wherein said inverse transformation A i-1 is an identity matrix.
30. The device of claim 27, wherein said modifying circuitry includes:
a predictor, operably coupled to the generalized quantizer, for generating a predicted quantization error vector s i, and a transformation unit, operably coupled to the predictor, for utilizing a forward transformation A i [] for generating the modified quantization error vector S i.
31. The device of claim 30, wherein said generalized quantizer includes:
a quantizer, operably coupled to the adder, for quantizing an input vector E i' to provide Y i', a second adder, operably coupled to receive E i' and Y i', for determining the quantization error D i' = Y i' - E i', where Y i' represents a quantized vector for the input E i', and an inverse transformation unit, operably coupled to receive D i', for utilizing an inverse transformation A i-1 to generate the vector d i' according to d i' = A i-1[D i'].
32. The device of claim 30, wherein said quantization error vector calculator includes:
a second adder, operably coupled to receive Y i and Y i', for determining an error X i' = Y i' - Y i, an inverse transformation A i-1 unit, operably coupled to the second adder, for generating a vector x i according to x i = A i-1[X i'], and a third adder, operably coupled to receive the vector x i from the inverse transformation unit and to receive a feedback signal s i from the predictor, for adding x i and s i to obtain the vector d i'.
33. The device of claim 30 wherein the quantized vector Y i is generated by a predictive digital video encoder.
34. The device of claim 30 wherein said quantized vector Y i represents quantized transform coefficients of a macro block (MB).
35. The device of claim 30, wherein said predictor uses motion-compensated prediction (MCP) and transformation A[] is a Discrete Cosine Transform (DCT) and transformation A-1 is an inverse Discrete Cosine Transform (IDCT).
36. The device of claim 33, wherein said video encoder is an H.261 encoder.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/179,020 US5537440A (en) | 1994-01-07 | 1994-01-07 | Efficient transcoding device and method |
US08/179,020 | 1994-01-07 | ||
PCT/US1994/013840 WO1995019072A1 (en) | 1994-01-07 | 1994-12-01 | Efficient transcoding device and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2154885A1 CA2154885A1 (en) | 1995-07-13 |
CA2154885C true CA2154885C (en) | 1999-07-20 |
Family
ID=22654894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002154885A Expired - Fee Related CA2154885C (en) | 1994-01-07 | 1994-12-01 | Efficient transcoding device and method |
Country Status (6)
Country | Link |
---|---|
US (1) | US5537440A (en) |
EP (1) | EP0691054B1 (en) |
JP (1) | JPH08507669A (en) |
CA (1) | CA2154885C (en) |
DE (1) | DE69432142T2 (en) |
WO (1) | WO1995019072A1 (en) |
Families Citing this family (94)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5835532A (en) * | 1994-03-21 | 1998-11-10 | Rca Thomson Licensing Corporation | Blind equalizer for a vestigial sideband signal |
US5781237A (en) * | 1994-03-25 | 1998-07-14 | Matsushita Electric Industrial Co., Ltd. | Video coding apparatus and video coding method |
US5715009A (en) | 1994-03-29 | 1998-02-03 | Sony Corporation | Picture signal transmitting method and apparatus |
JP3013698B2 (en) * | 1994-04-20 | 2000-02-28 | 松下電器産業株式会社 | Vector quantization encoding device and decoding device |
US5940130A (en) * | 1994-04-21 | 1999-08-17 | British Telecommunications Public Limited Company | Video transcoder with by-pass transfer of extracted motion compensation data |
SG43051A1 (en) * | 1994-04-21 | 1997-10-17 | British Telecomm | A transcoder |
DE4416967A1 (en) * | 1994-05-13 | 1995-11-16 | Thomson Brandt Gmbh | Method and device for transcoding bit streams with video data |
US5828421A (en) * | 1994-10-11 | 1998-10-27 | Hitachi America, Ltd. | Implementation efficient digital picture-in-picture decoding methods and apparatus |
US5825970A (en) * | 1994-12-20 | 1998-10-20 | Lg Electronics Inc. | Quantization number selecting apparatus for DVCR and method therefor |
EP0755610B1 (en) * | 1995-02-15 | 2000-04-12 | Koninklijke Philips Electronics N.V. | Method and device for transcoding video signals |
JP3418485B2 (en) * | 1995-09-14 | 2003-06-23 | シャープ株式会社 | Image data storage device |
JP3788823B2 (en) * | 1995-10-27 | 2006-06-21 | 株式会社東芝 | Moving picture encoding apparatus and moving picture decoding apparatus |
JPH1051766A (en) | 1996-08-05 | 1998-02-20 | Mitsubishi Electric Corp | Image coding data converter |
US6038256A (en) * | 1996-12-31 | 2000-03-14 | C-Cube Microsystems Inc. | Statistical multiplexed video encoding using pre-encoding a priori statistics and a priori and a posteriori statistics |
US5870146A (en) * | 1997-01-21 | 1999-02-09 | Multilink, Incorporated | Device and method for digital video transcoding |
SE9703849L (en) * | 1997-03-14 | 1998-09-15 | Ericsson Telefon Ab L M | Scaling down images |
CN1110963C (en) * | 1997-03-26 | 2003-06-04 | 松下电器产业株式会社 | Image decoding device |
WO1999005870A2 (en) * | 1997-07-22 | 1999-02-04 | Koninklijke Philips Electronics N.V. | Method of switching between video sequences and corresponding device |
US6507672B1 (en) * | 1997-09-10 | 2003-01-14 | Lsi Logic Corporation | Video encoder for digital video displays |
US6731811B1 (en) * | 1997-12-19 | 2004-05-04 | Voicecraft, Inc. | Scalable predictive coding method and apparatus |
JPH11275592A (en) * | 1998-01-22 | 1999-10-08 | Victor Co Of Japan Ltd | Moving image code stream converter and its method |
US6243495B1 (en) * | 1998-02-13 | 2001-06-05 | Grass Valley (Us) Inc. | Method a group of picture structure in MPEG video |
US6058143A (en) * | 1998-02-20 | 2000-05-02 | Thomson Licensing S.A. | Motion vector extrapolation for transcoding video sequences |
CA2265089C (en) * | 1998-03-10 | 2007-07-10 | Sony Corporation | Transcoding system using encoding history information |
CN1179574C (en) * | 1998-03-31 | 2004-12-08 | 皇家菲利浦电子有限公司 | Method and device for modifying data in encoded date stream |
US6215824B1 (en) * | 1998-05-01 | 2001-04-10 | Boom Corporation | Transcoding method for digital video networking |
US6226328B1 (en) * | 1998-05-01 | 2001-05-01 | Boom Corporation | Transcoding apparatus for digital video networking |
US6477706B1 (en) | 1998-05-01 | 2002-11-05 | Cogent Technology, Inc. | Cable television system using transcoding method |
KR100304103B1 (en) * | 1998-05-29 | 2001-09-24 | 이계철 | Method for finding re-quantization step sizes resulting in abrupt bit-rate reduction and rate control method using it |
SE9802286L (en) | 1998-06-26 | 1999-12-27 | Ericsson Telefon Ab L M | Effective downscaling of DCT compressed images |
US6483543B1 (en) | 1998-07-27 | 2002-11-19 | Cisco Technology, Inc. | System and method for transcoding multiple channels of compressed video streams using a self-contained data unit |
JP2000059790A (en) * | 1998-08-05 | 2000-02-25 | Victor Co Of Japan Ltd | Dynamic image code string converter and method therefor |
US6310915B1 (en) * | 1998-11-20 | 2001-10-30 | Harmonic Inc. | Video transcoder with bitstream look ahead for rate control and statistical multiplexing |
KR100312421B1 (en) * | 1998-11-25 | 2001-12-12 | 오길록 | A conversion method of the compressed moving video on the video communication system |
US6618442B1 (en) * | 1998-12-29 | 2003-09-09 | Intel Corporation | Method and apparatus for transcoding digital video signals |
ES2569491T3 (en) | 1999-02-09 | 2016-05-11 | Sony Corporation | Coding system and associated method |
EP1032212A3 (en) * | 1999-02-23 | 2004-04-28 | Matsushita Electric Industrial Co., Ltd. | Transcoder, transcoding system, and recording medium |
CN1204751C (en) * | 1999-04-13 | 2005-06-01 | 松下电器产业株式会社 | Coded data converting method, recoding method, recoding system and data recording medium |
US6549147B1 (en) * | 1999-05-21 | 2003-04-15 | Nippon Telegraph And Telephone Corporation | Methods, apparatuses and recorded medium for reversible encoding and decoding |
EP1125439A1 (en) | 1999-07-20 | 2001-08-22 | Koninklijke Philips Electronics N.V. | Drift-free transcoder and related method |
US6401132B1 (en) | 1999-08-03 | 2002-06-04 | International Business Machines Corporation | Subchaining transcoders in a transcoding framework |
US6441754B1 (en) * | 1999-08-17 | 2002-08-27 | General Instrument Corporation | Apparatus and methods for transcoder-based adaptive quantization |
KR100634660B1 (en) * | 1999-09-13 | 2006-10-16 | 마츠시타 덴끼 산교 가부시키가이샤 | Apparatus and method for encoding |
DE19946267C2 (en) * | 1999-09-27 | 2002-09-26 | Harman Becker Automotive Sys | Digital transcoding system |
JP2003520512A (en) * | 2000-01-14 | 2003-07-02 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Transmission encoding method and transmission encoding device |
CN1593062A (en) * | 2000-02-04 | 2005-03-09 | 皇家菲利浦电子有限公司 | Quantization method for bit rate transcoding applications |
US6711212B1 (en) | 2000-09-22 | 2004-03-23 | Industrial Technology Research Institute | Video transcoder, video transcoding method, and video communication system and method using video transcoding with dynamic sub-window skipping |
KR20020064963A (en) * | 2000-10-24 | 2002-08-10 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Method of transcoding and transcoding device with embedded filters |
US6650707B2 (en) | 2001-03-02 | 2003-11-18 | Industrial Technology Research Institute | Transcoding apparatus and method |
US8391482B2 (en) * | 2001-05-04 | 2013-03-05 | Hewlett-Packard Development Company, L.P. | Signal format that facilitates easy scalability of data streams |
US20030007784A1 (en) * | 2001-06-20 | 2003-01-09 | Loui Alexander C. | System and method for authoring a multimedia enabled disc |
US7236529B2 (en) * | 2001-10-30 | 2007-06-26 | Industrial Technology Research Institute | Methods and systems for video transcoding in DCT domain with low complexity |
DE10300048B4 (en) * | 2002-01-05 | 2005-05-12 | Samsung Electronics Co., Ltd., Suwon | Image coding method for motion picture expert groups, involves image quantizing data in accordance with quantization parameter, and coding entropy of quantized image data using entropy coding unit |
US7236521B2 (en) * | 2002-03-27 | 2007-06-26 | Scientific-Atlanta, Inc. | Digital stream transcoder |
US7190723B2 (en) * | 2002-03-27 | 2007-03-13 | Scientific-Atlanta, Inc. | Digital stream transcoder with a hybrid-rate controller |
US7469012B2 (en) * | 2002-05-14 | 2008-12-23 | Broadcom Corporation | System and method for transcoding entropy-coded bitstreams |
JP4196726B2 (en) * | 2003-05-14 | 2008-12-17 | ソニー株式会社 | Image processing apparatus, image processing method, recording medium, and program |
TWI230547B (en) * | 2004-02-04 | 2005-04-01 | Ind Tech Res Inst | Low-complexity spatial downscaling video transcoder and method thereof |
US20050232497A1 (en) * | 2004-04-15 | 2005-10-20 | Microsoft Corporation | High-fidelity transcoding |
US7692683B2 (en) * | 2004-10-15 | 2010-04-06 | Lifesize Communications, Inc. | Video conferencing system transcoder |
US20060248210A1 (en) * | 2005-05-02 | 2006-11-02 | Lifesize Communications, Inc. | Controlling video display mode in a video conferencing system |
US8422546B2 (en) * | 2005-05-25 | 2013-04-16 | Microsoft Corporation | Adaptive video encoding using a perceptual model |
US7818632B2 (en) * | 2005-06-30 | 2010-10-19 | Motorola Mobility, Inc. | Code-word list algorithm |
US20070237237A1 (en) * | 2006-04-07 | 2007-10-11 | Microsoft Corporation | Gradient slope detection for video compression |
US8059721B2 (en) | 2006-04-07 | 2011-11-15 | Microsoft Corporation | Estimating sample-domain distortion in the transform domain with rounding compensation |
US8503536B2 (en) | 2006-04-07 | 2013-08-06 | Microsoft Corporation | Quantization adjustments for DC shift artifacts |
US7995649B2 (en) | 2006-04-07 | 2011-08-09 | Microsoft Corporation | Quantization adjustment based on texture level |
US8711925B2 (en) | 2006-05-05 | 2014-04-29 | Microsoft Corporation | Flexible quantization |
US20070286277A1 (en) * | 2006-06-13 | 2007-12-13 | Chen Xuemin Sherman | Method and system for video compression using an iterative encoding algorithm |
US8238424B2 (en) | 2007-02-09 | 2012-08-07 | Microsoft Corporation | Complexity-based adaptive preprocessing for multiple-pass video compression |
US20080240257A1 (en) * | 2007-03-26 | 2008-10-02 | Microsoft Corporation | Using quantization bias that accounts for relations between transform bins and quantization bins |
US8498335B2 (en) * | 2007-03-26 | 2013-07-30 | Microsoft Corporation | Adaptive deadzone size adjustment in quantization |
US8243797B2 (en) | 2007-03-30 | 2012-08-14 | Microsoft Corporation | Regions of interest for quality adjustments |
US8442337B2 (en) | 2007-04-18 | 2013-05-14 | Microsoft Corporation | Encoding adjustments for animation content |
US8331438B2 (en) * | 2007-06-05 | 2012-12-11 | Microsoft Corporation | Adaptive selection of picture-level quantization parameters for predicted video pictures |
US8319814B2 (en) * | 2007-06-22 | 2012-11-27 | Lifesize Communications, Inc. | Video conferencing system which allows endpoints to perform continuous presence layout selection |
US8139100B2 (en) * | 2007-07-13 | 2012-03-20 | Lifesize Communications, Inc. | Virtual multiway scaler compensation |
US8457958B2 (en) * | 2007-11-09 | 2013-06-04 | Microsoft Corporation | Audio transcoder using encoder-generated side information to transcode to target bit-rate |
US8150187B1 (en) * | 2007-11-29 | 2012-04-03 | Lsi Corporation | Baseband signal quantizer estimation |
US8189933B2 (en) | 2008-03-31 | 2012-05-29 | Microsoft Corporation | Classifying and controlling encoding quality for textured, dark smooth and smooth video content |
US8897359B2 (en) | 2008-06-03 | 2014-11-25 | Microsoft Corporation | Adaptive quantization for enhancement layer video coding |
US8326075B2 (en) | 2008-09-11 | 2012-12-04 | Google Inc. | System and method for video encoding using adaptive loop filter |
US8514265B2 (en) * | 2008-10-02 | 2013-08-20 | Lifesize Communications, Inc. | Systems and methods for selecting videoconferencing endpoints for display in a composite video image |
US20100110160A1 (en) * | 2008-10-30 | 2010-05-06 | Brandt Matthew K | Videoconferencing Community with Live Images |
US8396114B2 (en) * | 2009-01-29 | 2013-03-12 | Microsoft Corporation | Multiple bit rate video encoding using variable bit rate and dynamic resolution for adaptive video streaming |
US8311115B2 (en) * | 2009-01-29 | 2012-11-13 | Microsoft Corporation | Video encoding using previously calculated motion information |
US8643695B2 (en) * | 2009-03-04 | 2014-02-04 | Lifesize Communications, Inc. | Videoconferencing endpoint extension |
US8456510B2 (en) * | 2009-03-04 | 2013-06-04 | Lifesize Communications, Inc. | Virtual distributed multipoint control unit |
US8270473B2 (en) * | 2009-06-12 | 2012-09-18 | Microsoft Corporation | Motion based dynamic resolution multiple bit rate video encoding |
US8350891B2 (en) * | 2009-11-16 | 2013-01-08 | Lifesize Communications, Inc. | Determining a videoconference layout based on numbers of participants |
US8705616B2 (en) | 2010-06-11 | 2014-04-22 | Microsoft Corporation | Parallel multiple bitrate video encoding to reduce latency and dependences between groups of pictures |
US9591318B2 (en) | 2011-09-16 | 2017-03-07 | Microsoft Technology Licensing, Llc | Multi-layer encoding and decoding |
US11089343B2 (en) | 2012-01-11 | 2021-08-10 | Microsoft Technology Licensing, Llc | Capability advertisement, configuration and control for video coding and decoding |
US9635334B2 (en) * | 2012-12-03 | 2017-04-25 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Audio and video management for parallel transcoding |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NL8005950A (en) * | 1980-10-30 | 1982-05-17 | Philips Nv | DIFFERENTIAL PULSE CODE MODULATION TRANSFER SYSTEM. |
JPH01238229A (en) * | 1988-03-17 | 1989-09-22 | Sony Corp | Digital signal processor |
US4972260A (en) * | 1988-08-22 | 1990-11-20 | Matsushita Electric Industrial Co., Ltd. | Apparatus for coding a moving-picture signal |
JPH0828875B2 (en) * | 1989-08-21 | 1996-03-21 | 三菱電機株式会社 | Encoding device and decoding device |
JPH0472909A (en) * | 1990-07-13 | 1992-03-06 | Sony Corp | Quantization error reduction device for audio signal |
JP2753126B2 (en) * | 1990-09-20 | 1998-05-18 | 株式会社東芝 | Digital sigma-delta modulator |
GB9022326D0 (en) * | 1990-10-15 | 1990-11-28 | British Telecomm | Signal coding |
JPH05167998A (en) * | 1991-12-16 | 1993-07-02 | Nippon Telegr & Teleph Corp <Ntt> | Image-encoding controlling method |
- 1994-01-07 US US08/179,020 patent/US5537440A/en not_active Expired - Lifetime
- 1994-12-01 WO PCT/US1994/013840 patent/WO1995019072A1/en active IP Right Grant
- 1994-12-01 JP JP7518478A patent/JPH08507669A/en active Pending
- 1994-12-01 DE DE69432142T patent/DE69432142T2/en not_active Expired - Fee Related
- 1994-12-01 CA CA002154885A patent/CA2154885C/en not_active Expired - Fee Related
- 1994-12-01 EP EP95904792A patent/EP0691054B1/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
EP0691054A4 (en) | 1999-04-14 |
EP0691054B1 (en) | 2003-02-19 |
DE69432142D1 (en) | 2003-03-27 |
WO1995019072A1 (en) | 1995-07-13 |
JPH08507669A (en) | 1996-08-13 |
DE69432142T2 (en) | 2003-07-24 |
CA2154885A1 (en) | 1995-07-13 |
EP0691054A1 (en) | 1996-01-10 |
US5537440A (en) | 1996-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2154885C (en) | | Efficient transcoding device and method |
US5870146A (en) | Device and method for digital video transcoding | |
Bjork et al. | Transcoder architectures for video coding | |
US7813427B2 (en) | Method and apparatus for accomplishing multiple description coding for video | |
US6426974B2 (en) | Image conversion apparatus for transforming compressed image data of different resolutions wherein side information is scaled | |
US7881370B2 (en) | Method of selecting among n spatial video CODECs the optimum CODEC for a same input signal | |
US5500678A (en) | Optimized scanning of transform coefficients in video coding | |
AU685444B2 (en) | A transcoder | |
AU691268B2 (en) | Image coded data re-encoding apparatus | |
US7058127B2 (en) | Method and system for video transcoding | |
US6526099B1 (en) | Transcoder | |
JP3093458B2 (en) | Variable rate codec / decoder | |
Assuncao et al. | Transcoding of single-layer MPEG video into lower rates | |
JPH10136386A (en) | Coded video signal processing system | |
JP2000013799A (en) | Device and method for motion compensation encoding and decoding | |
WO2007109993A1 (en) | Error control system, method, encoder and decoder for video coding | |
CA2187793C (en) | A transcoder | |
JPH08111870A (en) | Method and device for re-coding image information | |
Sun et al. | The improved SP frame coding technique for the JVT standard | |
Assuncao et al. | Rate-reduction techniques for MPEG-2 video bit streams | |
JP2971094B2 (en) | Image coding device | |
KR100386194B1 (en) | Apparatus and method for image improvement by DC value additional compensation of quantization error in image compression | |
Mersereau et al. | Methods for low bit-rate video compression: some issues and answers | |
KR100386374B1 (en) | A transcoder | |
Kodavalla et al. | Chroma coding in distributed video coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| EEER | Examination request | |
| MKLA | Lapsed | |