CA2265089C - Transcoding system using encoding history information - Google Patents

Transcoding system using encoding history information

Info

Publication number
CA2265089C
Authority
CA
Canada
Prior art keywords
encoding
past
encoded video
video stream
present
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002265089A
Other languages
French (fr)
Other versions
CA2265089A1 (en)
Inventor
Katsumi Tahara
Yoshihiro Murakami
Takuya Kitamura
Kanji Mihara
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CA2265089A1 publication Critical patent/CA2265089A1/en
Application granted granted Critical
Publication of CA2265089C publication Critical patent/CA2265089C/en
Anticipated expiration
Current status: Expired - Fee Related

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/107 — Using adaptive coding: selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/115 — Using adaptive coding: selection of the code volume for a coding unit prior to coding
    • H04N19/152 — Data rate or code amount at the encoder output, controlled by measuring the fullness of the transmission buffer
    • H04N19/40 — Using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/428 — Memory arrangements: recompression, e.g. by spatial or temporal decimation
    • H04N19/61 — Transform coding in combination with predictive coding

Abstract

The present invention provides a transcoder for changing the GOP structure and the bit rate of an encoded bitstream obtained as a result of an encoding process based on MPEG standards. According to the transcoder provided by the present invention, encoding parameters generated in a past encoding process can be transmitted as history information to the MPEG encoder that performs the present encoding process. Optimum encoding parameters commensurate with the present encoding process are selected from the transmitted encoding parameters, and the selected encoding parameters are reused in the present encoding process. As a result, the picture quality does not deteriorate even if decoding and encoding processes are carried out repeatedly.

Description

BACKGROUND OF THE INVENTION

The present invention relates to a transcoding system, a video encoding apparatus, a stream processing system and a video decoding apparatus for changing a GOP (Group of Pictures) structure and the bit rate of an encoded bitstream obtained as a result of an encoding process based on MPEG (Moving Picture Experts Group) standards.

In recent years, broadcasting stations producing and broadcasting television programs have generally been using MPEG technology for compressing and encoding video data. In particular, the MPEG technology is becoming a de facto standard for recording video data onto a tape or a random-accessible recording medium and for transmitting video data through a cable or a satellite.

The following is a brief description of typical processing carried out by a broadcasting station up to transmission of a video program produced in the station to each home. First, an encoder employed in a camcorder (an apparatus integrating a video camera and a VTR into a single body) encodes source video data and records the encoded data onto a magnetic tape of the VTR. At that time, the encoder employed in the camcorder encodes the source video data into an encoded bitstream suitable for a recording format of the magnetic tape of the VTR. Typically, the GOP structure of an MPEG bitstream recorded on the magnetic tape is a structure wherein one GOP is composed of two frames. An example of the GOP structure is a structure comprising a sequence of pictures of the types I, B, I, B, I, B and so on. The bit rate of the MPEG bitstream recorded on the magnetic tape is 18 Mbps.

Then, a central broadcasting station carries out edit processing to edit the video bitstream recorded on the magnetic tape. For this purpose, the GOP structure of the video bitstream recorded on the magnetic tape is converted into a GOP structure suitable for the edit processing. A GOP structure suitable for edit processing is a structure wherein one GOP is composed of one frame. To be more specific, pictures of a GOP structure suitable for edit processing are all I-pictures. This is because, in order to carry out edit processing in frame units, the I-picture, which has no correlation with other pictures, is most suitable. In the actual operation to convert the GOP structure, the video bitstream recorded on the magnetic tape is once decoded back into base-band video data. Then, the base-band video data is re-encoded so as to comprise all I-pictures. By carrying out the decoding and re-encoding processes in this way, it is possible to generate a bitstream having a GOP structure suitable for edit processing.

Subsequently, in order to transmit an edited video program obtained as a result of the edit processing from the central broadcasting station to a local broadcasting station, it is necessary to change the GOP structure and the bit rate of the bitstream of the edited video program to a GOP structure and a bit rate that are suitable for the transmission. The GOP structure suitable for transmission between broadcasting stations is a GOP structure wherein one GOP is composed of 15 frames. An example of such a GOP structure is a structure comprising a sequence of pictures of the types I, B, B, P, B, B, P and so on.
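Read as data, the chain above amounts to a per-stage table of GOP structure and bit rate. The following minimal Python sketch merely restates the figures given in the description; the data structure and the stage names are illustrative, not part of the invention.

    # Illustrative summary of the GOP/bit-rate conversions described above.
    # Each stage requires a full decode of the previous bitstream followed
    # by a re-encode to the new GOP structure and bit rate.
    STAGES = [
        # (stage,                       GOP structure,              bit rate)
        ("camcorder VTR tape",          "I B (2-frame GOP)",        "18 Mbps"),
        ("central-station editing",     "all I (1-frame GOP)",      None),
        ("inter-station transmission",  "I B B P ... (15 frames)",  ">= 50 Mbps"),
        ("local-station editing",       "all I (1-frame GOP)",      None),
        ("transmission to each home",   "I B B P ... (15 frames)",  "about 5 Mbps"),
    ]

    for stage, gop, rate in STAGES:
        print(f"{stage:30s} GOP: {gop:26s} bit rate: {rate or 'unspecified'}")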
As for the bit rate suitable for transmission between broadcasting stations, a high bit rate of at least 50 Mbps is desirable since, in general, a dedicated line having a high transmission capacity, such as an optical fiber, is installed between broadcasting stations. To put it concretely, the bitstream of a video program that has completed the edit processing is once decoded back into base-band video data. Then, the base-band video data is re-encoded to result in a GOP structure and a bit rate suitable for transmission between broadcasting stations as described above.

At the local broadcasting station, the video program received from the central broadcasting station is typically subjected to edit processing to insert commercials peculiar to the district where the local broadcasting station is located. Much like the edit processing carried out at the central broadcasting station, the bitstream of the video program received from the central broadcasting station is once decoded back into base-band video data. Then, the base-band video data is encoded so as to comprise all I-pictures. As a result, it is possible to generate a bitstream having a GOP structure suitable for edit processing.

Subsequently, in order to transmit the video program completing the edit processing at the local broadcasting station to each home through a cable or a satellite, the GOP structure and the bit rate of the bitstream are converted into a GOP structure and a bit rate that are suitable for the transmission to each home. A GOP structure suitable for the transmission to each home is a structure wherein one GOP is composed of 15 frames. An example of such a GOP structure is a structure comprising a sequence of pictures of the types I, B, B, P, B, B, P and so on. A bit rate suitable for transmission to each home has a typical value as low as about 5 Mbps. The bitstream of a video program completing the edit processing is decoded back into base-band video data. Then, the base-band video data is re-encoded into a GOP structure and a bit rate suitable for transmission to each home.

As is obvious from the above description, a video program transmitted from the central broadcasting station to each home is subjected to repetitive decoding and encoding processes a plurality of times during the transmission. In actuality, various kinds of signal processing other than the signal processing described above are carried out at a broadcasting station, and the decoding and encoding processes are usually carried out for each kind of signal processing. As a result, the decoding and encoding processes need to be carried out repeatedly.

However, encoding and decoding processes based on MPEG standards are not 100 percent inverse processes of each other, as is generally known. To be more specific, base-band video data subjected to an encoding process is not entirely the same as video data obtained as a result of a decoding process carried out in transcoding of the previous generation. Therefore, decoding and encoding processes cause the picture quality to deteriorate. As a result, there is a problem of deterioration of the picture quality that occurs each time decoding and encoding processes are carried out. In other words, the effects of deterioration of the picture quality are accumulated each time decoding and encoding processes are repeated.
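Because each decode/re-encode generation introduces a small, irreversible error, the degradation compounds over the chain. The following minimal Python sketch models this accumulation; the lossy_roundtrip() function is a hypothetical stand-in for one MPEG decode/encode generation, not the patent's method, and serves only to illustrate the problem the invention addresses.

    import numpy as np

    def lossy_roundtrip(frame: np.ndarray, step: float = 8.0) -> np.ndarray:
        """Stand-in for one decode/re-encode generation: coarse quantization
        plus a little noise makes the round trip slightly lossy (toy model)."""
        return np.round(frame / step) * step + np.random.normal(0, 0.5, frame.shape)

    original = np.random.uniform(0, 255, (16, 16))
    current = original
    for generation in range(1, 6):
        current = lossy_roundtrip(current)
        mse = float(np.mean((current - original) ** 2))
        print(f"generation {generation}: MSE vs. original = {mse:.2f}")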
OBJECTS OF THE INVENTION

It is thus an object of the present invention, addressing the problem described above, to provide a transcoding system, a video encoding apparatus, a stream processing system and a video decoding apparatus which cause no deterioration of the picture quality even if encoding and decoding processes are carried out repeatedly on a bitstream completing an encoding process based on the MPEG standard in order to change the GOP structure and the bit rate of the bitstream. In addition, it is an object of the present invention to provide a transcoding system, a video encoding apparatus, a stream processing system and a video decoding apparatus causing no deterioration of the picture quality even if encoding and decoding processes are carried out repeatedly.

SUMMARY OF THE INVENTION

In order to attain the above objects, according to the transcoder provided by the present invention, encoding parameters generated and used in a previous encoding process can be utilized in the current encoding process. As a result, the picture quality does not deteriorate even if decoding and encoding processes are carried out repeatedly. That is to say, it is possible to lessen the accumulated deterioration in the quality of the picture due to repetition of the encoding process.

According to the transcoder provided by the present invention, encoding parameters generated and used in a previous encoding process are described in a user-data area of an encoded bitstream obtained as a result of the current encoding process, and the encoded bitstream conforms to the MPEG standard. It is thus possible to decode the encoded bitstream by means of any existing decoder. In addition, it is not necessary to provide a dedicated line for transmitting encoding parameters generated and used in a previous encoding process. As a result, it is possible to transmit encoding parameters generated and used in a previous encoding process by utilizing the existing data-stream transmission environment.

According to the transcoder provided by the present invention, only selected previous encoding parameters generated and used in an encoding process are described in a user-data area of an encoded bitstream obtained as a result of the current encoding process. As a result, it is possible to transmit the encoding parameters generated and used in an encoding process carried out in the past without the need to substantially increase the bit rate of the output bitstream.

According to the transcoder provided by the present invention, only encoding parameters optimum for the current encoding process are selected from encoding parameters generated and used in a previous encoding process to be used in the current encoding process. As a result, no deterioration in the quality of the picture is accumulated even if the decoding and encoding processes are carried out repeatedly.

According to the transcoder provided by the present invention, only encoding parameters optimum for the current encoding process are selected, in accordance with the picture types, from previous encoding parameters generated and used in a previous encoding process to be used in the current encoding process. As a result, deterioration of the picture quality is by no means accumulated even if decoding and encoding processes are carried out repeatedly.
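The summary above centers on carrying past encoding parameters forward in the user-data area and reusing only those that suit the current pass. A minimal Python sketch of that idea follows; the field names and the select_for_reuse() helper are hypothetical illustrations, not the history_stream() syntax the patent defines later (Figs. 40 to 47).

    # Hypothetical per-picture history record carried in the user-data area.
    PAST_PARAMS = [
        {"picture_type": "I", "quantiser_scale": 8,  "motion_vectors": None},
        {"picture_type": "B", "quantiser_scale": 12, "motion_vectors": [(1, -2), (0, 3)]},
        {"picture_type": "P", "quantiser_scale": 10, "motion_vectors": [(2, 0)]},
    ]

    def select_for_reuse(history, current_picture_type):
        """Reuse past parameters only when the past picture type matches the
        type assigned in the current encoding pass (hypothetical rule)."""
        for record in history:
            if record["picture_type"] == current_picture_type:
                return record
        return None  # no reusable generation: encode this picture from scratch

    reused = select_for_reuse(PAST_PARAMS, "P")
    print("parameters reused in the current encoding process:", reused)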
According to the transcoder provided by the present invention, a decision as to whether or not to reutilize previous encoding parameters generated and used in a previous encoding process is made on the basis of picture types included in the previous encoding parameters. As a result, an optimum encoding process can be carried out.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the invention, reference is made to the following description and accompanying drawings, in which:

Fig. 1 is an explanatory diagram used for describing the principle of a high-efficiency encoding process;
Fig. 2 is an explanatory diagram showing picture types used in compression of picture data;
Fig. 3 is an explanatory diagram showing picture types used in compression of picture data;
Fig. 4 is an explanatory diagram used for describing the principle of a process of encoding a moving-picture video signal;
Fig. 5 is a block diagram showing the configuration of an apparatus used for encoding and decoding a moving-picture video signal;
Figs. 6A to 6C are explanatory diagrams used for describing format conversion;
Fig. 7 is a block diagram showing the configuration of an encoder 18 employed in the apparatus shown in Fig. 5;
Figs. 8 to 11 are explanatory diagrams used for describing the operation of a prediction-mode switching circuit 52 employed in the encoder 18 shown in Fig. 7;
Fig. 12 is a block diagram showing the configuration of a decoder 31 employed in the apparatus shown in Fig. 5;
Fig. 13 is an explanatory diagram used for describing SNR control based on picture types;
Fig. 14 is a block diagram showing the configuration of a transcoder 101 provided by the present invention;
Fig. 15 is a block diagram showing a more detailed configuration of the transcoder 101 shown in Fig. 14;
Fig. 16 is a block diagram showing the configuration of a decoder 111 employed in a decoding apparatus 102 of the transcoder 101 shown in Fig. 14;
Fig. 17 is an explanatory diagram showing pixels of a macroblock;
Fig. 18 is an explanatory diagram showing areas for recording encoding parameters;
Fig. 19 is a block diagram showing the configuration of an encoder 121 employed in an encoding apparatus 106 of the transcoder 101 shown in Fig. 14;
Fig. 20 is a block diagram showing a typical configuration of a history formatter 211 employed in the transcoder 101 shown in Fig. 15;
Fig. 21 is a block diagram showing a typical configuration of a history decoder 203 employed in the transcoder 101 shown in Fig. 15;
Fig. 22 is a block diagram showing a typical configuration of a converter 212 employed in the transcoder 101 shown in Fig. 15;
Fig. 23 is a block diagram showing a typical configuration of a stuff circuit 323 employed in the converter 212 shown in Fig. 22;
Figs. 24A to 24I are timing charts used for explaining the operation of the converter 212 shown in Fig. 22;
Fig. 25 is a block diagram showing a typical configuration of a converter 202 employed in the transcoder 101 shown in Fig. 15;
Fig. 26 is a block diagram showing a typical configuration of a delete circuit 343 employed in the converter 202 shown in Fig. 25;
Fig. 27 is a block diagram showing another typical configuration of the converter 212 employed in the transcoder 101 shown in Fig. 15;
Fig. 28 is a block diagram showing another typical configuration of the converter 202 employed in the transcoder 101 shown in Fig. 15;
Fig. 29 is a block diagram showing a typical configuration of a user-data formatter 213 employed in the transcoder 101 shown in Fig. 15;
Fig. 30 is a block diagram showing the configuration of an actual system employing a plurality of transcoders 101, each as shown in Fig. 14;
Fig. 31 is a diagram showing areas for recording encoding parameters;
Fig. 32 is a flowchart used for explaining processing carried out by the encoding apparatus 106 employed in the transcoder 101 shown in Fig. 14 to determine changeable picture types;
Fig. 33 is a diagram showing an example of changing picture types;
Fig. 34 is a diagram showing another example of changing picture types;
Fig. 35 is an explanatory diagram used for describing quantization control processing carried out by the encoding apparatus 106 employed in the transcoder 101 shown in Fig. 14;
Fig. 36 is a flowchart used for explaining quantization control processing carried out by the encoding apparatus 106 employed in the transcoder 101 shown in Fig. 14;
Fig. 37 is a block diagram showing the configuration of a tightly coupled transcoder 101;
Fig. 38 is an explanatory diagram used for describing the syntax of an MPEG stream;
Fig. 39 is an explanatory diagram used for describing the configuration of the syntax shown in Fig. 38;
Figs. 40 to 46 are explanatory diagrams used for describing the syntax of history_stream() for recording history information with a fixed length;
Fig. 47 is an explanatory diagram used for describing the syntax of history_stream() for recording history information with a variable length;
Fig. 48 is an explanatory diagram used for describing the syntax of sequence_header();
Fig. 49 is an explanatory diagram used for describing the syntax of sequence_extension();
Fig. 50 is an explanatory diagram used for describing the syntax of extension_and_user_data();
Fig. 51 is an explanatory diagram used for describing the syntax of user_data();
Fig. 52 is an explanatory diagram used for describing the syntax of group_of_picture_header();
Fig. 53 is an explanatory diagram used for describing the syntax of picture_header();
Fig. 54 is an explanatory diagram used for describing the syntax of picture_coding_extension();
Fig. 55 is an explanatory diagram used for describing the syntax of extension_data();
Fig. 56 is an explanatory diagram used for describing the syntax of quant_matrix_extension();
Fig. 57 is an explanatory diagram used for describing the syntax of copyright_extension();
Fig. 58 is an explanatory diagram used for describing the syntax of picture_display_extension();
Fig. 59 is an explanatory diagram used for describing the syntax of picture_data();
Fig. 60 is an explanatory diagram used for describing the syntax of slice();
Fig. 61 is an explanatory diagram used for describing the syntax of macroblock();
Fig. 62 is an explanatory diagram used for describing the syntax of macroblock_modes();
Fig. 63 is an explanatory diagram used for describing the syntax of motion_vectors(s);
Fig. 64 is an explanatory diagram used for describing the syntax of motion_vector(r, s);
Fig. 65 is an explanatory diagram used for describing a variable-length code of macroblock_type for an I-picture;
Fig. 66 is an explanatory diagram used for describing a variable-length code of macroblock_type for a P-picture; and
Fig. 67 is an explanatory diagram used for describing a variable-length code of macroblock_type for a B-picture.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Before describing a transcoder provided by the present invention, a process to compress and encode a moving-picture video signal is explained. It should be noted that the technical term 'system' used in this specification means a whole system comprising a plurality of apparatuses and means.

As described above, in a system for transmitting a moving-picture video signal to a remote destination, such as a television-conference system or a television-telephone system, the video signal is subjected to compression and encoding processes using line correlation and inter-frame correlation of the video signal in order to allow the transmission line to be utilized with a high degree of efficiency. By using line correlation, a video signal can be compressed by carrying out typically DCT (Discrete Cosine Transform) processing.

By using inter-frame correlation, the video signal can be further compressed and encoded. Assume that frame pictures PC1, PC2 and PC3 are generated at points of time t1, t2 and t3 respectively, as shown in Fig. 1. In this case, a difference in picture signal between the frame pictures PC1 and PC2 is computed to generate a frame picture PC12. By the same token, a difference in picture signal between the frame pictures PC2 and PC3 is computed to generate a frame picture PC23. Normally, a difference in picture signal between frame pictures adjacent to each other along the time axis is small. Thus, the amounts of information contained in the frame pictures PC12 and PC23 are small, and the amount of code included in a difference signal obtained as a result of coding such a difference is also small.

By merely transmitting a difference signal, however, the original picture cannot be restored.
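As a concrete illustration of the inter-frame correlation exploited above, the following minimal Python sketch computes the difference pictures PC12 and PC23 from three consecutive synthetic frames and compares the amount of information in them, crudely measured here as the sum of absolute sample values; the frames and the measure are assumptions of this illustration.

    import numpy as np

    # Three synthetic consecutive frames: a bright square drifting slowly.
    def frame_at(t: int) -> np.ndarray:
        f = np.zeros((32, 32))
        f[8 + t:16 + t, 8:16] = 200.0
        return f

    pc1, pc2, pc3 = frame_at(0), frame_at(1), frame_at(2)
    pc12 = pc2 - pc1   # difference picture between PC1 and PC2
    pc23 = pc3 - pc2   # difference picture between PC2 and PC3

    # Adjacent frames are similar, so the difference pictures carry far
    # less information than a full frame does.
    for name, pic in [("PC2", pc2), ("PC12", pc12), ("PC23", pc23)]:
        print(f"{name}: sum of absolute samples = {np.abs(pic).sum():.0f}")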
In order to obtain the original picture, frame pictures are classified into three types, namely, I-, P- and B-pictures, each used as a smallest processing unit in the compression and encoding processes of a video signal.

Assume a GOP (Group of Pictures) of Fig. 2 comprising seventeen frames, namely, frames F1 to F17, which are each processed as a smallest unit of video signals for processing. To be more specific, the first frame F1, the second frame F2 and the third frame F3 are processed as I-, B- and P-pictures respectively. The subsequent frames, that is, the fourth to seventeenth frames F4 to F17, are processed as B- and P-pictures alternately.

In the case of an I-picture, a video signal of the entire frame is transmitted. In the case of a P-picture or a B-picture, on the other hand, only a difference in video signal can be transmitted as an alternative to the video signal of the entire frame. To be more specific, in the case of the third frame F3 of a P-picture shown in Fig. 2, only a difference in video signal between the P-picture and a chronologically preceding I- or P-picture is transmitted as a video signal. In the case of the second frame F2 of a B-picture shown in Fig. 3, for example, a difference in video signal between the B-picture and a chronologically preceding frame, a succeeding frame, or an average value of the chronologically preceding and succeeding frames is transmitted as a video signal.

Fig. 4 is a diagram showing the principle of a technique of encoding a moving-picture video signal in accordance with what is described above. As shown in Fig. 4, the first frame F1 is processed as an I-picture. Thus, the video signal of the entire frame F1 is transmitted to the transmission line as data F1X (intra-picture encoding). On the other hand, the second frame F2 is processed as a B-picture. In this case, a difference between the second frame F2 and the chronologically preceding frame F1, the succeeding frame F3 or an average value of the preceding frame F1 and the succeeding frame F3 is transmitted as data F2X.

To put it in detail, the processing of a B-picture can be classified into four types. In the processing of the first type, the data of the original frame F2, denoted by notation SP1 in Fig. 4, is transmitted as it is as data F2X, as is the case with an I-picture; the processing of the first type is thus the so-called intra-picture encoding. In the processing of the second type, a difference, denoted by notation SP2, between the second frame F2 and the chronologically succeeding third frame F3 is transmitted as data F2X. Since the succeeding frame is taken as a reference, or a prediction picture, this processing is referred to as backward prediction encoding. In the processing of the third type, a difference, denoted by notation SP3, between the second frame F2 and the preceding first frame F1 is transmitted as data F2X, as is the case with a P-picture. Since the preceding frame is taken as a prediction picture, this processing is referred to as forward prediction encoding. In the processing of the fourth type, a difference, denoted by notation SP4, between the second frame F2 and an average of the succeeding third frame F3 and the preceding first frame F1 is transmitted as data F2X. Since the preceding and succeeding frames are taken as a prediction picture, this processing is referred to as forward & backward prediction encoding. In actuality, one of the four processing types described above is selected so as to generate a minimum amount of transmission data obtained as a result of the processing.
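A minimal Python sketch of that selection rule follows: it builds the four candidate signals SP1 to SP4 for frame F2 and keeps the one that would require the least data to transmit, crudely scored here by the sum of absolute samples; the scoring function is an assumption of this illustration, not the patent's measure.

    import numpy as np

    def candidates_for_b_picture(f1, f2, f3):
        """The four ways of encoding B-picture F2 (SP1 to SP4 of Fig. 4)."""
        return {
            "intra (SP1)":              f2,
            "backward (SP2)":           f2 - f3,
            "forward (SP3)":            f2 - f1,
            "forward & backward (SP4)": f2 - (f1 + f3) / 2.0,
        }

    rng = np.random.default_rng(0)
    f1 = rng.uniform(0, 255, (16, 16))
    f2 = f1 + rng.normal(0, 2, f1.shape)    # F2 resembles F1 ...
    f3 = f2 + rng.normal(0, 2, f1.shape)    # ... and F3 resembles F2.

    cands = candidates_for_b_picture(f1, f2, f3)
    best = min(cands, key=lambda k: np.abs(cands[k]).sum())
    print("selected processing type:", best)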
It should be noted that, in the case of a difference obtained as a result of the processing of the second, third or fourth type described above, a motion vector between the pictures of the frames (the prediction picture) used in the computation of the difference is also transmitted along with the difference. To be more specific, in the case of the forward prediction encoding, the motion vector is a motion vector x1 between the frames F1 and F2. In the case of the backward prediction encoding, the motion vector is a motion vector x2 between the frames F2 and F3. In the case of the forward & backward prediction encoding, the motion vectors x1 and x2 are both transmitted.

Much like the B-picture described above, in the case of the frame F3 of the P-picture, the forward prediction encoding or the intra-picture processing is selected to produce a minimum amount of transmission data obtained as a result of the processing. If the forward prediction encoding is selected, a difference, denoted by notation SP3, between the third frame F3 and the preceding first frame F1 is transmitted as data F3X along with a motion vector x3. If the intra-picture processing is selected, on the other hand, it is the data F3X of the original frame F3, denoted by notation SP1, that is transmitted.

Fig. 5 is a block diagram showing a typical configuration of a system based on the principle described above to code a moving-picture video signal and to transmit as well as decode the coded signal. A signal encoding apparatus 1 encodes an input video signal and transmits the encoded video signal to a signal decoding apparatus 2 through a recording medium 3 which serves as a transmission line. The signal decoding apparatus 2 plays back the coded signal recorded on the recording medium 3 and decodes the playback signal into an output signal.

In the signal encoding apparatus 1, the input video signal is supplied to a preprocessing circuit 11 for splitting it into luminance and chrominance signals. In the case of this embodiment, the chrominance signal is a color-difference signal. The analog luminance and color-difference signals are then supplied to A/D converters 12 and 13 respectively to be each converted into a digital video signal. Digital video signals resulting from the A/D conversion are then supplied to a frame-memory unit 14 to be stored therein. The frame-memory unit 14 comprises a luminance-signal frame memory 15 for storing the luminance signal and a color-difference-signal frame memory 16 for storing the color-difference signal.

A format converting circuit 17 converts the frame-format signals stored in the frame-memory unit 14 into a block-format signal as shown in Figs. 6A to 6C. To put it in detail, a video signal is stored in the frame-memory unit 14 as data of the frame format shown in Fig. 6A. As shown in Fig. 6A, the frame format is a collection of V lines each comprising H dots. The format converting circuit 17 divides the signal of one frame into N slices each comprising 16 lines as shown in Fig. 6B. Each slice is then divided into M macroblocks as shown in Fig. 6B. As shown in Fig. 6C, a macroblock includes a luminance signal Y corresponding to 16 × 16 pixels (dots). The luminance signal Y is further divided into blocks Y[1] to Y[4] each comprising 8 × 8 dots. The 16 × 16-dot luminance signal is associated with an 8 × 8-dot Cb signal and an 8 × 8-dot Cr signal.
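The slicing described above maps directly onto array reshaping. The following minimal Python sketch, assuming chroma that has already been subsampled to one Cb and one Cr sample per 2 × 2 luminance dots, carves a frame into macroblocks and splits each 16 × 16 luminance block into Y[1] to Y[4]; the function name and frame sizes are illustrative.

    import numpy as np

    def to_macroblocks(y: np.ndarray, cb: np.ndarray, cr: np.ndarray):
        """Split a frame (V x H luminance, V/2 x H/2 chroma) into macroblocks
        of four 8x8 Y blocks plus one 8x8 Cb and one 8x8 Cr block each."""
        v, h = y.shape
        blocks = []
        for row in range(0, v, 16):          # each slice is 16 lines high
            for col in range(0, h, 16):      # M macroblocks per slice
                mb_y = y[row:row + 16, col:col + 16]
                blocks.append({
                    "Y": [mb_y[:8, :8], mb_y[:8, 8:],     # Y[1], Y[2]
                          mb_y[8:, :8], mb_y[8:, 8:]],    # Y[3], Y[4]
                    "Cb": cb[row // 2:row // 2 + 8, col // 2:col // 2 + 8],
                    "Cr": cr[row // 2:row // 2 + 8, col // 2:col // 2 + 8],
                })
        return blocks

    y = np.zeros((480, 720)); cb = np.zeros((240, 360)); cr = np.zeros((240, 360))
    print(len(to_macroblocks(y, cb, cr)), "macroblocks")  # 30 slices x 45 = 1350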
The data with the block format obtained as a result of the format conversion carried out by the format converting circuit 17 as described above is supplied to an encoder 18 for encoding the data. The configuration of the encoder 18 will be described later in detail by referring to Fig. 7.

A signal obtained as a result of the encoding carried out by the encoder 18 is output to a transmission line as a bitstream. Typically, the encoded signal is supplied to a recording circuit 19 for recording the encoded signal as a digital signal onto a recording medium 3 used as a transmission line.

A playback circuit 30 employed in the signal decoding apparatus 2 reproduces data from the recording medium 3, supplying the data to a decoder 31 of the decoding apparatus for decoding the data. The configuration of the decoder 31 will be described later in detail by referring to Fig. 12.

Data obtained as a result of the decoding carried out by the decoder 31 is supplied to a format converting circuit 32 for converting the block format of the data back into a frame format. Then, a luminance signal having a frame format is supplied to a luminance-signal frame memory 34 of a frame-memory unit 33 to be stored therein. On the other hand, a color-difference signal having a frame format is supplied to a color-difference-signal frame memory 35 of the frame-memory unit 33 to be stored therein. The luminance signal is read back from the luminance-signal frame memory 34 and supplied to a D/A converter 36. On the other hand, the color-difference signal is read back from the color-difference-signal frame memory 35 and supplied to a D/A converter 37. The D/A converters 36 and 37 convert the signals into analog signals, which are then supplied to a post-processing circuit 38 for synthesizing the luminance and color-difference signals and generating a synthesized output.

Next, the configuration of the encoder 18 is described by referring to Fig. 7. Picture data to be encoded is supplied to a motion-vector detecting circuit 50 in macroblock units. The motion-vector detecting circuit 50 processes picture data of each frame as an I-, P- or B-picture in accordance with a predetermined sequence set in advance. To be more specific, picture data of a GOP typically comprising the frames F1 to F17 as shown in Figs. 2 and 3 is processed as a sequence of I-, B-, P-, B-, P-, ..., B- and P-pictures.

Picture data of a frame to be processed by the motion-vector detecting circuit 50 as an I-picture, such as the frame F1 shown in Fig. 3, is supplied to a forward-source-picture area 51a of a frame-memory unit 51 to be stored therein. Picture data of a frame to be processed by the motion-vector detecting circuit 50 as a B-picture, such as the frame F2, is supplied to a referenced-source-picture area 51b of the frame-memory unit 51 to be stored therein. Picture data of a frame processed by the motion-vector detecting circuit 50 as a P-picture, such as the frame F3, is supplied to a backward-source-picture area 51c of the frame-memory unit 51 to be stored therein.

When picture data of the next two frames, such as the frames F4 and F5, is supplied sequentially to the motion-vector detecting circuit 50 to be processed as B- and P-pictures respectively, the areas 51a, 51b and 51c are updated as follows.
When picture data of the frame F4 is processed by the motion-vector detecting circuit 50, the picture data of the frame F3 stored in the backward-source-picture area 51c is transferred to the forward-source-picture area 51a, overwriting the picture data of the frame F1 stored earlier in the forward-source-picture area 51a. The processed picture data of the frame F4 is stored in the referenced-source-picture area 51b, overwriting the picture data of the frame F2 stored earlier in the referenced-source-picture area 51b. Then, the processed picture data of the frame F5 is stored in the backward-source-picture area 51c, overwriting the picture data of the frame F3, which has been transferred to the forward-source-picture area 51a anyway. The operations described above are repeated to process the subsequent frames of the GOP.

Signals of each picture stored in the frame-memory unit 51 are read out by a prediction-mode switching circuit 52 to undergo a preparatory operation for a frame-prediction mode or a field-prediction mode, that is, for the type of processing to be carried out by a processing unit 53.

Then, in either the frame-prediction mode or the field-prediction mode, the signals are subjected to computation for intra-picture, forward, backward or forward & backward prediction encoding under control executed by an intra-picture/forward/backward/forward & backward prediction determining circuit 54. The type of processing carried out in the processing unit 53 is determined in accordance with a prediction-error signal representing a difference between a referenced picture and a prediction picture for the referenced picture. A referenced picture is a picture subjected to the processing, and a prediction picture is a picture preceding or succeeding the referenced picture. For this reason, the motion-vector detecting circuit 50 (strictly speaking, the prediction-mode switching circuit 52 employed in the motion-vector detecting circuit 50, as will be described later) generates a sum of the absolute values of prediction-error signals for use in determination of the type of processing carried out by the processing unit 53. In place of a sum of the absolute values of prediction-error signals, a sum of the squares of prediction-error signals can also be used for such determination.

The prediction-mode switching circuit 52 carries out the following preparatory operation for processing to be carried out by the processing unit 53 in the frame-prediction mode and the field-prediction mode.

The prediction-mode switching circuit 52 receives four luminance blocks [Y1] to [Y4] supplied thereto by the motion-vector detecting circuit 50. In each block, data of lines of odd fields is mixed with data of lines of even fields as shown in Fig. 8. The data may be passed on to the processing unit 53 as it is. Processing of data with the configuration shown in Fig. 8 to be performed by the processing unit 53 is referred to as processing in the frame-prediction mode, wherein prediction processing is carried out for each macroblock comprising the four luminance blocks and one motion vector corresponds to the four luminance blocks.

Alternatively, the prediction-mode switching circuit 52 may reconfigure the signal supplied by the motion-vector detecting circuit 50: in place of the signal with the configuration shown in Fig. 8, the signal with the configuration shown in Fig. 9 may be passed on to the processing unit 53.
As shown in Fig. 9, the two luminance blocks [Y1] and [Y2] are each composed of typically only dots of lines of odd fields, whereas the other two luminance blocks [Y3] and [Y4] are each composed of typically only dots of lines of even fields. Processing of data with the configuration shown in Fig. 9 to be carried out by the processing unit 53 is referred to as processing in the field-prediction mode, wherein one motion vector corresponds to the two luminance blocks [Y1] and [Y2] whereas another motion vector corresponds to the other two luminance blocks [Y3] and [Y4].

The prediction-mode switching circuit 52 selects the data with the configuration shown in Fig. 8 or Fig. 9 to be supplied to the processing unit 53 as follows. The prediction-mode switching circuit 52 computes a sum of the absolute values of prediction errors for the frame-prediction mode, that is, for the data supplied by the motion-vector detecting circuit 50 with the configuration shown in Fig. 8, and a sum of the absolute values of prediction errors for the field-prediction mode, that is, for the data with the configuration shown in Fig. 9 obtained as a result of conversion of the data with the configuration shown in Fig. 8. It should be noted that the prediction errors will be described in detail later. The prediction-mode switching circuit 52 then compares the two sums in order to determine which mode produces the smaller sum. Then, the prediction-mode switching circuit 52 selects either the configuration shown in Fig. 8 or that shown in Fig. 9, corresponding to the frame-prediction mode or the field-prediction mode respectively, whichever produces the smaller sum. The prediction-mode switching circuit 52 finally outputs data with the selected configuration to the processing unit 53, which processes the data in the mode corresponding to the selected configuration.

It should be noted that, in actuality, the prediction-mode switching circuit 52 is included in the motion-vector detecting circuit 50. That is to say, the preparation of data with the configuration shown in Fig. 9, the computation of the absolute values, the comparison of the absolute values, the selection of the data configuration and the operation to output data with the selected configuration to the processing unit 53 are all carried out by the motion-vector detecting circuit 50, and the prediction-mode switching circuit 52 merely outputs the signal supplied by the motion-vector detecting circuit 50 to the processing unit 53 at the later stage.

It should be noted that, in the frame-prediction mode, a color-difference signal is supplied to the processing unit 53 with data of lines of odd fields mixed with data of lines of even fields as shown in Fig. 8. In the field-prediction mode shown in Fig. 9, on the other hand, four lines of the upper half of the color-difference block Cb are used as a color-difference signal of odd fields corresponding to the luminance blocks [Y1] and [Y2], while four lines of the lower half of the color-difference block Cb are used as a color-difference signal of even fields corresponding to the luminance blocks [Y3] and [Y4]. By the same token, four lines of the upper half of the color-difference block Cr are used as a color-difference signal of odd fields corresponding to the luminance blocks [Y1] and [Y2], while four lines of the lower half of the color-difference block Cr are used as a color-difference signal of even fields corresponding to the luminance blocks [Y3] and [Y4].
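The frame/field decision above reduces to comparing two error sums: one prediction for the whole macroblock against separate predictions for the odd and even fields (each with its own motion vector, as stated above). A minimal Python sketch follows; the SAD measure and the random test data are assumptions of this illustration, not the circuit's exact arithmetic.

    import numpy as np

    def sad(a, b):
        """Sum of the absolute values of prediction errors."""
        return float(np.abs(np.asarray(a) - np.asarray(b)).sum())

    def choose_prediction_mode(mb, pred_frame, pred_odd, pred_even):
        """Fig. 8 vs Fig. 9: the frame mode predicts the whole macroblock with
        one motion vector; the field mode predicts the odd and even fields
        with separate motion vectors."""
        frame_sum = sad(mb, pred_frame)
        field_sum = sad(mb[0::2, :], pred_odd) + sad(mb[1::2, :], pred_even)
        return "frame-prediction" if frame_sum <= field_sum else "field-prediction"

    rng = np.random.default_rng(1)
    mb = rng.uniform(0, 255, (16, 16))
    print(choose_prediction_mode(mb,
                                 mb + rng.normal(0, 4, (16, 16)),
                                 mb[0::2, :] + rng.normal(0, 1, (8, 16)),
                                 mb[1::2, :] + rng.normal(0, 1, (8, 16))))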
As described above, the motion-vector detecting circuit 50 outputs a sum of the absolute values of prediction errors to the prediction determining circuit 54 for use in determination of whether the processing unit 53 should carry out intra-picture prediction, forward prediction, backward prediction or forward & backward prediction.

To put it in detail, a sum of the absolute values of prediction errors in intra-picture prediction is found as follows. For the intra-picture prediction, the motion-vector detecting circuit 50 computes a difference between the absolute value |ΣAij| of a sum ΣAij of signals Aij of macroblocks of a referenced picture and a sum Σ|Aij| of the absolute values |Aij| of the signals Aij of the macroblocks of the same referenced picture. For the forward prediction, the sum of the absolute values of prediction errors is a sum Σ|Aij − Bij| of the absolute values |Aij − Bij| of differences (Aij − Bij) between signals Aij of macroblocks of a referenced picture and signals Bij of macroblocks of a forward-prediction picture, or a preceding picture. The sum of the absolute values of prediction errors for the backward prediction is found in the same way as that for the forward prediction, except that the prediction picture used in the backward prediction is a backward-prediction picture, or a succeeding picture. As for the forward & backward prediction, averages of signals Bij of macroblocks of both the forward-prediction picture and the backward-prediction picture are used in the computation of the sum.

The sum of the absolute values of prediction errors for each technique of prediction is supplied to the prediction determining circuit 54, which selects the forward prediction, the backward prediction or the forward & backward prediction with the smallest sum as the sum of the absolute values of prediction errors for inter-picture prediction. The prediction determining circuit 54 further compares this smallest sum with the sum for the intra-picture prediction and selects, as the prediction mode of the processing to be carried out by the processing unit 53, either the inter-picture prediction or the intra-picture prediction, whichever has the smaller sum. To be more specific, if the sum for the intra-picture prediction is found smaller than the smallest sum for the inter-picture prediction, the prediction determining circuit 54 selects the intra-picture prediction as the type of processing to be carried out by the processing unit 53. If the smallest sum for the inter-picture prediction is found smaller than the sum for the intra-picture prediction, on the other hand, the prediction determining circuit 54 selects the inter-picture prediction as the type of processing to be carried out by the processing unit 53. As described above, the inter-picture prediction represents the forward prediction, the backward prediction or the forward & backward prediction selected as the prediction mode of processing with the smallest sum.
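These sums translate directly into code. The following minimal Python sketch implements the four measures as described above (intra: Σ|Aij| − |ΣAij|; forward and backward: Σ|Aij − Bij| against the preceding or succeeding picture; forward & backward: against the average of the two) and picks the smallest; the macroblock contents are synthetic.

    import numpy as np

    def prediction_error_sums(a, b_fwd, b_bwd):
        """a: referenced macroblock; b_fwd/b_bwd: co-located macroblocks of
        the forward- and backward-prediction pictures."""
        return {
            "intra":              float(np.abs(a).sum() - abs(a.sum())),
            "forward":            float(np.abs(a - b_fwd).sum()),
            "backward":           float(np.abs(a - b_bwd).sum()),
            "forward & backward": float(np.abs(a - (b_fwd + b_bwd) / 2.0).sum()),
        }

    rng = np.random.default_rng(2)
    a = rng.uniform(0, 255, (16, 16))
    sums = prediction_error_sums(a, a + rng.normal(0, 3, a.shape),
                                    a + rng.normal(0, 5, a.shape))
    inter = min(("forward", "backward", "forward & backward"), key=sums.get)
    mode = "intra" if sums["intra"] < sums[inter] else inter
    print("selected prediction mode:", mode)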
The prediction mode is determined for each picture (or frame), while the frame-prediction or field-prediction mode is determined for each group of pictures.

As described above, the motion-vector detecting circuit 50 outputs signals of macroblocks of a referenced picture, in either the frame-prediction mode or the field-prediction mode selected by the prediction-mode switching circuit 52, to the processing unit 53 by way of the prediction-mode switching circuit 52. At the same time, the motion-vector detecting circuit 50 also detects a motion vector between a referenced picture and a prediction picture for the one of the four prediction modes selected by the prediction determining circuit 54. The motion-vector detecting circuit 50 then outputs the motion vector to a variable-length-encoding circuit 58 and a motion-compensation circuit 64. In this way, the motion-vector detecting circuit 50 outputs a motion vector that corresponds to the minimum sum of the absolute values of prediction errors for the selected prediction mode, as described earlier.

While the motion-vector detecting circuit 50 is reading out picture data of an I-picture, the first frame of a GOP, from the forward-source-picture area 51a, the prediction determining circuit 54 sets the intra-picture prediction (strictly speaking, the intra-frame or intra-field prediction) as the prediction mode, setting a switch 53d employed in the processing unit 53 at a contact point a. With the switch 53d set at this position, the data of the I-picture is supplied to a DCT-mode switching circuit 55. As will be described later, the intra-picture prediction mode is a mode in which no motion compensation is carried out.

The DCT-mode switching circuit 55 receives the data passed on thereto by the switch 53d from the prediction-mode switching circuit 52 in a mixed state, or a frame-DCT mode, shown in Fig. 10. The DCT-mode switching circuit 55 may then convert the data into a separated state, or a field-DCT mode, shown in Fig. 11. In the frame-DCT mode, the data of lines of odd and even fields is mixed in each of the four luminance blocks. In the field-DCT mode, on the other hand, lines of odd fields are put in two of the four luminance blocks while lines of even fields are put in the other two blocks. The data of the I-picture in either the mixed or the separated state will be supplied to a DCT circuit 56.

Before supplying the data to the DCT circuit 56, the DCT-mode switching circuit 55 compares the encoding efficiency of DCT processing of the data with the lines of odd and even fields mixed with each other with the encoding efficiency of DCT processing of the data with the lines of odd and even fields separated from each other, to select the data with the higher efficiency. The frame-DCT mode or the field-DCT mode corresponding to the selected data is determined as the DCT mode.
The encoding efficiencies are compared with each other as follows. In the case of the data with the lines of odd and even fields mixed with each other as shown in Fig. 10, a difference between the signal of a line of an even field and the signal of a line of an odd field vertically adjacent to the even field is computed. Then, the absolute value or the square of each difference is found. Finally, the sum of the absolute values, or of the squares, of the differences between all vertically adjacent even and odd lines is calculated.

In the case of the data with the lines of odd and even fields separated from each other as shown in Fig. 11, a difference between the signals of lines of vertically adjacent even fields and a difference between the signals of lines of vertically adjacent odd fields are computed. Then, the absolute value or the square of each difference is found. Finally, the sum of the absolute values, or of the squares, of all differences between adjacent even lines and between adjacent odd lines is calculated.

The sum calculated for the data shown in Fig. 10 is compared with the sum calculated for the data shown in Fig. 11 to select a DCT mode. To be more specific, if the former is found smaller than the latter, the frame-DCT mode is selected. If the latter is found smaller than the former, on the other hand, the field-DCT mode is selected.

Finally, the data with the configuration corresponding to the selected DCT mode is supplied to the DCT circuit 56 and, at the same time, a DCT flag indicating the selected DCT mode is supplied to the variable-length-encoding circuit 58 and the motion-compensation circuit 64.

Comparison of the frame-prediction and field-prediction modes of Figs. 8 and 9 determined by the prediction-mode switching circuit 52 with the DCT modes of Figs. 10 and 11 determined by the DCT-mode switching circuit 55 clearly indicates that the data structures of the frame-prediction and field-prediction modes are substantially the same as the data structures of the frame-DCT and field-DCT modes respectively with regard to the luminance blocks.

If the prediction-mode switching circuit 52 selects the frame-prediction mode, in which odd and even lines are mixed with each other, it is quite within the bounds of possibility that the DCT-mode switching circuit 55 also selects the frame-DCT mode, with odd and even lines mixed. If the prediction-mode switching circuit 52 selects the field-prediction mode, in which odd and even fields are separated from each other, it is quite within the bounds of possibility that the DCT-mode switching circuit 55 also selects the field-DCT mode, with the data of odd and even fields separated.

It should be noted, however, that the selected DCT mode does not always correspond to the selected prediction mode. In any circumstances, the prediction-mode switching circuit 52 selects the frame-prediction or field-prediction mode that provides the smaller sum of the absolute values of prediction errors, and the DCT-mode switching circuit 55 selects the DCT mode giving the better encoding efficiency.

As described above, the data of the I-picture is output by the DCT-mode switching circuit 55 to the DCT circuit 56. The DCT circuit 56 then carries out DCT processing on the data for conversion into DCT coefficients, which are then supplied to a quantization circuit 57. The quantization circuit 57 then carries out a quantization process at a quantization scale adjusted to the amount of data stored in a transmission buffer 59, which is fed back to the quantization circuit 57 as will be described later.
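A minimal Python sketch of the vertical-difference comparison above follows, using the sum-of-absolute-values variant; treating the top line of the macroblock as belonging to an odd field is an assumption of this illustration.

    import numpy as np

    def dct_mode(mb: np.ndarray) -> str:
        """Fig. 10 vs Fig. 11: compare vertical activity with odd/even lines
        interleaved against vertical activity within each field."""
        # Frame-DCT measure: differences between vertically adjacent lines
        # (an odd-field line next to an even-field line).
        frame_sum = np.abs(np.diff(mb, axis=0)).sum()
        # Field-DCT measure: differences between adjacent lines of the same
        # field (odd lines with odd lines, even lines with even lines).
        odd, even = mb[0::2, :], mb[1::2, :]
        field_sum = np.abs(np.diff(odd, axis=0)).sum() + \
                    np.abs(np.diff(even, axis=0)).sum()
        return "frame-DCT" if frame_sum < field_sum else "field-DCT"

    # Strongly interlaced content (fields differing a lot) favors field-DCT.
    mb = np.zeros((16, 16)); mb[1::2, :] = 100.0
    print(dct_mode(mb))   # -> field-DCT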
The data of the I-picture completing the quantization process is then supplied to a variable-length-encoding circuit 58. The variable-length-encoding circuit 58 receives the data of the I-picture supplied from the quantization circuit 57, converting the picture data into variable-length code, such as Huffman code, at the quantization scale also supplied thereto by the quantization circuit 57. The variable-length code is then stored in the transmission buffer 59.

In addition to the picture data and the quantization scale supplied by the quantization circuit 57, the variable-length-encoding circuit 58 also receives information on the prediction mode from the prediction determining circuit 54, a motion vector from the motion-vector detecting circuit 50, a prediction flag from the prediction-mode switching circuit 52 and a DCT flag from the DCT-mode switching circuit 55. The information on the prediction mode indicates which type of processing is carried out by the processing unit 53: intra-picture encoding, forward-prediction encoding, backward-prediction encoding or forward & backward-prediction encoding. The prediction flag indicates whether the data supplied from the prediction-mode switching circuit 52 to the processing unit 53 is in the frame-prediction mode or the field-prediction mode. The DCT flag indicates whether the data supplied from the DCT-mode switching circuit 55 to the DCT circuit 56 is set in the frame-DCT mode or the field-DCT mode.

The transmission buffer 59 temporarily stores input data, feeding the amount of the data stored therein back to the quantization circuit 57. When the amount of data stored in the transmission buffer 59 exceeds the upper limit of an allowable range, a quantization control signal increases the quantization scale of the quantization circuit 57 to reduce the amount of data obtained as a result of the quantization. When the amount of data stored in the transmission buffer 59 becomes smaller than the lower limit of the allowable range, on the other hand, the quantization control signal decreases the quantization scale of the quantization circuit 57 to raise the amount of data obtained as a result of the quantization. In this way, an overflow and an underflow can be prevented from occurring in the transmission buffer 59.

Then, the data stored in the transmission buffer 59 is read back at predetermined timing to be supplied to the recording circuit 19 for recording the data onto the recording medium 3, which serves as a transmission line.

The data of the I-picture and the quantization scale output by the quantization circuit 57 are supplied also to an inverse-quantization circuit 60 for carrying out inverse quantization on the data at an inverse-quantization scale corresponding to the quantization scale. Data output by the inverse-quantization circuit 60 is then supplied to an IDCT (Inverse Discrete Cosine Transform) circuit 61 for carrying out an inverse discrete cosine transformation. Finally, data output by the IDCT circuit 61 is supplied to a frame-memory unit 63 by way of a processor 62 to be stored in a forward-prediction picture area 63a of the frame-memory unit 63.

A GOP supplied to the motion-vector detecting circuit 50 to be processed thereby comprises a sequence of pictures I, B, P, B, P, B, etc. In this case, after processing data of the first frame as an I-picture as described above, data of the third frame is processed as a P-picture prior to processing of data of the second frame as a B-picture. This is because a B-picture may be subjected to backward prediction, which involves the succeeding P-picture, and backward prediction cannot be carried out unless the succeeding P-picture has been processed in advance.
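The buffer-feedback rule above is a simple closed loop. The following minimal Python sketch models it; the buffer limits, the scale-adjustment step and the toy traffic pattern are all illustrative assumptions, not values from the specification.

    def adjust_quantization_scale(buffer_fullness: int, q_scale: int,
                                  lower: int = 20_000, upper: int = 80_000) -> int:
        """Feedback from the transmission buffer 59 to the quantization
        circuit 57: a fuller buffer coarsens quantization (fewer bits),
        an emptier buffer refines it (more bits). Limits are illustrative."""
        if buffer_fullness > upper:
            return min(q_scale + 1, 31)   # coarser quantization
        if buffer_fullness < lower:
            return max(q_scale - 1, 1)    # finer quantization
        return q_scale

    q, fullness = 8, 50_000
    for bits_in, bits_out in [(9_000, 4_000)] * 10:   # toy traffic pattern
        fullness += bits_in - bits_out
        q = adjust_quantization_scale(fullness, q)
    print("final quantization scale:", q, "buffer fullness:", fullness)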
It should be noted that the data of the I-picture is transferred from the prediction-mode switching circuit 52 to the processing unit 53 in the format for the frame-prediction or field-prediction mode set by the prediction-mode switching circuit 52 for the GOP, and is always processed by the processing unit 53 in the intra-frame prediction mode as described earlier.

For the reason described above, after processing data of the first frame as an I-picture, the motion-vector detecting circuit 50 starts processing of the P-picture which has been stored in the backward-source-picture area 51c. Then, the prediction-mode switching circuit 52 computes a sum of the absolute values of differences between frames, or prediction errors, for the data of the P-picture supplied thereto by the motion-vector detecting circuit 50, with the macroblock taken as a unit, for each prediction mode, and supplies the sums to the prediction determining circuit 54 as described above. The data of the P-picture itself is transferred to the processing unit 53 in the format for the frame-prediction or field-prediction mode which was set by the prediction-mode switching circuit 52 for the GOP of the P-picture when the I-picture of the first frame of the GOP was input. On the other hand, the prediction determining circuit 54 determines the prediction mode in which the data of the P-picture is to be processed by the processing unit 53, that is, selects the intra-picture, forward, backward or forward & backward prediction as the type of processing to be carried out by the processing unit 53 on the data of the P-picture, on the basis of the sums of the absolute values of prediction errors computed by the prediction-mode switching circuit 52 for the respective prediction modes. Strictly speaking, in the case of a P-picture, the type of processing can only be the intra-picture or forward-prediction mode as described above.

In the first place, in the intra-picture prediction mode, the processing unit 53 sets the switch 53d at the contact point a. Thus, the data of the P-picture is transferred to the transmission line by way of the DCT-mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length-encoding circuit 58 and the transmission buffer 59, as is the case with an I-picture. The data of the P-picture is also supplied to the frame-memory unit 63 to be stored in the backward-prediction picture area 63b thereof by way of the quantization circuit 57, the inverse-quantization circuit 60, the IDCT circuit 61 and the processor 62.

In the second place, in the forward-prediction mode, the processing unit 53 sets the switch 53d at a contact point b, and the motion-compensation circuit 64 reads out data from the forward-prediction picture area 63a of the frame-memory unit 63, carrying out motion compensation on the data in accordance with a motion vector supplied to the motion-compensation circuit 64 by the motion-vector detecting circuit 50. In this case, the data stored in the forward-prediction picture area 63a is the data of the I-picture. That is to say, informed of the forward-prediction mode by the prediction determining circuit 54, the motion-compensation circuit 64 generates data of a forward-prediction picture by reading the data of the I-picture from a read address in the forward-prediction picture area 63a. The read address is a position shifted away from the position of the macroblock currently output by the motion-vector detecting circuit 50 by a distance corresponding to the motion vector.
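In code, the read-address computation might look as follows. This is a hedged sketch: integer-pel motion and boundary clamping are simplifying assumptions, and the function name and array layout are illustrative.

```python
import numpy as np

def motion_compensate(ref_picture: np.ndarray, mb_row: int, mb_col: int,
                      mv: tuple, mb_size: int = 16) -> np.ndarray:
    """Fetch the prediction block for one macroblock.

    ref_picture: the stored reference picture (e.g. the forward-prediction
                 picture area 63a holding the decoded I-picture).
    mb_row, mb_col: indices of the macroblock currently being encoded.
    mv: (dy, dx) motion vector in whole pixels; half-pel interpolation omitted.
    """
    y = mb_row * mb_size + mv[0]     # read address = macroblock position shifted
    x = mb_col * mb_size + mv[1]     # by a distance given by the motion vector
    h, w = ref_picture.shape
    y = max(0, min(y, h - mb_size))  # clamping is an illustrative simplification
    x = max(0, min(x, w - mb_size))
    return ref_picture[y:y + mb_size, x:x + mb_size]
```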
The data of the forward-prediction picture read out by the motion-compensation circuit 64 is associated with the data of the referenced picture, that is, the P-picture, and supplied to a processor 53a employed in the processing unit 53. The processor 53a subtracts the data of the forward-prediction picture, that is, the I-picture, supplied by the motion-compensation circuit 64 from the data of macroblocks of the referenced picture supplied by the prediction-mode switching circuit 52 to find a difference, or an error in prediction. The difference data is transferred to the transmission line by way of the DCT-mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length-encoding circuit 58 and the transmission buffer 59. The difference data is also locally decoded by the inverse-quantization circuit 60 and the IDCT circuit 61, and a result obtained from the local decoding is supplied to the processor 62.

The data of the forward-prediction picture supplied to the processor 53a by the motion-compensation circuit 64 is also fed to the processor 62. In the processor 62, the data of the forward-prediction picture is added to the difference data output by the IDCT circuit 61 in order to reproduce the data of the original P-picture. The data of the original P-picture is then stored in the backward-prediction picture area 63b of the frame-memory unit 63.

After the pieces of data of the I-picture and the P-picture have been stored in the forward-prediction picture area 63a and the backward-prediction picture area 63b of the frame-memory unit 63 respectively as described above, the processing of the second frame, the B-picture, is started by the motion-vector detecting circuit 50. The B-picture is processed by the prediction-mode switching circuit 52 in the same way as the P-picture described above except that, in the case of the B-picture, the type of processing determined by the prediction determining circuit 54 can be the backward-prediction mode or the forward & backward-prediction mode in addition to the intra-picture prediction mode and the forward-prediction mode.

In the case of the intra-picture prediction mode or the forward-prediction mode, the switch 53d is set at the contact point a or b respectively as described above, as is the case with the P-picture. In this case, data of the B-picture output by the prediction-mode switching circuit 52 is processed and transferred in the same way as the P-picture described above.

In the case of the backward-prediction mode or the forward & backward-prediction mode, on the other hand, the switch 53d is set at a contact point c or d respectively.

In the backward-prediction mode, wherein the switch 53d is set at the contact point c, the motion-compensation circuit 64 reads out data from the backward-prediction picture area 63b of the frame-memory unit 63, carrying out motion compensation on the data in accordance with a motion vector supplied to the motion-compensation circuit 64 by the motion-vector detecting circuit 50.
In this case, the data stored in the backward-prediction picture area 63b is the data of the P-picture. That is to say, informed of the backward-prediction mode by the prediction determining circuit 54, the motion-compensation circuit 64 generates data of a backward-prediction picture by reading the data of the P-picture from a read address in the backward-prediction picture area 63b. The read address is a position shifted away from the position of the macroblock currently output by the motion-vector detecting circuit 50 by a distance corresponding to the motion vector.

The data of the backward-prediction picture read out by the motion-compensation circuit 64 is associated with the data of the referenced picture, that is, the B-picture, and supplied to a processor 53b employed in the processing unit 53. The processor 53b subtracts the data of the backward-prediction picture, that is, the P-picture, supplied by the motion-compensation circuit 64 from the data of macroblocks of the referenced picture supplied by the prediction-mode switching circuit 52 to find a difference, or an error in prediction. The difference data is transferred to the transmission line by way of the DCT-mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length-encoding circuit 58 and the transmission buffer 59.

In the forward & backward-prediction mode, wherein the switch 53d is set at the contact point d, on the other hand, the motion-compensation circuit 64 reads out the data of the I-picture, in this case, from the forward-prediction picture area 63a of the frame-memory unit 63 and the data of the P-picture from the backward-prediction picture area 63b, carrying out motion compensation on the data in accordance with motion vectors supplied to the motion-compensation circuit 64 by the motion-vector detecting circuit 50.

That is to say, informed of the forward & backward-prediction mode by the prediction determining circuit 54, the motion-compensation circuit 64 generates data of forward & backward-prediction pictures by reading the data of the I- and P-pictures from read addresses in the forward-prediction picture area 63a and the backward-prediction picture area 63b respectively. The read addresses are positions shifted away from the position of the macroblock currently output by the motion-vector detecting circuit 50 by distances corresponding to the motion vectors. In this case, there are two motion vectors, namely, the motion vectors for the forward and backward-prediction pictures.

The data of the forward & backward-prediction pictures read out by the motion-compensation circuit 64 is associated with the data of the referenced picture and supplied to a processor 53c employed in the processing unit 53. The processor 53c subtracts the data of the prediction pictures supplied by the motion-compensation circuit 64 from the data of macroblocks of the referenced picture supplied by the prediction-mode switching circuit 52 employed in the motion-vector detecting circuit 50 to find a difference, or an error in prediction. The difference data is transferred to the transmission line by way of the DCT-mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length-encoding circuit 58 and the transmission buffer 59.
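Gathering the three inter-picture cases, the residual computation performed by the processors 53a to 53c can be sketched as below. The rounded average of the two predictions in the forward & backward case follows usual MPEG practice; the function boundary and names are assumptions, and the helper motion_compensate() from the earlier sketch would supply the prediction blocks.

```python
import numpy as np

def prediction_residual(cur_block: np.ndarray, mode: str,
                        fwd_pred: np.ndarray = None,
                        bwd_pred: np.ndarray = None) -> np.ndarray:
    """Difference data handed on to the DCT-mode switching circuit 55.

    mode is one of 'intra', 'forward', 'backward', 'bidirectional';
    fwd_pred / bwd_pred are motion-compensated blocks read from the
    forward (63a) and backward (63b) prediction picture areas.
    """
    cur = cur_block.astype(np.int32)
    if mode == "intra":          # switch 53d at contact a: no prediction at all
        return cur
    if mode == "forward":        # contact b: subtract the forward prediction
        return cur - fwd_pred
    if mode == "backward":       # contact c: subtract the backward prediction
        return cur - bwd_pred
    # contact d: subtract the rounded average of the two prediction blocks
    return cur - (fwd_pred.astype(np.int32) + bwd_pred + 1) // 2
```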
Never used as a prediction picture for another frame, a B-picture is not stored in the frame-memory unit 63.

It should be noted that, typically, the forward-prediction picture area 63a and the backward-prediction picture area 63b of the frame-memory unit 63 are implemented as memory banks which can be switched from one to another. Thus, in an operation to read a forward-prediction picture, the frame-memory unit 63 is set to the forward-prediction picture area 63a. In an operation to read a backward-prediction picture, on the other hand, the frame-memory unit 63 is set to the backward-prediction picture area 63b.

While the above description is focused on luminance blocks, the color-difference blocks are also processed and transferred in the macroblock units shown in Figs. 8 to 11 in the same way as the luminance blocks. It should be noted that, as a motion vector in the processing of the color-difference blocks, the vertical and horizontal components of the motion vector of the associated luminance blocks are used with their magnitudes each cut in half.

Fig. 12 is a block diagram showing the configuration of the decoder 31 employed in the moving-picture encoding/decoding apparatus shown in Fig. 5. Encoded picture data transmitted through the transmission line implemented by the recording medium 3 is received by the decoder 31 by way of the playback circuit 30 of the moving-picture encoding/decoding apparatus and then stored temporarily in a reception buffer 81 employed in the decoder 31. Then, the picture data is supplied to a variable-length-decoding circuit 82 employed in a decoding circuit 90 of the decoder 31. The variable-length-decoding circuit 82 carries out variable-length decoding on the picture data read out from the reception buffer 81, outputting a motion vector, information on the prediction mode, a frame/field-prediction flag and a frame/field DCT flag to a motion-compensation circuit 87, and a quantization scale as well as decoded picture data to an inverse-quantization circuit 83.

The inverse-quantization circuit 83 carries out inverse quantization on the picture data supplied thereto by the variable-length-decoding circuit 82 at the quantization scale also received from the variable-length-decoding circuit 82. The inverse-quantization circuit 83 outputs the DCT-coefficient data obtained as a result of the inverse quantization to an IDCT circuit 84 for carrying out IDCT (inverse discrete cosine transformation), supplying the result of the IDCT to a processor 85.

In the case of an I-picture, the picture data supplied to the processor 85 by the IDCT circuit 84 is output as it is by the processor 85 to a frame-memory unit 86 to be stored in a forward-prediction picture area 86a of the frame-memory unit 86. The data of the I-picture stored in the forward-prediction picture area 86a will be used for generation of data of a forward-prediction picture for picture data of a P- or B-picture supplied to the processor 85 after the I-picture in the forward-prediction mode. The data of the I-picture is also output to a format converting circuit 32 employed in the moving-picture encoding/decoding apparatus shown in Fig. 5.
When the picture data supplied by the IDCT circuit 84 is the data of a P-picture, the picture data leading it by one frame, that is, the picture data of the I-picture, is read out by the motion-compensation circuit 87 from the forward-prediction picture area 86a of the frame-memory unit 86. In the motion-compensation circuit 87, the picture data of the I-picture undergoes motion compensation for a motion vector supplied by the variable-length-decoding circuit 82. The picture data completing the motion compensation is then supplied to the processor 85 to be added to the picture data supplied by the IDCT circuit 84, which is actually difference data. The result of the addition, that is, the data of the decoded P-picture, is supplied to the frame-memory unit 86 to be stored in the backward-prediction picture area 86b of the frame-memory unit 86 as described above. The data of the P-picture stored in the backward-prediction picture area 86b will be used for generation of data of a backward-prediction picture for picture data of a B-picture supplied to the processor 85 thereafter in the backward-prediction mode.

On the other hand, picture data of a P-picture processed by the signal encoding apparatus 1 in the intra-frame prediction mode is output by the processor 85 as it is, without undergoing any processing, to the backward-prediction picture area 86b, as is the case with an I-picture.

Since the P-picture will be displayed after a B-picture to be processed after the P-picture, at this point of time the P-picture is not output to the format converting circuit 32. Much like the encoder 18, the decoder 31 processes and transfers the P-picture prior to the B-picture even though the P-picture is displayed after the B-picture.

Picture data of a B-picture output by the IDCT circuit 84 is processed by the processor 85 in accordance with the information on the prediction mode supplied by the variable-length-decoding circuit 82. To be more specific, the processor 85 may output the picture data as it is in the intra-picture-prediction mode, as is the case with the I-picture, or process the picture data in the forward-prediction, backward-prediction or forward & backward-prediction mode. In the forward-prediction, backward-prediction or forward & backward-prediction mode, the motion-compensation circuit 87 reads out the data of the I-picture stored in the forward-prediction picture area 86a, the data of the P-picture stored in the backward-prediction picture area 86b, or the data of the I- and P-pictures stored in the forward-prediction picture area 86a and the backward-prediction picture area 86b of the frame-memory unit 86, respectively. The motion-compensation circuit 87 then carries out motion compensation on the picture data read out from the frame-memory unit 86 in accordance with a motion vector output by the variable-length-decoding circuit 82 to produce a prediction picture. In the case of the intra-picture-prediction mode described above, no prediction picture is generated because no prediction picture is required by the processor 85.

The prediction picture undergoing the motion compensation in the motion-compensation circuit 87 is added by the processor 85 to the picture data of the B-picture, strictly speaking difference data, output by the IDCT circuit 84. Data output by the processor 85 is then supplied to the format converting circuit 32, as is the case with the I-picture.
Since the data output by the processor 85 is the picture data of a B-picture, however, the data is not required for generation of a prediction picture. Thus, the data output by the processor 85 is not stored in the frame-memory unit 86.

After the data of the B-picture has been output, the data of the P-picture is read out by the motion-compensation circuit 87 from the backward-prediction picture area 86b and supplied to the processor 85. This time, however, the data is not subjected to motion compensation, since the data experienced the motion compensation before being stored in the backward-prediction picture area 86b.

The decoder 31 does not include counterpart circuits of the prediction-mode switching circuit 52 and the DCT-mode switching circuit 55 employed in the encoder 18 shown in Fig. 5 because the counterpart processing, that is, the processing to convert the signal formats with even and odd fields separated from each other as shown in Figs. 9 and 11 back into the signal formats with even and odd fields mixed with each other as shown in Figs. 8 and 10 respectively, is carried out by the motion-compensation circuit 87.

While the above description is focused on luminance signals, the color-difference signals are also processed and transferred in the macroblock units shown in Figs. 8 to 11 in the same way as the luminance signals. It should be noted that, as a motion vector in the processing of the color-difference signals, the vertical and horizontal components of the motion vector of the associated luminance signals are used with their magnitudes each cut in half.

Fig. 13 is a diagram showing the quality of encoded pictures in terms of an SNR (Signal-to-Noise Ratio). As shown in the figure, the quality of a picture depends much on the type of the picture. To be more specific, the transmitted I- and P-pictures have high quality, whereas the B-pictures have poorer quality. This deliberate variation in picture quality shown in Fig. 13 is a technique that exploits a characteristic of human visual perception: by varying the quality, the pictures appear better to the eye than in a case in which an average quality is used for all pictures. Control to vary the picture quality is executed by the quantization circuit 57 employed in the encoder 18 shown in Fig. 7.

Figs. 14 and 15 are diagrams showing the configuration of the transcoder 101 provided by the present invention, Fig. 15 showing the configuration of Fig. 14 in more detail. The transcoder 101 converts the GOP structure and the bit rate of an encoded video bitstream supplied to the video decoding apparatus 102 into a GOP structure and a bit rate desired by the operator or specified by the host computer, respectively. The function of the transcoder 101 is explained by assuming that, in actuality, three other transcoders, each having all but the same function as the transcoder 101, are connected at the front stage of the transcoder 101. In order to convert the GOP structure and the bit rate of the bitstream into one of a variety of GOP structures and one of a variety of bit rates respectively, transcoders of the first, second and third generations are connected in series, and the transcoder 101 of the fourth generation shown in Fig. 15 is connected behind the series connection of the transcoders of the first, second and third generations.
It should be noted that the transcoders of the first, second and third generations themselves are not shown in Fig. 15.

In the following description of the present invention, an encoding process carried out by the transcoder of the first generation is referred to as an encoding process of the first generation, and an encoding process carried out by the transcoder of the second generation connected after the transcoder of the first generation is referred to as an encoding process of the second generation. Likewise, an encoding process carried out by the transcoder of the third generation connected after the transcoder of the second generation is referred to as an encoding process of the third generation, and an encoding process carried out by the fourth transcoder connected after the transcoder of the third generation, that is, the transcoder 101 shown in Fig. 15, is referred to as an encoding process of the fourth generation. In addition, encoding parameters used in as well as obtained as a result of the encoding process of the first generation are referred to as encoding parameters of the first generation, and encoding parameters used in as well as obtained as a result of the encoding process of the second generation are referred to as encoding parameters of the second generation. Similarly, encoding parameters used in as well as obtained as a result of the encoding process of the third generation are referred to as encoding parameters of the third generation, and encoding parameters used in as well as obtained as a result of the encoding process of the fourth generation are referred to as encoding parameters of the fourth generation, or the current encoding parameters.

First, the encoded video bitstream ST(3rd) of the third generation, generated and supplied by the transcoder of the third generation to the transcoder 101 of the fourth generation shown in Fig. 15, is explained. The encoded video bitstream ST(3rd) of the third generation is an encoded video bitstream obtained as a result of the encoding process of the third generation carried out by the third transcoder provided at a stage preceding the transcoder 101 of the fourth generation. In the encoded video bitstream ST(3rd) of the third generation, the encoding parameters of the third generation generated in the encoding process of the third generation are described as a sequence_header() function, a sequence_extension() function, a group_of_pictures_header() function, a picture_header() function, a picture_coding_extension() function, a picture_data() function, a slice() function and a macroblock() function on the sequence layer, the GOP layer, the picture layer, the slice layer and the macroblock layer of the encoded video bitstream ST of the third generation, respectively.
The fact that the encoding parameters of the third generation used in the encoding process of the third generation are described in the encoded video bitstream ST of the third generation conforms to the MPEG-2 standard and does not reveal any novelty whatsoever.

A point unique to the transcoder 101 provided by the present invention is not the fact that the encoding parameters of the third generation are described in the encoded video bitstream ST of the third generation, but the fact that the encoding parameters of the first and second generations, obtained as results of the encoding processes of the first and second generations respectively, are included in the encoded video bitstream ST of the third generation. The encoding parameters of the first and second generations are described as history_stream() in a user_data area of the picture layer of the encoded video bitstream ST of the third generation. In the present invention, the history stream described in a user_data area of the picture layer of the encoded video bitstream ST of the third generation is referred to as "history information", and the parameters described as a history stream are referred to as "history parameters".

In another way of naming parameters, the encoding parameters of the third generation described in the encoded video bitstream ST of the third generation can also be referred to as current encoding parameters. In this case, the encoding parameters of the first and second generations described as history_stream() in the user-data area of the picture layer of the encoded video bitstream ST of the third generation are referred to as "past encoding parameters", since the encoding processes of the first and second generations are each a process carried out in the past as viewed from the encoding process of the third generation.

The reason why the encoding parameters of the first and second generations, obtained as results of the encoding processes of the first and second generations respectively, are also described in the encoded video bitstream ST(3rd) of the third generation in addition to the encoding parameters of the third generation as described above is to avoid deterioration of the picture quality even if the GOP structure and the bit rate of the encoded stream are changed repeatedly in transcoding processes. For example, a picture may be encoded as a P-picture in the encoding process of the first generation and, in order to change the GOP structure of the encoded video bitstream of the first generation, the picture is encoded as a B-picture in the encoding process of the second generation. In order to further change the GOP structure of the encoded video bitstream of the second generation, the picture is again encoded as a P-picture in the encoding process of the third generation. Since conventional encoding and decoding processes based on the MPEG standard are not 100% reversible, the picture quality deteriorates each time encoding and decoding processes are carried out, as is generally known. In such a case, encoding parameters such as the quantization scale, the motion vector and the prediction mode are not merely re-computed in the encoding process of the third generation. Instead, the encoding parameters such as the quantization scale, the motion vector and the prediction mode generated in the encoding process of the first generation are re-utilized. The encoding parameters newly generated in the encoding process of the first generation obviously each have a precision higher than the counterpart encoding parameters newly generated in the encoding process of the third generation. Thus, by re-utilizing the encoding parameters generated in the encoding process of the first generation, it is possible to lower the degree to which the picture quality deteriorates even if the encoding and decoding processes are carried out repeatedly.
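For concreteness, the history parameters of one generation might be modeled as a record like the following. The field names mirror the syntax elements named in this description, while the container itself and the sample values (which follow the P/B/P example above) are purely illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class HistoryParameters:
    """Past encoding parameters of one generation (picture-level fields and
    one macroblock's worth of macroblock-level fields, merged for brevity)."""
    picture_coding_type: str                  # 'I', 'P' or 'B'
    quantiser_scale_code: int                 # quantization-scale step size
    macroblock_type: str                      # intra/forward/backward/bidirectional
    motion_vector: Optional[Tuple[int, int]]  # None for intra macroblocks
    frame_field_motion_type: str              # 'frame' or 'field' prediction
    dct_type: str                             # 'frame' or 'field' DCT

# Sample values, chosen arbitrarily, for history decoded from history_stream():
history: Dict[int, HistoryParameters] = {
    1: HistoryParameters("P", 12, "forward", (2, 0), "frame", "frame"),
    2: HistoryParameters("B", 20, "forward", (3, -1), "field", "field"),
    3: HistoryParameters("P", 16, "forward", (2, 0), "frame", "frame"),
}
```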
The processing according to the present invention described above is exemplified by explaining in detail the decoding and encoding processes carried out by the transcoder 101 of the fourth generation shown in Fig. 15. The video decoding apparatus 102 decodes an encoded video signal included in the encoded video bitstream ST(3rd) of the third generation by using the encoding parameters of the third generation to generate decoded base-band digital video data. In addition, the video decoding apparatus 102 also decodes the encoding parameters of the first and second generations described as a history stream in the user-data area of the picture layer of the encoded video bitstream ST(3rd) of the third generation. The configuration and the operation of the video decoding apparatus 102 are described in detail by referring to Fig. 16 as follows.

Fig. 16 is a diagram showing a detailed configuration of the video decoding apparatus 102. As shown in the figure, the video decoding apparatus 102 comprises a buffer 81 for buffering a supplied encoded bitstream, a variable-length decoding circuit 112 for carrying out a variable-length decoding process on the encoded bitstream, an inverse-quantization circuit 83 for carrying out inverse quantization on data completing the variable-length decoding process in accordance with a quantization scale supplied by the variable-length decoding circuit 112, an IDCT circuit 84 for carrying out inverse discrete cosine transformation on DCT coefficients completing the inverse quantization, a processor 85 for carrying out motion-compensation processing, a frame-memory unit 86 and a motion-compensation circuit 87.

In order to decode the encoded video bitstream ST(3rd) of the third generation, the variable-length decoding circuit 112 extracts the encoding parameters of the third generation described on the picture layer, the slice layer and the macroblock layer of the encoded video bitstream ST(3rd) of the third generation. Typically, the encoding parameters of the third generation extracted by the variable-length decoding circuit 112 include picture_coding_type representing the type of the picture, quantiser_scale_code representing the size of the quantization-scale step, macroblock_type representing the prediction mode, motion_vector representing the motion vector, frame/field_motion_type indicating a frame-prediction mode or a field-prediction mode, and dct_type indicating a frame-DCT mode or a field-DCT mode. The quantiser_scale_code encoding parameter is supplied to the inverse-quantization circuit 83.
On the other hand, the rest of the encoding parameters, such as picture_coding_type, quantiser_scale_code, macroblock_type, motion_vector, frame/field_motion_type and dct_type, are supplied to the motion-compensation circuit 87.

The variable-length decoding circuit 112 extracts not only the above-mentioned encoding parameters required for decoding the encoded video bitstream ST(3rd) of the third generation, but also all other encoding parameters of the third generation to be transmitted, as history information of the third generation, to a transcoder of the fifth generation connected behind the transcoder 101 shown in Fig. 15, from the sequence layer, the GOP layer, the picture layer, the slice layer and the macroblock layer of the encoded video bitstream ST(3rd) of the third generation. It is needless to say that the above encoding parameters of the third generation, such as picture_coding_type, quantiser_scale_code, macroblock_type, motion_vector, frame/field_motion_type and dct_type used in the process of the third generation as described above, are also included in the history information of the third generation. The operator or the host computer determines in advance which encoding parameters are to be extracted as history information in accordance with the transmission capacity.

In addition, the variable-length decoding circuit 112 also extracts the user data described in the user-data area of the picture layer of the encoded video bitstream ST(3rd) of the third generation, supplying the user data to the history decoding apparatus 104.

The history decoding apparatus 104 extracts the encoding parameters of the first and second generations described as history information from the user data extracted from the picture layer of the encoded video bitstream ST(3rd) of the third generation. To put it concretely, by analyzing the syntax of the user data received from the variable-length decoding circuit 112, the history decoding apparatus 104 is capable of detecting the unique History_Data_Id described in the user data and using it for extracting converted_history_stream(). Then, by fetching the 1-bit marker bits (marker_bit) inserted into converted_history_stream() at predetermined intervals, the history decoding apparatus 104 is capable of acquiring history_stream(). By analyzing the syntax of history_stream(), the history decoding apparatus 104 is capable of extracting the encoding parameters of the first and second generations recorded in history_stream(). The configuration and the operation of the history decoding apparatus 104 will be described in detail later.

In order to eventually supply the encoding parameters of the first, second and third generations to the video encoding apparatus 106 for carrying out the encoding process of the fourth generation, the history information multiplexing apparatus 103 multiplexes the encoding parameters of the first, second and third generations into the base-band video data decoded by the video decoding apparatus 102. The history information multiplexing apparatus 103 receives the base-band video data from the video decoding apparatus 102, the encoding parameters of the third generation from the variable-length decoding circuit 112 employed in the video decoding apparatus 102, and the encoding parameters of the first as well as second generations from the history decoding apparatus 104, multiplexing the encoding parameters of the first, second and third generations into the base-band video data. The base-band video data with the encoding parameters of the first, second and third generations multiplexed therein is then supplied to the history information separating apparatus 105.
Next, the technique of multiplexing the encoding parameters of the first, second and third generations into the base-band video data is explained by referring to Figs. 17 and 18. Fig. 17 is a diagram showing a macroblock composed of a luminance-signal portion and a color-difference-signal portion, the portions each comprising 16 pixels × 16 pixels as defined in accordance with the MPEG standard. One of the portions comprising 16 pixels × 16 pixels is composed of the 4 sub-blocks Y[0], Y[1], Y[2] and Y[3] for the luminance signal, whereas the other portion is composed of the 4 sub-blocks Cr[0], Cr[1], Cb[0] and Cb[1] for the color-difference signal. The sub-blocks Y[0], Y[1], Y[2] and Y[3] and the sub-blocks Cr[0], Cr[1], Cb[0] and Cb[1] each comprise 8 pixels × 8 pixels.

Fig. 18 is a diagram showing a format of video data. Defined in accordance with Recommendation ITU-R BT.601, the format represents the so-called D1 format used in the broadcasting industry. Since the D1 format is standardized as a format for transmitting video data, 1 pixel of video data is expressed by 10 bits.

Base-band video data decoded in conformity with the MPEG standard is 8 bits in length. In the transcoder provided by the present invention, base-band video data decoded in conformity with the MPEG standard is transmitted by using the 8 high-order bits D9 to D2 of the 10-bit D1 format as shown in Fig. 18. Therefore, the 8-bit decoded video data leaves the 2 low-order bits D1 and D0 unallocated in the D1 format. The transcoder provided by the present invention utilizes the unallocated area comprising these unallocated bits for transmitting history information.

The data block shown in Fig. 18 is a data block for transmitting one pixel in each of the 8 sub-blocks of a macroblock. Since each sub-block actually comprises 64 (= 8 × 8) pixels as described above, 64 data blocks of the kind shown in Fig. 18 are required to transmit the volume of data of a macroblock comprising the 8 sub-blocks. As described above, the macroblock for the luminance and color-difference signals comprises 8 sub-blocks each composed of 64 (= 8 × 8) pixels. Thus, the macroblock for the luminance and color-difference signals comprises 8 × 64 pixels = 512 pixels. Since each pixel leaves 2 bits unallocated as described above, the macroblock for the luminance and color-difference signals has 512 pixels × 2 unallocated bits/pixel = 1,024 unallocated bits. History information of one generation is 256 bits in length. Thus, history information of four (= 1,024/256) previous generations can be superposed on one macroblock of video data for the luminance and color-difference signals. In the example shown in Fig. 18, history information of the first, second and third generations is superposed on the one macroblock of video data by utilizing the 2 low-order bits D1 and D0.

The history information separating apparatus 105 extracts the base-band video data from the 8 high-order bits of the data transmitted thereto in the D1 format, and the history information from the 2 low-order bits. In the example shown in Fig. 15, the history information separating apparatus 105 extracts the base-band video data from the transmitted data, supplying the base-band video data to the video encoding apparatus 106. At the same time, the history information separating apparatus 105 extracts the history information comprising the encoding parameters of the first, second and third generations from the transmitted data, supplying the history information to the video encoding apparatus 106 and the history encoding apparatus 107.
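A sketch of the superposition and extraction just described: pack each decoded 8-bit sample into bits D9 to D2 of a 10-bit D1 word and place two history bits in D1 and D0, then reverse the shifts on the way out. Which history bit travels in which pixel is a framing detail assumed here for illustration.

```python
def pack_d1_word(video_sample: int, history_bits: int) -> int:
    """Combine one 8-bit video sample (D9..D2) with 2 history bits (D1, D0)."""
    assert 0 <= video_sample < 256 and 0 <= history_bits < 4
    return (video_sample << 2) | history_bits

def unpack_d1_word(word: int):
    """Split a 10-bit D1 word back into the video sample and the history bits."""
    return word >> 2, word & 0b11

# One macroblock carries 512 pixels x 2 bits = 1,024 history bits,
# i.e. four 256-bit generations of history information.
word = pack_d1_word(0xA5, 0b10)
assert unpack_d1_word(word) == (0xA5, 0b10)
```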
The video encoding apparatus 106 encodes the base-band video data supplied thereto by the history information separating apparatus 105 into a bitstream having a GOP structure and a bit rate specified by the operator or the host computer. It should be noted that changing the GOP structure means changing the number of pictures included in the GOP, changing the number of P-pictures existing between two consecutive I-pictures, or changing the number of B-pictures existing between two consecutive I-pictures or between an I-picture and a P-picture.

In the embodiment shown in Fig. 15, the supplied base-band video data includes the history information of the encoding parameters of the first, second and third generations superposed thereon. Thus, the video encoding apparatus 106 is capable of carrying out the encoding process of the fourth generation by selective re-utilization of these pieces of history information so as to lower the degree to which the picture quality deteriorates.

Fig. 19 is a concrete diagram showing the configuration of an encoder 121 employed in the video encoding apparatus 106. As shown in the figure, the encoder 121 comprises a motion-vector detecting circuit 50, a prediction-mode switching circuit 52, a processing unit 53, a DCT-mode switching circuit 55, a DCT circuit 56, a quantization circuit 57, a variable-length coding circuit 58, a transmission buffer 59, an inverse-quantization circuit 60, an inverse-DCT circuit 61, a processor 62, a frame-memory unit 63 and a motion-compensation circuit 64. The functions of these circuits are all but the same as those employed in the encoder 18 shown in Fig. 7, making it unnecessary to repeat their explanation. The following description thus focuses on differences between the encoder 121 and the encoder 18 shown in Fig. 7.

The encoder 121 also includes a controller 70 for controlling the operations and the functions of the other aforementioned components composing the encoder 121. The controller 70 receives an instruction specifying a GOP structure from the operator or the host computer, determining the types of the pictures constituting the GOP structure. In addition, the controller 70 also receives information on a target bit rate from the operator or the host computer, controlling the quantization circuit 57 so as to set the bit rate output by the encoder 121 at the specified target bit rate.

Furthermore, the controller 70 also receives the history information of a plurality of generations output by the history information separating apparatus 105, encoding a referenced picture by re-utilization of this history information. The functions of the controller 70 are described in detail as follows.

First, the controller 70 forms a judgment as to whether or not the type of a referenced picture, determined from the GOP structure specified by the operator or the host computer, matches the picture type included in the history information. That is to say, the controller 70 forms a judgment as to whether or not the referenced picture has been encoded in the past in the same picture type as the specified picture type. The formation of this judgment can be exemplified by using the example shown in Fig. 15.
The controller 70 forms a judgment as to whether or not the picture type assigned to the referenced picture in the encoding process of the fourth generation is the same as the type of the referenced picture in the encoding process of the first generation, the type of the referenced picture in the encoding process of the second generation, or the type of the referenced picture in the encoding process of the third generation.

If the result of the judgment indicates that the picture type assigned to the referenced picture in the encoding process of the fourth generation is not the same as the type of the referenced picture in the encoding process of any previous generation, the controller 70 carries out a normal encoding process. This result of the judgment implies that this referenced picture was never subjected to the encoding processes of the first, second and third generations in the picture type assigned to the referenced picture in the encoding process of the fourth generation. If the result of the judgment indicates that the picture type assigned to the referenced picture in the encoding process of the fourth generation is the same as the type of the referenced picture in an encoding process of a previous generation, on the other hand, the controller 70 carries out a parameter-reuse encoding process by re-utilization of the parameters of the previous generation. This result of the judgment implies that this referenced picture was previously subjected to the encoding process of the first, second or third generation in the picture type assigned to the referenced picture in the encoding process of the fourth generation.

First, the normal encoding process carried out by the controller 70 is explained. In order to allow the controller 70 to make a decision as to which of the frame-prediction mode and the field-prediction mode should be selected, the motion-vector detecting circuit 50 detects a prediction error in the frame-prediction mode and a prediction error in the field-prediction mode, supplying the values of the prediction errors to the controller 70. The controller 70 compares the values with each other, selecting the prediction mode with the smaller prediction error. The prediction-mode switching circuit 52 then carries out signal processing corresponding to the prediction mode selected by the controller 70, supplying a signal obtained as a result of the processing to the processing unit 53. With the frame-prediction mode selected, the prediction-mode switching circuit 52 carries out signal processing so as to supply the luminance signal to the processing unit 53 as it is on receiving the signal, and carries out signal processing of the color-difference signal so as to mix odd-field lines with even-field lines, as described earlier by referring to Fig. 8. With the field-prediction mode selected, on the other hand, the prediction-mode switching circuit 52 carries out signal processing of the luminance signal so as to make the luminance sub-blocks Y[1] and Y[2] comprise odd-field lines and the luminance sub-blocks Y[3] and Y[4] comprise even-field lines, and carries out signal processing of the color-difference signal so as to make the four upper lines comprise odd-field lines and the four lower lines comprise even-field lines, as described earlier by referring to Fig. 9.
In addition, in order to allow the controller 70 to make a decision as to which of the intra-picture prediction mode, the forward-prediction mode, the backward-prediction mode and the forward & backward-prediction mode is to be selected, the motion-vector detecting circuit 50 generates a prediction error for each of the prediction modes, supplying the prediction errors to the controller 70. The controller 70 selects the mode having the smallest prediction error as the inter-picture prediction mode from among the forward-prediction mode, the backward-prediction mode and the forward & backward-prediction mode. Then, the controller 70 compares the smallest prediction error of the selected inter-picture prediction mode with the prediction error of the intra-picture prediction mode, selecting whichever of the two has the smaller prediction error as the prediction mode. To put it in detail, if the prediction error of the intra-picture prediction mode is found smaller, the intra-picture prediction mode is established. If the prediction error of the inter-picture prediction mode is found smaller, on the other hand, the selected forward-prediction, backward-prediction or forward & backward-prediction mode with the smallest prediction error is established. The controller 70 then controls the processor 53 and the motion-compensation circuit 64 to operate in the established prediction mode.

Furthermore, in order to allow the controller 70 to make a decision as to which of the frame-DCT mode and the field-DCT mode is to be selected, the DCT-mode switching circuit 55 converts the data of the four luminance sub-blocks into a signal with the format of the frame-DCT mode, comprising mixed odd- and even-field lines, and a signal with the format of the field-DCT mode, comprising separated odd- and even-field lines, supplying the signals resulting from the conversion to the DCT circuit 56. The DCT circuit 56 computes the encoding efficiency of DCT processing of the signal comprising mixed odd- and even-field lines and the encoding efficiency of DCT processing of the signal comprising separated odd- and even-field lines, supplying the computed encoding efficiencies to the controller 70. The controller 70 compares the encoding efficiencies with each other, selecting the DCT mode with the higher efficiency, and then controls the DCT-mode switching circuit 55 to work in the selected DCT mode.

The controller 70 also receives a target bit rate representing a desired bit rate specified by the operator or the host computer, and a signal representing the volume of data stored in the transmission buffer 59, or the size of the residual free area left in the buffer 59, generating feedback_q_scale_code for controlling the size of the quantization step used by the quantization circuit 57 on the basis of the target bit rate and the size of the residual free area left in the buffer 59. feedback_q_scale_code is a control signal generated in accordance with the size of the residual free area left in the transmission buffer 59 so as to prevent an overflow or an underflow from occurring in the buffer 59 and to cause a bitstream to be output from the transmission buffer 59 at the target bit rate. To put it concretely, if the volume of data buffered in the transmission buffer 59 becomes small, the size of the quantization step is reduced so that the number of bits of the picture to be encoded next increases. If the volume of data buffered in the transmission buffer 59 becomes large, on the other hand, the size of the quantization step is increased so that the number of bits of the picture to be encoded next decreases. It should be noted that the size of the quantization step is proportional to feedback_q_scale_code. That is to say, when feedback_q_scale_code is increased, the size of the quantization step rises; when feedback_q_scale_code is reduced, the size of the quantization step decreases.
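A minimal sketch of this feedback follows. The linear mapping from buffer occupancy to feedback_q_scale_code is an assumption made for illustration; only the direction of the control (fuller buffer, coarser quantization) is taken from the description above.

```python
def feedback_q_scale_code(buffer_occupancy: int, buffer_size: int,
                          q_min: int = 1, q_max: int = 31) -> int:
    """Map transmission-buffer occupancy to a quantization-scale code.

    A fuller buffer yields a larger code (coarser quantization, fewer bits);
    an emptier buffer yields a smaller code (finer quantization, more bits).
    The linear law and the 1..31 code range are illustrative assumptions.
    """
    fullness = buffer_occupancy / buffer_size        # 0.0 (empty) .. 1.0 (full)
    return round(q_min + fullness * (q_max - q_min))

print(feedback_q_scale_code(900_000, 1_000_000))  # nearly full -> large step (28)
print(feedback_q_scale_code(100_000, 1_000_000))  # nearly empty -> small step (4)
```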
Next, the parameter-reuse encoding process reutilizing encoding parameters, which characterizes the transcoder 101, is explained. In order to make the explanation easy to understand, it is assumed that the referenced picture was encoded as an I-picture in the encoding process of the first generation, as a P-picture in the encoding process of the second generation and as a B-picture in the encoding process of the third generation, and has to be encoded again as an I-picture in the current encoding process of the fourth generation. In this case, since the referenced picture was previously encoded in the encoding process of the first generation in the required picture type, namely the I-picture type assigned for the encoding process of the fourth generation, the controller 70 carries out an encoding process using the encoding parameters of the first generation instead of creating new encoding parameters from the supplied video data. Representatives of the encoding parameters to be reutilized in the encoding process of the fourth generation include quantiser_scale_code representing the size of the quantization-scale step, macroblock_type representing the prediction mode, motion_vector representing the motion vector, frame/field_motion_type indicating a frame-prediction mode or a field-prediction mode, and dct_type indicating a frame-DCT mode or a field-DCT mode. The controller 70 does not reutilize all encoding parameters received as the history information. Instead, the controller 70 reutilizes only encoding parameters judged to be appropriate for reutilization, and newly creates those encoding parameters for which the previous encoding parameters are not suitable for reutilization.

Next, the parameter-reuse encoding process is explained by focusing on differences from the normal encoding process described earlier. In the normal encoding process, the motion-vector detecting circuit 50 detects a motion vector of a referenced picture. In the parameter-reuse encoding process, on the other hand, the motion-vector detecting circuit 50 does not detect a motion vector of a referenced picture. Instead, the motion-vector detecting circuit 50 reutilizes motion_vector transferred as history information of the first generation. The reason motion_vector of the first generation is used is explained as follows. Since the base-band video data obtained as a result of a decoding process of the encoded bitstream of the third generation has experienced at least three encoding processes, its picture quality is obviously poor in comparison with the original video data. A motion vector detected from video data with a poor picture quality is not accurate.
To be more specific, a motion vector supplied to the transcoder 101 of the fourth generation as history information of the first generation certainly has a precision better than that of a motion vector detected in the encoding process of the fourth generation. By reutilization of the motion vector received as an encoding parameter of the first generation, the picture quality does not deteriorate further during the encoding process of the fourth generation. The controller 70 supplies the motion vector received as the history information of the first generation to the motion-compensation circuit 64 and the variable-length coding circuit 58 to be used as the motion vector of the referenced picture being encoded in the encoding process of the fourth generation.

In the normal processing, the motion-vector detecting circuit 50 detects a prediction error in the frame-prediction mode and a prediction error in the field-prediction mode in order to select either the frame-prediction mode or the field-prediction mode. In the parameter-reuse encoding processing, on the other hand, the motion-vector detecting circuit 50 detects neither the prediction error in the frame-prediction mode nor the prediction error in the field-prediction mode. Instead, frame/field_motion_type received as history information of the first generation, indicating either the frame-prediction mode or the field-prediction mode, is reutilized. This is because the prediction error of each prediction mode detected in the encoding process of the first generation has a precision higher than that of the prediction error of each prediction mode detected in the encoding process of the fourth generation, and a prediction mode selected on the basis of prediction errors each having a high precision allows a more optimum encoding process to be carried out. To put it concretely, the controller 70 supplies a control signal representing frame/field_motion_type received as history information of the first generation to the prediction-mode switching circuit 52. The control signal drives the prediction-mode switching circuit 52 to carry out signal processing according to the reutilized frame/field_motion_type.

In the normal processing, the motion-vector detecting circuit 50 also detects a prediction error in each of the intra-picture prediction mode, the forward-prediction mode, the backward-prediction mode and the forward & backward-prediction mode in order to select one of these prediction modes. In the processing based on reutilization of encoding parameters, on the other hand, the motion-vector detecting circuit 50 detects no prediction errors of these prediction modes. Instead, the one of the intra-picture prediction mode, the forward-prediction mode, the backward-prediction mode and the forward & backward-prediction mode indicated by macroblock_type received as history information of the first generation is selected. This is because the prediction error of each prediction mode detected in the process of the first generation has a precision higher than that of the prediction error of each prediction mode detected in the process of the fourth generation, and a prediction mode selected on the basis of prediction errors each having a high precision allows a more efficient encoding process to be carried out.
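Continuing the HistoryParameters sketch given earlier, the reuse decision described in the last few paragraphs might be expressed as follows. The function boundary and the preference for the earliest matching generation are assumptions consistent with the description, not the patented implementation itself.

```python
def parameters_for_encoding(assigned_type: str, history: dict, detect_fresh):
    """Reuse past parameters when some generation used the assigned picture type.

    assigned_type: picture type chosen for the current (fourth-generation) encode.
    history: {generation_number: HistoryParameters} as sketched earlier.
    detect_fresh: callable performing the normal motion-vector and mode detection.
    """
    for generation in sorted(history):        # prefer the earliest generation,
        past = history[generation]            # whose parameters are most precise
        if past.picture_coding_type == assigned_type:
            return {                          # reuse instead of re-detecting
                "motion_vector": past.motion_vector,
                "frame_field_motion_type": past.frame_field_motion_type,
                "macroblock_type": past.macroblock_type,
                "dct_type": past.dct_type,
            }
    return detect_fresh()                     # no match: normal encoding process
```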
Accordingly, the controller 70 selects the prediction mode indicated by macroblock_type included in the history information of the first generation and controls the processor 53 and the motion-compensation circuit 64 to operate in the selected prediction mode.

In the normal encoding process, the DCT-mode switching circuit 55 supplies both a signal converted into the format of the frame-DCT mode and a signal converted into the format of the field-DCT mode to the DCT circuit 56 for use in comparison of the encoding efficiency in the frame-DCT mode with the encoding efficiency in the field-DCT mode. In the processing based on reutilization of encoding parameters, on the other hand, neither the signal converted into the format of the frame-DCT mode nor the signal converted into the format of the field-DCT mode is generated. Instead, only processing in the DCT mode indicated by dct_type included in the history information of the first generation is carried out. To put it concretely, the controller 70 reutilizes dct_type included in the history information of the first generation and controls the DCT-mode switching circuit 55 to carry out signal processing in accordance with the DCT mode indicated by dct_type.

In the normal encoding process, the controller 70 controls the size of the quantization step used in the quantization circuit 57 on the basis of the target bit rate specified by the operator or the host computer and the size of the residual free area left in the transmission buffer 59. In the processing based on reutilization of encoding parameters, on the other hand, the controller 70 controls the size of the quantization step used in the quantization circuit 57 on the basis of the target bit rate specified by the operator or the host computer, the size of the residual free area left in the transmission buffer 59 and the past quantization scales included in the history information. It should be noted that, in the following description, a past quantization scale included in the history information is referred to as history_q_scale_code. In the history stream to be described later, the quantization scale is referred to as quantiser_scale_code.

First, the controller 70 generates feedback_q_scale_code representing the current quantization scale, as is the case with the normal encoding process. feedback_q_scale_code is set at such a value, determined from the size of the residual free area left in the transmission buffer 59, that neither overflow nor underflow occurs in the transmission buffer 59. Then, history_q_scale_code representing a previous quantization scale included in the history stream of the first generation is compared with feedback_q_scale_code representing the current quantization scale to determine which quantization scale is greater. It should be noted that a large quantization scale implies a large quantization step. If feedback_q_scale_code representing the current quantization scale is found greater than history_q_scale_code representing the largest among the plurality of previous quantization scales, the controller 70 supplies feedback_q_scale_code representing the current quantization scale to the quantization circuit 57. If history_q_scale_code representing the largest previous quantization scale is found greater than feedback_q_scale_code representing the current quantization scale, on the other hand, the controller 70 supplies history_q_scale_code representing the largest previous quantization scale to the quantization circuit 57.
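In code, this selection reduces to taking a maximum. The sketch below assumes the past quantiser_scale_code values have already been collected from the history streams.

```python
def select_q_scale_code(feedback_q_scale_code: int,
                        history_q_scale_codes: list) -> int:
    """Choose the quantization scale supplied to the quantization circuit 57.

    Returns the current (buffer-derived) code unless some past generation
    quantized this picture even more coarsely, in which case the largest
    past code is used; quantizing finer than the past cannot restore
    quality, it only spends bits.
    """
    largest_past = max(history_q_scale_codes, default=0)
    return max(feedback_q_scale_code, largest_past)

assert select_q_scale_code(10, [16, 12]) == 16   # past was coarser: keep 16
assert select_q_scale_code(20, [16, 12]) == 20   # buffer demands coarser: 20
```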
The controller 70 selects the largest among a plurality of previous quantization scales included in the history information and the current quantization scale derived from the size of the residual free area left in the transmission buffer 59. In other words, the controller 70 controls the quantization circuit 57 to carry out quantization using the largest quantization step among the quantization steps used in the current encoding process (or the encoding process of the fourth generation) and the previous encoding processes (or the encoding processes of the first, second and third generations). The reason is described as follows.

Assume that the bit rate of a stream generated in the encoding process of the third generation is 4 Mbps and the target bit rate set for the encoder 121 carrying out the encoding process of the fourth generation is 15 Mbps. Such a target bit rate, which is higher than the previous bit rate, cannot actually be achieved by simply decreasing the quantization step. This is because a current encoding process carried out at a small quantization step on a picture completing a previous encoding process performed at a large quantization step by no means improves the quality of the picture. That is to say, a current encoding process carried out at a small quantization step on a picture completing a previous encoding process performed at a large quantization step merely increases the number of resulting bits, but does not help to improve the quality of the picture. Thus, by using the largest quantization step among the quantization steps used in the current encoding process (or the encoding process of the fourth generation) and the previous encoding processes (or the encoding processes of the first, second and third generations), the most efficient encoding process can be carried out.

Next, the history decoding apparatus 104 and the history encoding apparatus 107 are explained by referring to Fig. 15. As shown in the figure, the history decoding apparatus 104 comprises a user-data decoder 201, a converter 202 and a history decoder 203. The user-data decoder 201 decodes user data supplied by the decoding apparatus 102, the converter 202 converts data output by the user-data decoder 201, and the history decoder 203 reproduces history information from data output by the converter 202.

On the other hand, the history encoding apparatus 107 comprises a history formatter 211, a converter 212 and a user-data formatter 213. The history formatter 211 formats encoding parameters of the three generations supplied thereto by the history information separating apparatus 105, the converter 212 converts data output by the history formatter 211, and the user-data formatter 213 formats data output by the converter 212 into a format of the user data.

The user-data decoder 201 decodes user data supplied by the decoding apparatus 102, supplying a result of the decoding to the converter 202. Details of the user data will be described later. At any rate, the user data denoted by user_data() comprises user_data_start_code and user_data. According to MPEG specifications, generation of 23 consecutive bits of "0" in the user data is prohibited in order to prevent start_code from being detected incorrectly. Since the history information may include 23 or more consecutive bits of "0", it is necessary to process and convert the history information into converted_history_stream() to be described later by referring to Fig. 38.
The component that carries out this conversion by insertion of a "1" bit is the converter 212 employed in the history encoding apparatus 107. On the other hand, the converter 202 employed in the history decoding apparatus 104 carries out a conversion of deletion of a bit, opposite to the conversion performed by the converter 212 employed in the history encoding apparatus 107.

The history decoder 203 generates history information from data output by the converter 202, outputting the information to the history information multiplexing apparatus 103.

On the other hand, the history formatter 211 employed in the history encoding apparatus 107 converts the encoding parameters of the three generations supplied thereto by the history information separating apparatus 105 into a format of history information. The format of history information can have a fixed length shown in Figs. 40 to 46 to be described later or a variable length shown in Fig. 47 also to be described later.

The history information formatted by the history formatter 211 is converted by the converter 212 into converted_history_stream() to prevent start_code of user_data() from being detected incorrectly as described above. That is to say, while history information may include 23 or more consecutive bits of "0", generation of 23 consecutive bits of "0" in user_data is prohibited by MPEG specifications. The converter 212 thus converts the history information by insertion of a "1" bit in accordance with the imposed prohibition.

The user-data formatter 213 adds Data_ID and user_data_start_code to converted_history_stream() supplied by the converter 212 on the basis of a syntax shown in Fig. 38 to be described later to generate user_data that can be inserted into video_stream, outputting the user_data to the encoding apparatus 106.

Fig. 20 is a block diagram showing a typical configuration of the history formatter 211. As shown in the figure, a code-language converter 301 and a code-length converter 305 receive item data and item-number data from the encoding-parameter separating circuit 105. The item data is encoding parameters, that is, encoding parameters transmitted this time as history information. The item-number data is information used for identifying a stream including the encoding parameters. Examples of the item-number data are a syntax name and the name of sequence_header to be described later. The code-language converter 301 converts an encoding parameter supplied thereto into a code language conforming to a specified syntax, outputting the code to a barrel shifter 302. The barrel shifter 302 barrel-shifts the code language supplied thereto by a shift amount corresponding to information supplied thereto by an address generating circuit 306, outputting the shifted code to a switch 303 in byte units. The switch 303, the contact position of which can be changed over by a bit select signal output by the address generating circuit 306, has as many pairs of contact poles as bits supplied by the barrel shifter 302. The switch 303 passes on the code supplied thereto by the barrel shifter 302 to a RAM unit 304 to be stored at a write address specified by the address generating circuit 306. The code language stored in the RAM unit 304 is read back from a read address specified by the address generating circuit 306 and supplied to the converter 212 provided at the following stage. If necessary, the code read out from the RAM unit 304 is again supplied to the RAM unit 304 by way of the switch 303 to be stored therein again.
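In software terms the formatter behaves like a bit packer: variable-length codes are appended to a bit buffer and emitted in byte units, which is what the barrel shifter, the switch and the RAM unit accomplish in hardware. The following is a rough functional analogy only; BitPacker is a hypothetical name, not a circuit of the specification.

    class BitPacker:
        """Accumulates variable-length codes and emits them byte by byte,
        loosely mirroring barrel shifter 302 / switch 303 / RAM unit 304."""
        def __init__(self):
            self.acc = 0        # bit accumulator
            self.nbits = 0      # number of valid bits held in the accumulator
            self.out = bytearray()

        def put(self, code: int, length: int):
            # Append `length` bits of `code`, MSB first.
            self.acc = (self.acc << length) | (code & ((1 << length) - 1))
            self.nbits += length
            while self.nbits >= 8:      # emit whole bytes, as the switch does
                self.nbits -= 8         # under address-generator control
                self.out.append((self.acc >> self.nbits) & 0xFF)

    packer = BitPacker()
    packer.put(0b101, 3)     # e.g. a 3-bit code language
    packer.put(0x3F, 6)      # followed by a 6-bit one
    # packer.out now holds the first complete byte: 0b10111111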
The code-length converter 305 determines the code length of the encoding parameter from the syntax and the encoding parameter supplied thereto, outputting information on the code length to the address generating circuit 306. The address generating circuit 306 generates the shift amount, the bit select signal, the write address and the read address described above in accordance with the information on the code length received from the code-length converter 305. The shift amount, the bit select signal and the addresses are supplied to the barrel shifter 302, the switch 303 and the RAM unit 304 respectively.

As described above, the history formatter 211 functions as a so-called variable-length encoder for carrying out a variable-length encoding process on an encoding parameter supplied thereto and for outputting a result of the variable-length encoding process.

Fig. 22 is a block diagram showing a typical configuration of the converter 212. In this typical configuration, 8-bit data is read out from a read address in a buffer-memory unit 320 provided between the history formatter 211 and the converter 212 and supplied to an 8-bit D-type flip-flop (D-FF) circuit 321 to be held therein. The read address is generated by a controller 326. The 8-bit data read out from the D-type flip-flop circuit 321 is supplied to a stuff circuit 323 and an 8-bit D-type flip-flop circuit 322 to be held therein. The 8-bit data read out from the D-type flip-flop circuit 322 is also supplied to the stuff circuit 323. To put it in detail, the 8-bit data read out from the D-type flip-flop circuit 321 is concatenated with the 8-bit data read out from the D-type flip-flop circuit 322 to form 16-bit parallel data which is then supplied to the stuff circuit 323.

The stuff circuit 323 inserts the code "1" into a stuff position specified by a signal to produce data with a total of 17 bits, which is supplied to a barrel shifter 324. The signal indicating a stuff position is supplied by the controller 326, and the operation to insert the code "1" is called stuffing.

The barrel shifter 324 barrel-shifts the data supplied thereto by the stuff circuit 323 by a shift amount indicated by a signal received from the controller 326, extracting 8-bit data out of the shifted one. The extracted data is then output to an 8-bit D-type flip-flop circuit 325 to be held therein. The data held in the D-type flip-flop circuit 325 is finally output to the user-data formatter 213 provided at the following stage by way of a buffer-memory unit 327. That is to say, the data is temporarily stored in the buffer-memory unit 327 provided between the converter 212 and the user-data formatter 213 at a write address generated by the controller 326.

Fig. 23 is a block diagram showing a typical configuration of the stuff circuit 323. In this configuration, the 16-bit data received from the D-type flip-flop circuits 321 and 322 is supplied to contact points a of switches 331-1 to 331-16. Pieces of data supplied to the contact points a of the switches 331-i, where i = 1 to 16, are also supplied to contact points c of the switches 331-i, where i = 0 to 15. The switches 331-i, where i = 1 to 16, are the switches adjacent to the switches 331-i, where i = 0 to 15, respectively on the MSB side (or the upper side shown in the figure) of the switches 331-i, where i = 0 to 15.
For example, the thirteenth piece of data from the LSB, supplied to the contact point a of the switch 331-13 on the MSB side of the adjacent switch 331-12, is also supplied to the contact point c of the switch 331-12. At the same time, the fourteenth piece of data from the LSB, supplied to the contact point a of the switch 331-14 on the MSB side of the adjacent switch 331-13, is also supplied to the contact point c of the switch 331-13.

However, the contact point a of the switch 331-0 provided on the lower side of the switch 331-1 is open because no switch is provided on the lower side of the switch 331-0, which corresponds to the LSB. In addition, the contact point c of the switch 331-16 provided on the upper side of the switch 331-15 is also open because no switch is provided on the upper side of the switch 331-16, which corresponds to the MSB.

The data "1" is supplied to contact points b of the switches 331-0 to 331-16. The decoder 332 changes over one of the switches 331-0 to 331-16 at a stuff position indicated by a stuff position signal received from the controller 326 to the contact point b in order to insert the data "1" into the stuff position. The switches 331 on the LSB side of the switch at the stuff position are changed over to their contact points c, whereas the switches 331 on the MSB side of the switch at the stuff position are changed over to their contact points a.

Fig. 23 shows an example in which the data "1" is inserted into the thirteenth bit from the LSB side. Thus, in this case, the switches 331-0 to 331-12 are changed over to their contact points c and the switches 331-14 to 331-16 are changed over to their contact points a. The switch 331-13 is changed over to the contact point b thereof.

With the configuration described above, the converter 212 shown in Fig. 22 converts a 22-bit code into a 23-bit code including the inserted data "1", outputting the 23-bit result of the conversion.
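Functionally, the switch array simply inserts one bit into a 16-bit word, widening it to 17 bits. A sketch of that operation follows; insert_bit is a hypothetical name used only for illustration.

    def insert_bit(word16: int, pos: int, bit: int = 1) -> int:
        # Keep the `pos` LSBs in place, shift the upper bits left by one
        # and insert `bit` in between: the 17-bit output of Fig. 23.
        low = word16 & ((1 << pos) - 1)    # bits routed via contact points c
        high = word16 >> pos               # bits routed via contact points a
        return (high << (pos + 1)) | (bit << pos) | low

    # Inserting "1" as the thirteenth bit from the LSB (bit position 12):
    assert insert_bit(0x0000, 12) == 1 << 12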
Fig. 24 is a diagram showing the timing of pieces of data output by a variety of portions of the converter 212 shown in Fig. 22. When the controller 326 employed in the converter 212 generates a read address shown in Fig. 24A synchronously with a clock signal for a byte of data, byte data stored at the read address is read out from the buffer-memory unit 320 and temporarily held by the D-type flip-flop circuit 321. Then, data of Fig. 24B read out from the D-type flip-flop circuit 321 is supplied to the stuff circuit 323 and the D-type flip-flop circuit 322 to be held therein. Data of Fig. 24C read out from the D-type flip-flop circuit 322 is concatenated with the data of Fig. 24B read out from the D-type flip-flop circuit 321, and data obtained as a result of the concatenation shown in Fig. 24D is supplied to the stuff circuit 323.

As a result, with the timing of a read address A1, the first byte D0 of the data of Fig. 24B read out from the D-type flip-flop circuit 321 is supplied to the stuff circuit 323 as a first byte of the data shown in Fig. 24D. Then, with the timing of a read address A2, the second byte D1 of the data of Fig. 24B read out from the D-type flip-flop circuit 321, concatenated with the first byte D0 of the data of Fig. 24C read out from the D-type flip-flop circuit 322, is supplied to the stuff circuit 323 as a second two bytes of the data shown in Fig. 24D. Subsequently, with the timing of a read address A3, the third byte D2 of the data of Fig. 24B read out from the D-type flip-flop circuit 321, concatenated with the second byte D1 of the data of Fig. 24C read out from the D-type flip-flop circuit 322, is supplied to the stuff circuit 323 as a third two bytes of the data shown in Fig. 24D, and so on.

The stuff circuit 323 receives a signal of Fig. 24E indicating a stuff position, into which the data "1" is to be inserted, from the controller 326. The decoder 332 employed in the stuff circuit 323 changes over one of the switches 331-0 to 331-16 at the stuff position to the contact point b thereof. The switches 331 on the LSB side of the switch at the stuff position are changed over to their contact points c, whereas the switches 331 on the MSB side of the switch at the stuff position are changed over to their contact points a. As a result, the stuff circuit 323 inserts the data "1" into the stuff position, outputting data with the inserted data "1" as shown in Fig. 24F.

The barrel shifter 324 barrel-shifts the data supplied thereto by the stuff circuit 323 by a shift amount indicated by a shift signal of Fig. 24G received from the controller 326, outputting a shifted signal shown in Fig. 24H. The shifted signal is held temporarily in the D-type flip-flop circuit 325 before being output to a later stage as shown in Fig. 24I.

The data output from the D-type flip-flop circuit 325 includes the data "1" inserted into a position following 22-bit data. Thus, the number of consecutive "0" bits never exceeds 22 even if all bits between the data "1" and the next data "1" are "0".

Fig. 25 is a block diagram showing a typical configuration of the converter 202. The components ranging from a D-type flip-flop circuit 341 to a controller 346 employed in the converter 202 correspond to the components ranging from the D-type flip-flop circuit 321 to the controller 326 employed in the converter 212 shown in Fig. 22, which indicates that the configuration of the former is basically the same as the configuration of the latter. The converter 202 is different from the converter 212 in that, in the case of the former, a delete circuit 343 is employed in place of the stuff circuit 323 of the latter. Otherwise, the configuration of the converter 202 is the same as that of the converter 212 shown in Fig. 22.

The delete circuit 343 employed in the converter 202 deletes a bit at a delete position indicated by a signal output by the controller 346. The delete position corresponds to the stuff position into which the stuff circuit 323 shown in Fig. 23 inserts the data "1". The remaining operations of the converter 202 are the same as those carried out by the converter 212 shown in Fig. 22.

Fig. 26 is a block diagram showing a typical configuration of the delete circuit 343. In this configuration, the 15 bits on the LSB side of the 16-bit data received from the D-type flip-flop circuits 341 and 342 are supplied to contact points a of switches 351-0 to 351-14. Pieces of data supplied to the contact points a of the switches 351-i, where i = 1 to 14, are also supplied to contact points b of the switches 351-i, where i = 0 to 13. The switches 351-i, where i = 1 to 14, are the switches adjacent to the switches 351-i, where i = 0 to 13, respectively on the MSB side (or the upper side shown in the figure) of the switches 351-i, where i = 0 to 13.
For example, the thirteenth piece of data from the LSB, supplied to the contact point a of the switch 351-13 on the MSB side of the adjacent switch 351-12, is also supplied to the contact point b of the switch 351-12. At the same time, the fourteenth piece of data from the LSB, supplied to the contact point a of the switch 351-14 on the MSB side of the adjacent switch 351-13, is also supplied to the contact point b of the switch 351-13. The decoder 352 deletes a bit at a delete position indicated by a signal output by the controller 346, outputting the remaining 15-bit data excluding the deleted bit.

Fig. 26 shows a state in which the thirteenth input bit from the LSB (input bit 12) is deleted. Thus, in this case, the switches 351-0 to 351-11 are changed over to their contact points a to output the twelve input bits from the LSB (bit 0) to the twelfth bit (bit 11) as they are. On the other hand, the switches 351-12 to 351-14 are changed over to their contact points b to pass on the fourteenth to sixteenth input bits (input bits 13 to 15) as the thirteenth to fifteenth output bits (output bits 12 to 14) respectively. In this state, the thirteenth input bit from the LSB (input bit 12) is connected to none of the output lines.

16-bit data is supplied to the stuff circuit 323 shown in Fig. 23 and the delete circuit 343 shown in Fig. 26. This is because the data supplied to the stuff circuit 323 is a result of concatenation of pieces of data output by the 8-bit D-type flip-flop circuits 321 and 322 employed in the converter 212 shown in Fig. 22. By the same token, the data supplied to the delete circuit 343 is a result of concatenation of pieces of data output by the 8-bit D-type flip-flop circuits 341 and 342 employed in the converter 202 shown in Fig. 25. The barrel shifter 324 employed in the converter 212 shown in Fig. 22 barrel-shifts the 17-bit data supplied thereto by the stuff circuit 323 by a shift amount indicated by a signal received from the controller 326, finally extracting data of typically 8 bits out of the shifted one. Likewise, the barrel shifter 344 employed in the converter 202 shown in Fig. 25 barrel-shifts the 15-bit data supplied thereto by the delete circuit 343 by a shift amount indicated by a signal received from the controller 346, finally extracting data of 8 bits out of the shifted one.

Fig. 21 is a block diagram showing a typical configuration of the history decoder 203 for decoding data completing the conversion process in the converter 202. Data of an encoding parameter supplied to the history decoder 203 by the converter 202 is fed to a RAM unit 311 to be stored therein at a write address specified by an address generating circuit 315. The address generating circuit 315 also outputs a read address with predetermined timing to the RAM unit 311. At that time, data stored at the read address in the RAM unit 311 is output to a barrel shifter 312. The barrel shifter 312 barrel-shifts the data supplied thereto by a shift amount corresponding to information supplied thereto by the address generating circuit 315, outputting the shifted data to inverse-code-length converters 313 and 314.

The inverse-code-length converters 313 and 314 also receive the name of a syntax of a stream including the encoding parameters from the converter 202.
The inverse-code-length converter 313 determines the code length of the encoding parameter from the data, or the code language, supplied thereto in accordance with the syntax, outputting information on the code length to the address generating circuit 315.

On the other hand, the inverse-code-length converter 314 decodes, or reversely encodes, the data supplied by the barrel shifter 312 on the basis of the syntax, outputting a result of the decoding process to the encoding-parameter multiplexing apparatus 103.

In addition, the inverse-code-length converter 314 also extracts information required for identifying what code language is included, that is, information required for determining a delimiter of a code, outputting the information to the address generating circuit 315. The address generating circuit 315 generates write and read addresses and a shift amount based on this information and the code length received from the inverse-code-length converter 313, outputting the write and read addresses to the RAM unit 311 and the shift amount to the barrel shifter 312.

Fig. 27 is a block diagram showing another typical configuration of the converter 212. A counter 361 employed in this configuration counts the number of consecutive "0" bits of data supplied thereto, outputting the result of the counting to the controller 326. When the number of consecutive "0" bits of data supplied to the counter 361 reaches 22, the controller 326 outputs a signal representing a stuff position to the stuff circuit 323. At the same time, the controller 326 resets the counter 361, allowing the counter 361 to start counting the number of consecutive "0" bits again from 0. The rest of the configuration and the operation are the same as those of the converter 212 shown in Fig. 22.

Fig. 28 is a block diagram showing another typical configuration of the converter 202. A counter 371 employed in this configuration counts the number of consecutive "0" bits of data supplied thereto, outputting the result of the counting to the controller 346. When the number of consecutive "0" bits of data supplied to the counter 371 reaches 22, the controller 346 outputs a signal representing a delete position to the delete circuit 343. At the same time, the controller 346 resets the counter 371, allowing the counter 371 to start counting the number of consecutive "0" bits again from 0. The rest of the configuration and the operation are the same as those of the converter 202 shown in Fig. 25.

As described above, in the typical configurations shown in Figs. 27 and 28, the data "1" is inserted as a marker bit and deleted respectively when a predetermined pattern comprising a predetermined number of consecutive "0" bits is detected by the counter. The typical configurations shown in Figs. 27 and 28 allow processing to be carried out with a higher degree of efficiency than the configurations shown in Figs. 22 and 25 respectively.
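At the bitstream level, the counter-based variant amounts to the following round trip. This is an illustrative sketch only; stuff_bits and unstuff_bits are hypothetical names, and the bits are represented as a list of 0/1 integers for clarity.

    def stuff_bits(bits):
        # Insert a "1" marker whenever 22 consecutive "0" bits have passed,
        # as the counter 361 / controller 326 pair does in Fig. 27.
        out, zeros = [], 0
        for b in bits:
            out.append(b)
            zeros = zeros + 1 if b == 0 else 0
            if zeros == 22:
                out.append(1)   # marker bit
                zeros = 0
        return out

    def unstuff_bits(bits):
        # Remove the marker that follows every run of 22 "0" bits (Fig. 28).
        out, zeros, skip = [], 0, False
        for b in bits:
            if skip:
                skip = False    # drop the marker bit inserted by stuff_bits
                continue
            out.append(b)
            zeros = zeros + 1 if b == 0 else 0
            if zeros == 22:
                zeros = 0
                skip = True
        return out

    data = [0] * 30
    assert unstuff_bits(stuff_bits(data)) == data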
Fig. 29 is a block diagram showing a typical configuration of the user-data formatter 213. In this configuration, when a controller 383 outputs a read address to a buffer memory provided between the converter 212 and the user-data formatter 213, data is output from the read address and supplied to a contact point a of a switch 382 employed in the user-data formatter 213. It should be noted that the buffer memory itself is not shown in the figure. In a ROM unit 381, data required for generating user_data(), such as a user-data start code and a data ID, is stored. The controller 383 changes over the switch 382 to the contact point a or a contact point b with predetermined timing in order to allow the switch 382 to select either the data stored in the ROM unit 381 or the data supplied by the converter 212 and pass on the selected data. In this way, data with a format of user_data() is output to the encoding apparatus 106.

It is worth noting that the user-data decoder 201 can be implemented by outputting input data by way of a switch for deleting inserted data read out from a ROM unit like the ROM unit 381 employed in the user-data formatter 213 shown in Fig. 29. The configuration of the user-data decoder 201 is shown in none of the figures.

Fig. 30 is a block diagram showing the state of an implementation in which a plurality of transcoders 101-1 to 101-N are connected in series for use in a video editing studio service. The encoding-parameter multiplexing apparatuses 103-i employed in the transcoders 101-i, where i = 1 to N, each write the most recent encoding parameters used by itself over a region for storing the least recent encoding parameters in an area used for recording encoding parameters. As a result, base-band picture data includes encoding parameters, or generation history information, of the four most recent generations associated with the macroblocks of the picture data.

The variable-length-encoding circuit 58 employed in the encoder 121-i of Fig. 19, employed in the encoding apparatus 106-i, encodes video data received from the quantization circuit 57 on the basis of current encoding parameters received from the encoding-parameter separating circuit 105-i. As a result, the current encoding parameters are multiplexed typically in picture_header() included in a bitstream generated by the variable-length-encoding circuit 58.

In addition, the variable-length-encoding circuit 58 also multiplexes user data, which includes generation history information and is received from the history encoding apparatus 107-i, into the output bitstream. This multiplexing process is not embedding processing like the one shown in Fig. 18, but multiplexing of the user data into the bitstream. Then, the bitstream output by the encoding apparatus 106-i is supplied to the transcoder 101-(i+1) at the following stage by way of the SDTI 108-i.

The configurations of the transcoders 101-i and 101-(i+1) are the same as the one shown in Fig. 15. The processing carried out by them can thus be explained by referring to Fig. 15. If it is desired to change the current picture type from the I-picture to the P- or B-picture in an encoding operation using the history of actual encoding parameters, the history of previous encoding parameters is searched for those of a P- or B-picture used in the previous encoding processes. If a history of a P- or B-picture is found, its parameters, including a motion vector, are used to change the picture type. If a history of a P- or B-picture is not found, on the other hand, the modification of the picture type without motion detection is given up. It is needless to say that the picture type can be changed even if an encoding parameter of a P- or B-picture is not found in the history, provided that motion detection is carried out.
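The generation history carried with the picture data is thus bounded: each new encoding writes its parameters over those of the least recent generation, so at most four generations survive. A minimal sketch of that bookkeeping follows; record_generation is a hypothetical name.

    from collections import deque

    MAX_GENERATIONS = 4   # the four most recent generations are retained

    def record_generation(history: deque, params: dict) -> None:
        # Writing the most recent parameters over the region holding the
        # least recent ones behaves like a fixed-size FIFO per macroblock.
        if len(history) == MAX_GENERATIONS:
            history.popleft()          # the least recent generation is lost
        history.append(params)

    mb_history = deque()
    for gen in range(1, 6):            # five encodings of the same macroblock
        record_generation(mb_history, {"generation": gen})
    assert [h["generation"] for h in mb_history] == [2, 3, 4, 5]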
In the format shown in Fig. 18, encoding parameters of four generations are embedded in picture data. As an alternative, parameters for each of the I-, P- and B-pictures can also be embedded in a format like the one shown in Fig. 31. In the example shown in Fig. 31, encoding parameters, or picture history information, of one generation are recorded for each picture type in an operation to encode the same macroblocks, reflecting changes in picture type that occurred in the previous encoding processes. In this case, the decoder 111 shown in Fig. 16 outputs encoding parameters of one generation for each of the I-, P- and B-pictures to be supplied to the encoder 121 shown in Fig. 19, instead of encoding parameters of the most recent and the first, second and third preceding generations.

In addition, since the area of Cb[1][x] and Cr[1][x] is not used, the present invention can also be applied to picture data of a 4:2:0 format which does not use the area of Cb[1][x] and Cr[1][x]. In the case of this example, the decoding apparatus 102 fetches encoding parameters in the course of decoding and identifies the picture type. The decoding apparatus 102 writes, or multiplexes, the encoding parameters into a location corresponding to the picture type of the picture signal and outputs the multiplexed picture signal to the encoding-parameter separating apparatus 105. The encoding-parameter separating apparatus 105 separates the encoding parameters from the picture data and, by using the separated encoding parameters, is capable of carrying out a post-decoding-encoding process while changing the picture type, by taking a picture type to be changed and previous encoding parameters supplied thereto into consideration.

The transcoder 101 also has another operation, different from the parameter-reuse encoding processing, for determining changeable picture types in the case where the controller 70 does not allow the motion-vector detecting circuit to operate.

This other operation is explained by referring to a flowchart shown in Fig. 32. As shown in Fig. 32, the flowchart begins with a step S1 at which encoding parameters, or picture history information, of one generation for each picture type are supplied to the controller 70 of the encoder 121. The flow of the processing then goes on to a step S2 at which the encoding-parameter separating apparatus 105 forms a judgment as to whether or not the picture history information includes encoding parameters used in a change to a B-picture. If the picture history information includes encoding parameters used in a change to a B-picture, the flow of the processing proceeds to a step S3.

At the step S3, the controller 70 forms a judgment as to whether or not the picture history information includes encoding parameters used in a change to a P-picture. If the picture history information includes encoding parameters used in a change to a P-picture, the flow of the processing proceeds to a step S4.

At the step S4, the controller 70 determines that the changeable picture types are the I-, P- and B-pictures. If the outcome of the judgment formed at the step S3 indicates that the picture history information does not include encoding parameters used in a change to a P-picture, on the other hand, the flow of the processing proceeds to a step S5.

At the step S5, the controller 70 determines that the changeable picture types are the I- and B-pictures. In addition, the controller 70 determines that a pseudo-change to a P-picture is also possible by carrying out special processing using only a forward-prediction vector, and no backward-prediction vector, included in the history information of the B-picture.
If the outcome of the judgment formed at the step S2 indicates that the picture history information does not include encoding parameters used in a change to a B-picture, on the other hand, the flow of the processing proceeds to a step S6.

At the step S6, the controller 70 forms a judgment as to whether or not the picture history information includes encoding parameters used in a change to a P-picture. If the picture history information includes encoding parameters used in a change to a P-picture, the flow of the processing proceeds to a step S7.

At the step S7, the controller 70 determines that the changeable picture types are the I- and P-pictures. In addition, the encoding-parameter separating apparatus 105 determines that a change to a B-picture is also possible by carrying out special processing using only a forward-prediction vector, and no backward-prediction vector, included in the history information of the P-picture.

If the outcome of the judgment formed at the step S6 indicates that the picture history information does not include encoding parameters used in a change to a P-picture, on the other hand, the flow of the processing proceeds to a step S8. At the step S8, the controller 70 determines that the only changeable picture type is the I-picture because there is no motion vector. An I-picture cannot be changed to any picture type other than the I-picture.

After completion of the step S4, S5, S7 or S8, the flow of the processing goes on to a step S9 at which the controller 70 notifies the user of the changeable picture types on a display unit, which is shown in none of the figures.
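The determination of the steps S1 to S9 can be condensed into a few lines. The sketch below is illustrative only (changeable_picture_types is a hypothetical name); it returns the types the controller 70 would report at the step S9.

    def changeable_picture_types(history: dict) -> set:
        # `history` maps a picture type ("I", "P", "B") to the encoding
        # parameters saved for a change to that type, if any.
        has_b = "B" in history       # judgment of step S2
        has_p = "P" in history       # judgments of steps S3 and S6
        if has_b and has_p:
            return {"I", "P", "B"}   # step S4
        if has_b:
            # Step S5: a pseudo-change to a P-picture is possible using only
            # the forward-prediction vector of the B-picture history.
            return {"I", "B", "P (forward-only)"}
        if has_p:
            # Step S7: likewise, a change to a B-picture using only the
            # forward-prediction vector of the P-picture history.
            return {"I", "P", "B (forward-only)"}
        return {"I"}                 # step S8: no motion vector is available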
Fig. 33 is a diagram showing examples of changes in picture type. When the picture type is changed, the number of frames composing a GOP structure is changed. To put it in detail, in these examples, a long GOP structure of the first generation is changed to a short GOP structure at the second generation, and then the GOP structure of the second generation is changed back to a long GOP structure at the third generation. The long GOP structure has N = 15 and M = 3, where N is the number of frames constituting the GOP and M is the period of the appearance of the P-picture expressed in terms of frames. On the other hand, the short GOP structure has N = 1 and M = 1, where M is the period of the appearance of the I-picture expressed in terms of frames. It should be noted that a dashed line shown in the figure represents a boundary between two adjacent GOPs.

When the GOP structure of the first generation is changed to the GOP structure of the second generation, the picture types of all the frames can be changed to the I-picture, as is obvious from the explanation of the processing to determine changeable picture types given above. When these picture types are changed, all motion vectors which were processed when the source video signal was encoded in the first generation are saved, or left. Then, the short GOP structure is changed back to the long GOP structure at the third generation. That is, even if picture types are changed, the motion vector for each type which was saved when the source video signal was encoded at the first generation is reutilized, allowing a change back to the long GOP structure to be made with deterioration of the picture quality avoided.

Fig. 34 is a diagram showing other examples of changes in picture type. In the case of these examples, changes are made from a long GOP structure with N = 14 and M = 2 to a short GOP structure with N = 2 and M = 2 at the second generation, then to a short GOP structure with N = 1 and M = 1 at the third generation, and finally to a random GOP with an undetermined frame count N at the fourth generation.

Also in these examples, a motion vector for each picture type which was processed when the source video signal was encoded as the first generation is saved until the fourth generation. As a result, by reutilizing the saved encoding parameters, deterioration of the picture quality can be reduced to a minimum even if the picture types are changed in a complicated manner as shown in Fig. 34. In addition, if the quantization scale of the saved encoding parameters is utilized effectively, an encoding process which entails only little deterioration of the picture quality can be implemented.

The reutilization of the quantization scale is explained by referring to Fig. 35. Fig. 35 is a diagram showing a case in which a certain reference frame is always encoded as an I-picture from a first generation to a fourth generation. Only the bit rate is changed, from 4 Mbps for the first generation to 18 Mbps for the second generation, then to 50 Mbps for the third generation and finally back to 4 Mbps for the fourth generation.

When the bit rate of 4 Mbps of the bitstream generated at the first generation is changed to a bit rate of 18 Mbps at the second generation, the picture quality is not improved even if a post-decoding-encoding process is carried out at a fine quantization scale accompanying the increase in bit rate. This is because data quantized previously at a coarse quantization step cannot be restored. Thus, quantization at a fine quantization step accompanying a rise in bit rate in the course of processing as shown in Fig. 35 merely increases the amount of information and does not lead to an improvement of the picture quality. For this reason, if control is executed to sustain the coarsest, or largest, quantization scale used previously, the encoding process can be implemented least wastefully and most efficiently.

As described above, when the bit rate is changed, by making use of the previous history of the quantization scale, the encoding process can be implemented most effectively.

This quantization control processing is explained by referring to a flowchart shown in Fig. 36. As shown in the figure, the flowchart begins with a step S11 at which the controller 70 forms a judgment as to whether or not the input picture history information includes an encoding parameter of a picture type to be changed from now on. If the outcome of the judgment indicates that the input picture history information includes an encoding parameter of the picture type to be changed, the flow of the processing goes on to a step S12.

At the step S12, the controller 70 extracts, for comparison, history_q_scale_code from the encoding parameters in question included in the picture history information.

The flow of the processing then proceeds to a step S13 at which the controller 70 calculates a candidate value of feedback_q_scale_code based on the data fullness of the transmission buffer 59.

The flow of the processing then proceeds to a step S14 at which the controller 70 forms a judgment as to whether or not history_q_scale_code is larger, or coarser, than feedback_q_scale_code.
If the outcome of the judgment indicates that history_q_scale_code is larger, or coarser, than feedback_q_scale_code, the flow of the processing continues to a step S15.

At the step S15, the controller 70 supplies history_q_scale_code as a quantization scale to the quantization circuit 57, which then carries out a quantization process by using history_q_scale_code.

The flow of the processing then proceeds to a step S16 at which the controller 70 forms a judgment as to whether or not all macroblocks included in the frame have been quantized. If the outcome of the judgment indicates that all the macroblocks included in the frame have not been quantized yet, the flow of the processing goes back to the step S13 to carry out the pieces of processing of the steps S13 to S16 repeatedly until all the macroblocks included in the frame are quantized.

If the outcome of the judgment formed at the step S14 indicates that history_q_scale_code is not larger than feedback_q_scale_code, that is, history_q_scale_code is finer than feedback_q_scale_code, on the other hand, the flow of the processing continues to a step S17.

At the step S17, the controller 70 supplies feedback_q_scale_code as a quantization scale to the quantization circuit 57, which then carries out a quantization process by using feedback_q_scale_code.

If the outcome of the judgment formed at the step S11 indicates that the input picture history information does not include an encoding parameter of the picture type to be changed, on the other hand, the flow of the processing goes on to a step S18.

At the step S18, the quantization circuit 57 receives the candidate value of feedback_q_scale_code from the controller 70.

The flow of the processing then proceeds to a step S19 at which the quantization circuit 57 carries out a quantization process by using feedback_q_scale_code.

The flow of the processing then proceeds to a step S20 at which the controller 70 forms a judgment as to whether or not all macroblocks included in the frame have been quantized. If the outcome of the judgment indicates that all the macroblocks included in the frame have not been quantized yet, the flow of the processing goes back to the step S18 to carry out the pieces of processing of the steps S18 to S20 repeatedly until all the macroblocks included in the frame are quantized.

The transcoder 101 explained earlier by referring to Fig. 15 supplies previous encoding parameters of the first, second and third generations to the video encoding apparatus 106 by multiplexing these parameters in base-band video data. In the present invention, however, a technology of multiplexing previous encoding parameters in base-band video data is not absolutely required. For example, previous encoding parameters can be transferred by using a transmission line, such as a data transfer bus, provided separately from that for the base-band video data, as shown in Fig. 37.

The video decoding apparatus 102, the history decoding apparatus 104, the video encoding apparatus 106 and the history encoding apparatus 107 shown in Fig. 37 have entirely the same configurations and functions as the video decoding apparatus 102, the history decoding apparatus 104, the video encoding apparatus 106 and the history encoding apparatus 107, respectively, which have been described earlier by referring to Fig. 15.
The variable-length decoding circuit 112 employed in the video decoding apparatus 102 extracts encoding parameters of the third generation from the sequence layer, the GOP layer, the picture layer, the slice layer and the macroblock layer of the encoded video bitstream ST(3rd) of the third generation, supplying the parameters to the history encoding apparatus 107 and the controller 70 employed in the video encoding apparatus 106. The history encoding apparatus 107 converts the encoding parameters of the third generation supplied thereto into converted_history_stream(), which can be described in the user-data area of the picture layer, supplying converted_history_stream() to the variable-length coding circuit 58 employed in the video encoding apparatus 106 as user data.

In addition, the variable-length decoding circuit 112 also extracts user data (user_data) including previous encoding parameters of the first and second generations from the user-data area of the picture layer of the encoded video bitstream ST(3rd) of the third generation, supplying the user_data to the history decoding apparatus 104 and the variable-length coding circuit 58 employed in the video encoding apparatus 106. The history decoding apparatus 104 extracts the encoding parameters of the first and second generations from a history stream of the user data, which is described in the user-data area as converted_history_stream(), supplying the parameters to the controller 70 employed in the video encoding apparatus 106.

The controller 70 of the video encoding apparatus 106 controls the encoding process carried out by the video encoding apparatus 106 on the basis of the encoding parameters of the first and second generations received from the history decoding apparatus 104 and the encoding parameters of the third generation received from the video decoding apparatus 102.

In the meantime, the variable-length coding circuit 58 employed in the video encoding apparatus 106 receives the user data (user_data) including encoding parameters of the first and second generations from the video decoding apparatus 102 and the user data (user_data) including encoding parameters of the third generation from the history encoding apparatus 107, describing these pieces of user_data in the user-data area of the picture layer of an encoded video bitstream of the fourth generation as history information.

Fig. 38 is a diagram showing a syntax used for decoding an MPEG video stream. The decoder decodes an MPEG bitstream in accordance with this syntax in order to extract a plurality of meaningful data items, or meaningful data elements, from the bitstream. In the syntax to be explained below, a function and a conditional statement are each represented by a string of normal characters whereas a data element is represented by a string of bold characters. A data item is described by a mnemonic representing the name of the data item. In some cases, the mnemonic also indicates the bit length composing the data item and the type of the data item.

First of all, functions used in the syntax shown in Fig. 38 are explained. next_start_code() is a function used for searching a bitstream for a start code described in the bitstream.
In the syntax shown in Fig. 38, the next_start_code() function is followed by a sequence_header() function and a sequence_extension() function which are laid out sequentially to indicate that the bitstream includes data elements defined by the sequence_header() and sequence_extension() functions. Thus, a start code, a kind of data element described at the beginning of the sequence_header() and sequence_extension() functions, is found by the next_start_code() function from the bitstream in an operation to decode the bitstream. The start code is then used as a reference to further find the sequence_header() and sequence_extension() functions and decode the data elements defined by the sequence_header() and sequence_extension() functions.

It should be noted that the sequence_header() function is a function used for defining header data of the sequence layer in the MPEG bitstream whereas the sequence_extension() function is a function used for defining extension data of the sequence layer in the MPEG bitstream.

A do {} while statement is described after the sequence_extension() function. The do {} while statement comprises a {} block following a do statement and a while statement following the {} block. Data elements described by functions in the {} block following the do statement are extracted from the bitstream as long as a condition defined by the while statement is true. That is to say, the do {} while syntax defines a decoding process to extract data elements described by functions in the {} block following the do statement from the bitstream as long as the condition defined by the while statement is true.

nextbits() used in the while statement is a function used to compare a bit or a string of bits appearing in the bitstream with a data element to be decoded next. In the example of the syntax shown in Fig. 38, the nextbits() function compares a string of bits appearing in the bitstream with sequence_end_code used for indicating the end of a video sequence. The condition defined by the while statement is said to be true if the string of bits appearing in the bitstream does not match sequence_end_code. Thus, the do {} while statement described after the sequence_extension() function indicates that data elements defined by functions in the {} block following the do statement are described in the bitstream as long as sequence_end_code used for indicating the end of a video sequence does not appear in the bitstream.

After the data elements defined by the sequence_extension() function in the bitstream, data elements defined by an extension_and_user_data(0) function are described. The extension_and_user_data(0) function is a function used for defining extension data and user data of the sequence layer of the MPEG bitstream.

A do {} while statement following the extension_and_user_data(0) function is a function to extract data elements described by functions in the {} block following the do statement from the bitstream as long as a condition defined by the while statement is true. The nextbits() functions used in the while statement are functions used to form a judgment as to whether or not a bit or a string of bits appearing in the bitstream matches the picture_start_code or group_start_code start codes respectively, by comparing the string with the start code specified in the function. If the string of bits appearing in the bitstream matches picture_start_code or group_start_code, the condition defined by the while statement is said to be true. Thus, if picture_start_code or group_start_code appears in the bitstream, the codes of data elements defined by functions in the {} block following the do statement are described after this start code. Accordingly, by finding a start code represented by picture_start_code or group_start_code, it is possible to extract data elements defined by functions in the {} block of the do statement from the bitstream.
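The do {} while constructions above translate directly into a parsing loop. The following sketch is illustrative only; Bitstream methods such as next_start_code, peek and the parse_* helpers are hypothetical names standing in for the syntax functions of Fig. 38, and the loop structure is an approximation of that syntax.

    SEQUENCE_END_CODE  = 0x000001B7
    PICTURE_START_CODE = 0x00000100
    GROUP_START_CODE   = 0x000001B8

    def decode_sequence(bs):
        # next_start_code() locates the first start code; sequence_header()
        # and sequence_extension() then define the first data elements.
        bs.next_start_code()
        parse_sequence_header(bs)
        parse_sequence_extension(bs)
        while bs.peek(32) != SEQUENCE_END_CODE:        # outer while condition
            parse_extension_and_user_data(bs, 0)
            while bs.peek(32) in (PICTURE_START_CODE, GROUP_START_CODE):
                if bs.peek(32) == GROUP_START_CODE:    # the if-statement
                    parse_group_of_picture_header(bs, 1)
                    parse_extension_and_user_data(bs, 1)
                parse_picture_header(bs)
                parse_picture_coding_extension(bs)
                # ... further picture-layer elements follow here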
An if-statement described at the beginning of the {} block of the do statement states a condition "if group_start_code appears in the bitstream." A true (satisfied) condition stated by the if-statement indicates that data elements defined by a group_of_picture_header(1) function and an extension_and_user_data(1) function are described sequentially after group_start_code.

The group_of_picture_header(1) function is a function used for defining header data of the GOP layer of the MPEG bitstream and the extension_and_user_data(1) function is a function used for defining extension data named extension_data and/or user data named user_data of the GOP layer of the MPEG bitstream.

Furthermore, in this bitstream, data elements defined by a picture_header() function and a picture_coding_extension() function are described after the data elements defined by the group_of_picture_header(1) function and the extension_and_user_data(1) function. Of course, if the condition defined by the if-statement is not true, the data elements defined by the group_of_picture_header(1) function and the extension_and_user_data(1) function are not described. In this case, the data elements defined by the picture_header() function and the picture_coding_extension() function are described after the data elements defined by an extension_and_user_data(0) function.

The picture_header() function is a function used for defining header data of the picture layer of the MPEG stream and the picture_coding_extension() function is a function used for defining first extension data of the picture layer of the MPEG stream.

The next while statement is a function used for defining a condition. A condition defined by each if-statement described in a {} block following the condition defined by the while statement is judged to be true or false as long as the condition defined by the while statement is true. The nextbits() functions used in the while statement are functions for forming a judgment as to whether a string of bits appearing in the bitstream matches extension_start_code or user_data_start_code respectively. If the string of bits appearing in the bitstream matches extension_start_code or user_data_start_code, the condition defined by the while statement is said to be true.

A first if-statement in the {} block following the while statement is a function for forming a judgment as to whether or not a string of bits appearing in the bitstream matches extension_start_code. A string of bits appearing in the bitstream that matches 32-bit extension_start_code indicates that data elements defined by an extension_data(2) function are described after extension_start_code in the bitstream.

A second if-statement is a function for forming a judgment as to whether or not a string of bits appearing in the bitstream matches user_data_start_code. If a string of bits appearing in the bitstream matches 32-bit user_data_start_code, a condition defined by a third if-statement is judged to be true or false.
user_data_start_code is a start code used for indicating the beginning of a user-data area of the picture layer of the MPEG bitstream.

The third if-statement in the {} block following the while statement is a function for forming a judgment as to whether or not a string of bits appearing in the bitstream matches History_Data_ID. A string of bits appearing in the bitstream that matches 8-bit History_Data_ID indicates that data elements defined by a converted_history_stream() function are described after the code indicated by 8-bit History_Data_ID in the user-data area of the picture layer of the MPEG bitstream.

The converted_history_stream() function is a function used for describing history information and history data for transmitting all encoding parameters used in the MPEG encoding process. Details of the data elements defined by this converted_history_stream() function will be described later. History_Data_ID is a start code used for indicating the beginning of a description of the history information and the history data in the user-data area of the picture layer of the MPEG bitstream.

An else statement is syntax indicating a case for a false condition defined by the third if-statement. Thus, if data elements defined by a converted_history_stream() function are not described in the user-data area of the picture layer of the MPEG bitstream, data elements defined by a user_data() function are described.

A picture_data() function is a function used for describing data elements related to the slice layer and the macroblock layer after the user data of the picture layer of the MPEG bitstream. Normally, data elements defined by this picture_data() function are described after data elements defined by a user_data() function or data elements defined by a converted_history_stream() function described in the user-data area of the picture layer of the bitstream. If neither extension_start_code nor user_data_start_code exists in a bitstream showing data elements of the picture layer, however, data elements defined by this picture_data() function are described after data elements defined by a picture_coding_extension() function.

After the data elements defined by this picture_data() function, data elements defined by a sequence_header() function and a sequence_extension() function are described sequentially. The data elements described by the sequence_header() function and the sequence_extension() function are exactly the same data elements as those defined by the sequence_header() function and the sequence_extension() function described at the beginning of the sequence of the video stream. The reason why the same pieces of data are defined in the stream is to prevent data of the sequence layer from being no longer receivable, thus preventing a stream from being no longer decodable, when reception of a bitstream is started by a bitstream receiving apparatus in the middle of the data stream, for example at a part of the bitstream corresponding to a picture layer.

After the data elements defined by the last sequence_header() function and sequence_extension() function, that is, at the end of the data stream, 32-bit sequence_end_code used for indicating the end of the sequence is described.
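The branching on user_data_start_code and History_Data_ID described above might be sketched as follows. parse_converted_history_stream, parse_user_data and the bs helper methods are hypothetical names, and the value assigned to HISTORY_DATA_ID below is for illustration only; the real 8-bit Data_ID is defined elsewhere in the specification.

    USER_DATA_START_CODE = 0x000001B2   # 32-bit MPEG-2 user_data_start_code
    HISTORY_DATA_ID = 0x01              # illustrative placeholder value only

    def parse_picture_user_data(bs):
        # After user_data_start_code, an 8-bit ID decides whether the area
        # carries converted_history_stream() or ordinary user_data().
        if bs.peek(32) == USER_DATA_START_CODE:
            bs.skip(32)
            if bs.peek(8) == HISTORY_DATA_ID:      # third if-statement
                bs.skip(8)
                return parse_converted_history_stream(bs)
            return parse_user_data(bs)             # else branch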
Fig. 39 is a schematic diagram showing an outline of the basic configuration of the syntax described so far.

Next, a history stream defined by the converted_history_stream() function is explained.

The converted_history_stream() function is a function used for inserting a history stream showing history information into the user-data area of the picture layer of the MPEG bitstream. It should be noted that the word 'converted' means that the stream has completed a conversion process to insert one marker bit for at least every 22 bits of the history stream composed of history data to be inserted into the user area in order to avoid start emulation.

The converted_history_stream() function is described in either the format of a fixed-length history stream shown in Figs. 40 to 46 or the format of a variable-length history stream shown in Fig. 47 to be described later. If the fixed-length history stream is selected on the encoder side, there is a merit in that the circuit and the software employed in the decoder for decoding data elements from the history stream become simple. If the variable-length history stream is selected on the encoder side, on the other hand, the encoder is capable of arbitrarily selecting the history information, or data elements, described in the user area of the picture layer when necessary. Thus, the amount of data of the history stream can be reduced. As a result, the data rate of the bitstream as a whole can be lowered as well.

The history information, the history data and the history parameters cited in the explanation of the present invention are encoding parameters, or data elements, used in the related art encoding processes and are not encoding-parameter data used in the current encoding process or the encoding process carried out at the last stage. Consider a case in which a picture is encoded and transmitted as an I-picture in an encoding process of a first generation, as a P-picture in an encoding process of a second generation and as a B-picture in an encoding process of a third generation. Encoding parameters used in the encoding process of the third generation are described at predetermined locations of the sequence, GOP, picture, slice and macroblock layers of an encoded bitstream generated as a result of the encoding process of the third generation. On the other hand, encoding parameters used in the encoding processes of the first and second generations are not recorded in the sequence or GOP layer for recording the encoding parameters used in the encoding process of the third generation, but are recorded in the user-data area of the picture layer as history information of the encoding parameters.

First of all, the syntax of the fixed-length history stream is explained by referring to Figs. 40 to 46.
First of all, the syntax of the fixed-length history stream is explained by referring to Figs. 40 to 46.

In the first place, encoding parameters related to the sequence header of the sequence layer used in the previous encoding processes, that is, the encoding processes of typically the first and second generations, are inserted as a history stream into the user-data area of the picture layer of the bitstream generated in the encoding process carried out at the last stage, that is, the encoding process of typically the third generation. It should be noted that history information related to the sequence header of the sequence layer of the bitstream generated in the previous encoding processes is never inserted into the sequence header of the sequence layer of the bitstream generated in the encoding process carried out at the last stage.

Data elements related to the sequence header used in the previous encoding processes include sequence_header_code, sequence_header_present_flag, horizontal_size_value, vertical_size_value, aspect_ratio_information, frame_rate_code, bit_rate_value, marker_bit, VBV_buffer_size_value, constrained_parameter_flag, load_intra_quantizer_matrix, intra_quantizer_matrix, load_non_intra_quantizer_matrix and non_intra_quantizer_matrix as shown in Fig. 40.

The data elements listed above are described as follows. The sequence_header_code data element is a start synchronization code of the sequence layer. The sequence_header_present_flag data element is a flag used for indicating whether data in the sequence header is valid or invalid. The horizontal_size_value data element is data comprising the low-order 12 bits of the number of pixels of the picture in the horizontal direction. The vertical_size_value data element is data comprising the low-order 12 bits of the number of pixels of the picture in the vertical direction. The aspect_ratio_information data element is an aspect ratio, that is, a ratio of the height to the width of the picture, or the aspect ratio of the display screen. The frame_rate_code data element is data representing the picture display period.

The bit_rate_value data element is data comprising the low-order 18 bits of a bit rate for limiting the number of generated bits. The data is rounded up in 400-bps units. The marker_bit data element is bit data inserted for preventing start-code emulation. The VBV_buffer_size_value data element is data comprising the low-order 10 bits of a value for determining the size of a virtual buffer (video buffering verifier) used in control of the amount of generated code. The constrained_parameter_flag data element is a flag used for indicating whether or not parameters are under constraint. The load_intra_quantizer_matrix data element is a flag used for indicating whether or not data of an intra-MB quantization matrix exists. The intra_quantizer_matrix data element is the value of the intra-MB quantization matrix. The load_non_intra_quantizer_matrix data element is a flag used for indicating whether or not data of a non-intra-MB quantization matrix exists. The non_intra_quantizer_matrix data element is the value of the non-intra-MB quantization matrix.
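As a concrete reading of the low-order fields just described, the following sketch derives them from full-width values. It is illustrative only; the field widths simply follow the text (12 bits for the sizes, 18 bits for the bit rate in 400-bps units, rounded up).

    /* Illustrative derivation of the low-order history fields. */
    #include <stdint.h>

    static uint32_t low_bits(uint32_t v, unsigned n)
    {
        return v & ((1u << n) - 1u);
    }

    uint32_t make_horizontal_size_value(uint32_t width_pixels)
    {
        return low_bits(width_pixels, 12);        /* low-order 12 bits */
    }

    uint32_t make_bit_rate_value(uint64_t bits_per_second)
    {
        uint64_t units = (bits_per_second + 399) / 400;  /* round up   */
        return low_bits((uint32_t)units, 18);     /* low-order 18 bits */
    }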
Subsequently, data elements representing a sequence extension used in the previous encoding processes are described as a history stream in the user area of the picture layer of the bitstream generated in the encoding process carried out at the last stage.

Data elements representing the sequence extension used in the previous encoding processes include extension_start_code, extension_start_code_identifier, sequence_extension_present_flag, profile_and_level_identification, progressive_sequence, chroma_format, horizontal_size_extension, vertical_size_extension, bit_rate_extension, vbv_buffer_size_extension, low_delay, frame_rate_extension_n and frame_rate_extension_d as shown in Figs. 40 and 41.

The data elements listed above are described as follows. The extension_start_code data element is a start synchronization code of extension data. The extension_start_code_identifier data element is data used for indicating which extension data is transmitted. The sequence_extension_present_flag data element is a flag used for indicating whether data in the sequence extension is valid or invalid. The profile_and_level_identification data element is data specifying a profile and a level of the video data. The progressive_sequence data element is data showing that the video data has been obtained from sequential scanning. The chroma_format data element is data specifying the color-difference format of the video data.

The horizontal_size_extension data element is the two high-order bits of data to be added to horizontal_size_value of the sequence header. The vertical_size_extension data element is the two high-order bits of data to be added to vertical_size_value of the sequence header. The bit_rate_extension data element is the twelve high-order bits of data to be added to bit_rate_value of the sequence header. The vbv_buffer_size_extension data element is the eight high-order bits of data to be added to vbv_buffer_size_value of the sequence header. The low_delay data element is data used for indicating that a B-picture is not included. The frame_rate_extension_n data element is data used for obtaining a frame rate in conjunction with frame_rate_code of the sequence header. The frame_rate_extension_d data element is data used for obtaining a frame rate in conjunction with frame_rate_code of the sequence header.
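Since each extension field carries the high-order bits split off from its sequence-header counterpart, a decoder reconstitutes the full values by concatenation. A minimal illustrative sketch, not taken from the patent:

    /* Rebuild full-width values from header (low-order) and extension
     * (high-order) fields: 2+12 bits for the sizes, 12+18 bits for the
     * bit rate, 8+10 bits for the VBV buffer size. */
    #include <stdint.h>

    uint32_t full_horizontal_size(uint32_t ext2, uint32_t value12)
    {
        return (ext2 << 12) | value12;       /* 14-bit picture width  */
    }

    uint64_t full_bit_rate_bps(uint32_t ext12, uint32_t value18)
    {
        uint64_t units = ((uint64_t)ext12 << 18) | value18;
        return units * 400;                  /* rate in 400-bps units */
    }

    uint32_t full_vbv_buffer_size(uint32_t ext8, uint32_t value10)
    {
        return (ext8 << 10) | value10;       /* 18-bit value; MPEG-2
                                                counts it in units of
                                                16384 bits */
    }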
Subsequently, data elements representing a sequence-display extension of the sequence layer used in the previous encoding processes are described as a history stream in the user area of the picture layer of the bitstream.

Data elements described as a sequence-display extension are extension_start_code, extension_start_code_identifier, sequence_display_extension_present_flag, video_format, color_description, color_primaries, transfer_characteristics, matrix_coefficients, display_horizontal_size and display_vertical_size as shown in Fig. 41.

The data elements listed above are described as follows. The extension_start_code data element is a start synchronization code of extension data. The extension_start_code_identifier data element is data used for indicating which extension data is transmitted. The sequence_display_extension_present_flag data element is a flag used for indicating whether data elements in the sequence-display extension are valid or invalid. The video_format data element is data representing the video format of the source signal. The color_description data element is data used for indicating that detailed data of a color space exists. The color_primaries data element is data showing details of a color characteristic of the source signal. The transfer_characteristics data element is data showing details of how opto-electrical conversion has been carried out. The matrix_coefficients data element is data showing details of how a source signal has been converted from the three primary colors of light. The display_horizontal_size data element is data representing the activation area or the horizontal size of an intended display. The display_vertical_size data element is data representing the activation area or the vertical size of the intended display.

Subsequently, macroblock assignment data (named macroblock_assignment_in_user_data) showing phase information of a macroblock generated in the previous encoding processes is described as a history stream in the user area of the picture layer of a bitstream generated in the encoding process carried out at the last stage.

The macroblock_assignment_in_user_data showing phase information of a macroblock comprises data elements such as macroblock_assignment_present_flag, v_phase and h_phase as shown in Fig. 41.

The data elements listed above are described as follows. The macroblock_assignment_present_flag data element is a flag used for indicating whether data elements of macroblock_assignment_in_user_data are valid or invalid. The v_phase data element is data showing phase information in the vertical direction which is obtained when the macroblock is detached from the picture data. The h_phase data element is data showing phase information in the horizontal direction which is obtained when the macroblock is detached from the picture data.

Subsequently, data elements representing a GOP header of the GOP layer used in the previous encoding processes are described as a history stream in the user area of the picture layer of a bitstream generated in the encoding process carried out at the last stage.

The data elements representing the GOP header are group_start_code, group_of_picture_header_present_flag, time_code, closed_gop and broken_link as shown in Fig. 41.

The data elements listed above are described as follows. The group_start_code data element is the start synchronization code of the GOP layer. The group_of_picture_header_present_flag data element is a flag used for indicating whether data elements in group_of_picture_header are valid or invalid. The time_code data element is a time code showing the length of time measured from the beginning of the first picture of the GOP. The closed_gop data element is a flag used for indicating whether or not it is possible to carry out an independent playback operation of a picture in one GOP from another GOP. The broken_link data element is a flag used for indicating that the B-picture at the beginning of the GOP cannot be reproduced with a high degree of accuracy because of reasons such as editing.

Subsequently, data elements representing a picture header of the picture layer used in the previous encoding processes are described as a history stream in the user area of the picture layer of a bitstream generated in the encoding process carried out at the last stage.

The data elements related to a picture header are picture_start_code, temporal_reference, picture_coding_type, vbv_delay, full_pel_forward_vector, forward_f_code, full_pel_backward_vector and backward_f_code as shown in Figs. 41 and 42.

The data elements listed above are described concretely as follows. The picture_start_code data element is the start synchronization code of the picture layer. The temporal_reference data element is a number used for indicating a display order of the picture. This number is reset at the beginning of the GOP.
The picture_coding_type data element is data used for indicating the type of the picture. The vbv_delay data element is data showing an initial state of a virtual buffer at a random access. The full_pel_forward_vector data element is a flag used for indicating whether the precision of the forward motion vector is expressed in terms of pixel units or half-pixel units. The forward_f_code data element is data representing a forward-motion-vector search range. The full_pel_backward_vector data element is a flag used for indicating whether the precision of the backward motion vector is expressed in terms of pixel units or half-pixel units. The backward_f_code data element is data representing a backward-motion-vector search range.

Subsequently, data elements representing a picture-coding extension of the picture layer used in the previous encoding processes are described as a history stream in the user area of the picture layer of a bitstream generated in the encoding process carried out at the last stage.

The data elements related to the picture-coding extension are extension_start_code, extension_start_code_identifier, f_code[0][0], f_code[0][1], f_code[1][0], f_code[1][1], intra_dc_precision, picture_structure, top_field_first, frame_predictive_frame_dct, concealment_motion_vectors, q_scale_type, intra_vlc_format, alternate_scan, repeat_first_field, chroma_420_type, progressive_frame, composite_display_flag, v_axis, field_sequence, sub_carrier, burst_amplitude and sub_carrier_phase as shown in Fig. 42.

The data elements listed above are described as follows. The extension_start_code data element is a start code used for indicating the start of extension data of the picture layer. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The f_code[0][0] data element is data representing a horizontal motion-vector search range in the forward direction. The f_code[0][1] data element is data representing a vertical motion-vector search range in the forward direction. The f_code[1][0] data element is data representing a horizontal motion-vector search range in the backward direction. The f_code[1][1] data element is data representing a vertical motion-vector search range in the backward direction.

The intra_dc_precision data element is data representing the precision of DC coefficients. The picture_structure data element is data used for indicating whether the data structure is a frame structure or a field structure. In the case of the field structure, the picture_structure data element also indicates whether the field structure is the high-order field or the low-order field. The top_field_first data element is data used for indicating whether the first field of a frame structure is the high-order field or the low-order field. The frame_predictive_frame_dct data element is data used for indicating that the prediction of frame-mode DCT is carried out only in the frame-DCT mode in the case of a frame structure. The concealment_motion_vectors data element is data used for indicating that the intra-macroblock includes a motion vector for concealing a transmission error.

The q_scale_type data element is data used for indicating whether to use a linear quantization scale or a non-linear quantization scale.
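The f_code values above encode a motion-vector search range compactly. Under MPEG-2 semantics an f_code of n corresponds to component values in [-16*2^(n-1), 16*2^(n-1) - 1] in half-pixel units; the helper below illustrates that reading and is not taken from the patent.

    /* Illustrative helper: the component range implied by an f_code
     * value (valid f_code is 1..9 in MPEG-2). */
    void f_code_range(int f_code, int *lo, int *hi)
    {
        int f = 1 << (f_code - 1);   /* r_size = f_code - 1 */
        *lo = -16 * f;               /* in half-pixel units */
        *hi =  16 * f - 1;           /* e.g. f_code 4 gives [-128, 127] */
    }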
The intra_vlc_format data element is data used for indicating whether or not another 2-dimensional VLC is used in the intra-macroblock. The alternate_scan data element is data representing a selection between using a zigzag scan and an alternate scan. The repeat_first_field data element is data used in the case of a 2:3 pull-down. The chroma_420_type data element is data equal to the value of the next progressive_frame data element in the case of a 4:2:0 signal format, or 0 otherwise. The progressive_frame data element is data used for indicating whether or not this picture has been obtained from sequential scanning. The composite_display_flag data element is a flag used for indicating whether or not the source signal is a composite signal.

The v_axis data element is data used in the case of a PAL source signal. The field_sequence data element is data used in the case of a PAL source signal. The sub_carrier data element is data used in the case of a PAL source signal. The burst_amplitude data element is data used in the case of a PAL source signal. The sub_carrier_phase data element is data used in the case of a PAL source signal.

Subsequently, a quantization-matrix extension used in the previous encoding processes is described as a history stream in the user area of the picture layer of the bitstream generated in the encoding process carried out at the last stage.

Data elements related to the quantization-matrix extension are extension_start_code, extension_start_code_identifier, quant_matrix_extension_present_flag, load_intra_quantizer_matrix, intra_quantizer_matrix[64], load_non_intra_quantizer_matrix, non_intra_quantizer_matrix[64], load_chroma_intra_quantizer_matrix, chroma_intra_quantizer_matrix[64], load_chroma_non_intra_quantizer_matrix and chroma_non_intra_quantizer_matrix[64] as shown in Fig. 43.

The data elements listed above are described as follows. The extension_start_code data element is a start code used for indicating the start of the quantization-matrix extension. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The quant_matrix_extension_present_flag data element is a flag used for indicating whether data elements of the quantization-matrix extension are valid or invalid. The load_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for an intra-macroblock exists. The intra_quantizer_matrix data element is data representing values of a quantization matrix for an intra-macroblock.

The load_non_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for a non-intra-macroblock exists. The non_intra_quantizer_matrix data element is data representing values of a quantization matrix for a non-intra-macroblock. The load_chroma_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for a color-difference intra-macroblock exists. The chroma_intra_quantizer_matrix data element is data representing values of a quantization matrix for a color-difference intra-macroblock. The load_chroma_non_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for a color-difference non-intra-macroblock exists. The chroma_non_intra_quantizer_matrix data element is data representing values of a quantization matrix for a color-difference non-intra-macroblock.
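Each load_* flag gates whether a 64-entry matrix actually follows in the stream, so a reader consumes a matrix only when its flag is set. A hedged sketch with an assumed bit-reader API:

    /* Sketch of the load-flag pattern described above.  read_bits()
     * is an assumed helper returning n bits from a bit reader. */
    #include <stdint.h>

    extern uint32_t read_bits(void *br, int n);   /* assumed API */

    typedef struct {
        uint8_t intra[64], non_intra[64];
        uint8_t chroma_intra[64], chroma_non_intra[64];
    } QuantMatrices;

    static void read_matrix_if_loaded(void *br, uint8_t m[64])
    {
        if (read_bits(br, 1))                /* load_*_quantizer_matrix */
            for (int i = 0; i < 64; i++)
                m[i] = (uint8_t)read_bits(br, 8);
    }

    void read_quant_matrix_extension(void *br, QuantMatrices *q)
    {
        read_matrix_if_loaded(br, q->intra);
        read_matrix_if_loaded(br, q->non_intra);
        read_matrix_if_loaded(br, q->chroma_intra);
        read_matrix_if_loaded(br, q->chroma_non_intra);
    }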
Subsequently, a copyright extension used in the previous encoding processes is described as a history stream in the user area of the picture layer of the bitstream generated in the encoding process carried out at the last stage.

Data elements related to the copyright extension are extension_start_code, extension_start_code_identifier, copyright_extension_present_flag, copyright_flag, copyright_identifier, original_or_copy, copyright_number_1, copyright_number_2 and copyright_number_3 as shown in Fig. 43.

The data elements listed above are described as follows. The extension_start_code data element is a start code used for indicating the start of the copyright extension. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The copyright_extension_present_flag data element is a flag used for indicating whether data elements of the copyright extension are valid or invalid. The copyright_flag data element is a flag used for indicating whether or not a copyright has been given to encoded video data in a range up to the next copyright extension or the end of the sequence.

The copyright_identifier data element is data used for identifying an institution cataloging the copyright, as specified by ISO/IEC JTC1/SC29. The original_or_copy data element is a flag used for indicating whether data of the bitstream is original or copied data. The copyright_number_1 data element indicates bits 44 to 63 of a copyright number. The copyright_number_2 data element indicates bits 22 to 43 of the copyright number. The copyright_number_3 data element indicates bits 0 to 21 of the copyright number.
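Because the copyright number is carried as three fragments, a decoder reassembles the 64-bit value by shifting each field to its bit position. An illustrative sketch:

    /* Reassemble the 64-bit copyright number from the three history
     * fields: copyright_number_1 holds bits 44..63 (20 bits),
     * copyright_number_2 bits 22..43 (22 bits) and copyright_number_3
     * bits 0..21 (22 bits). */
    #include <stdint.h>

    uint64_t assemble_copyright_number(uint32_t n1, uint32_t n2, uint32_t n3)
    {
        return ((uint64_t)(n1 & 0xFFFFFu)  << 44) |
               ((uint64_t)(n2 & 0x3FFFFFu) << 22) |
                (uint64_t)(n3 & 0x3FFFFFu);
    }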
Subsequently, a picture-display extension (picture_display_extension) used in the previous encoding processes is described as a history stream in the user area of the picture layer of the bitstream generated in the encoding process carried out at the last stage.

Data elements representing the picture-display extension are extension_start_code, extension_start_code_identifier, picture_display_extension_present_flag, frame_center_horizontal_offset_1, frame_center_vertical_offset_1, frame_center_horizontal_offset_2, frame_center_vertical_offset_2, frame_center_horizontal_offset_3 and frame_center_vertical_offset_3 as shown in Fig. 44.

The data elements listed above are described as follows. The extension_start_code data element is a start code used for indicating the start of the picture-display extension. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The picture_display_extension_present_flag data element is a flag used for indicating whether data elements of the picture-display extension are valid or invalid. The frame_center_horizontal_offset data element is an offset of the display area in the horizontal direction and the frame_center_vertical_offset data element is an offset of the display area in the vertical direction. Up to three horizontal and three vertical offset values can be defined, respectively.

User data is described as a history stream after the history information representing the picture-display extension already explained, in the user area of the picture layer of the bitstream generated in the encoding process carried out at the last stage, as shown in Fig. 44.

Following the user data, information on a macroblock used in the previous encoding processes is described as a history stream as shown in Figs. 44 to 46.

Information on the macroblock comprises data elements related to the position of the macroblock, data elements related to the mode of the macroblock, data elements related to control of the quantization step, data elements related to motion compensation, data elements related to the pattern of the macroblock and data elements related to the amount of generated code as shown in Figs. 44 to 46. The data elements related to the position of the macroblock include macroblock_address_h, macroblock_address_v, slice_header_present_flag and skipped_macroblock_flag. The data elements related to the mode of the macroblock include macroblock_quant, macroblock_motion_forward, macroblock_motion_backward, macroblock_pattern, macroblock_intra, spatial_temporal_weight_code_flag, frame_motion_type and dct_type. The data elements related to control of the quantization step include quantiser_scale_code. The data elements related to motion compensation include PMV[0][0][0], PMV[0][0][1], motion_vertical_field_select[0][0], PMV[0][1][0], PMV[0][1][1], motion_vertical_field_select[0][1], PMV[1][0][0], PMV[1][0][1], motion_vertical_field_select[1][0], PMV[1][1][0], PMV[1][1][1] and motion_vertical_field_select[1][1]. The data elements related to the pattern of the macroblock include coded_block_pattern, and the data elements related to the amount of generated code are num_mv_bits, num_coef_bits, num_other_bits and the like.

The data elements related to the macroblock are described in detail as follows.

The macroblock_address_h data element is data defining the present absolute position of the macroblock in the horizontal direction. The macroblock_address_v data element is data defining the present absolute position of the macroblock in the vertical direction. The slice_header_present_flag data element is a flag used for indicating whether or not this macroblock is located at the beginning of a slice layer and is accompanied by a slice header. The skipped_macroblock_flag data element is a flag used for indicating whether or not to skip this macroblock in a decoding process.

The macroblock_quant data element is data derived from macroblock_type shown in Figs. 65 to 67. This data element indicates whether or not quantiser_scale_code appears in the bitstream. The macroblock_motion_forward data element is data derived from the macroblock_type shown in Figs. 65 to 67 and is used in the decoding process. The macroblock_motion_backward data element is data derived from the macroblock_type shown in Figs. 65 to 67 and is used in the decoding process. The macroblock_pattern data element is data derived from the macroblock_type shown in Figs. 65 to 67, and it indicates whether or not coded_block_pattern appears in the bitstream.

The macroblock_intra data element is data derived from the macroblock_type shown in Figs. 65 to 67 and is used in the decoding process. The spatial_temporal_weight_code_flag data element is a flag derived from the macroblock_type shown in Figs. 65 to 67 and is used for indicating whether or not spatial_temporal_weight_code, which shows an up-sampling technique of a low-order layer picture with time scalability, exists in the bitstream.
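The per-macroblock history items enumerated above group naturally into a single record. The sketch below is merely an illustrative container; the field widths and types are assumptions, not part of the described syntax.

    /* Illustrative container for the per-macroblock history items. */
    #include <stdint.h>

    typedef struct {
        uint16_t macroblock_address_h, macroblock_address_v; /* position */
        uint8_t  slice_header_present_flag, skipped_macroblock_flag;
        uint8_t  macroblock_quant, macroblock_motion_forward,
                 macroblock_motion_backward, macroblock_pattern,
                 macroblock_intra;                           /* mode */
        uint8_t  quantiser_scale_code;            /* quantization step */
        int32_t  pmv[2][2][2];                    /* PMV[r][s][v] */
        uint8_t  motion_vertical_field_select[2][2];
        uint32_t coded_block_pattern;             /* pattern */
        uint32_t num_mv_bits, num_coef_bits,
                 num_other_bits;                  /* amount of code */
    } MacroblockHistory;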
The frame_motion_type data element is a 2-bit code used for indicating the prediction type of the macroblock of a frame. A frame_motion_type value of "00" indicates that there are two prediction vectors and the prediction type is a field-based prediction type. A frame_motion_type value of "01" indicates that there is one prediction vector and the prediction type is a field-based prediction type. A frame_motion_type value of "10" indicates that there is one prediction vector and the prediction type is a frame-based prediction type. A frame_motion_type value of "11" indicates that there is one prediction vector and the prediction type is a dual-prime prediction type.

The field_motion_type data element is a 2-bit code showing the motion prediction of the macroblock of a field. A field_motion_type value of "01" indicates that there is one prediction vector and the prediction type is a field-based prediction type. A field_motion_type value of "10" indicates that there are two prediction vectors and the prediction type is a 16 x 8 macroblock-based prediction type. A field_motion_type value of "11" indicates that there is one prediction vector and the prediction type is a dual-prime prediction type.

The dct_type data element is data used for indicating whether the DCT is carried out in the frame-DCT mode or the field-DCT mode. The quantiser_scale_code data element indicates the quantization-step size of the macroblock.

Next, data elements related to a motion vector are described. In order to reduce the magnitude of a motion vector required in a decoding process, a particular motion vector is subjected to an encoding process by actually encoding a difference between the particular motion vector and a motion vector decoded earlier. A decoder for decoding a motion vector has to sustain four motion-vector prediction values, each of which comprises horizontal and vertical components. These motion-vector prediction values are represented by PMV[r][s][v]. The subscript [r] is a flag used for indicating whether the motion vector in a macroblock is the first or second vector. To be more specific, an [r] value of "0" indicates the first vector and an [r] value of "1" indicates the second vector. The subscript [s] is a flag used for indicating whether the direction of the motion vector in the macroblock is the forward or backward direction. To be more specific, an [s] value of "0" indicates the forward direction of the motion vector and an [s] value of "1" indicates the backward direction of the motion vector. The subscript [v] is a flag used for indicating whether the component of the motion vector in the macroblock is a component in the horizontal or vertical direction. To be more specific, a [v] value of "0" indicates the horizontal component of the motion vector and a [v] value of "1" indicates the vertical component of the motion vector.

Thus, PMV[0][0][0] is data representing the horizontal component of the forward motion vector of the first vector. PMV[0][0][1] is data representing the vertical component of the forward motion vector of the first vector. PMV[0][1][0] is data representing the horizontal component of the backward motion vector of the first vector.
PMV[0][1][1] is data representing the vertical component of the backward motion vector of the first vector. PMV[1][0][0] is data representing the horizontal component of the forward motion vector of the second vector. PMV[1][0][1] is data representing the vertical component of the forward motion vector of the second vector. PMV[1][1][0] is data representing the horizontal component of the backward motion vector of the second vector. PMV[1][1][1] is data representing the vertical component of the backward motion vector of the second vector.

motion_vertical_field_select[r][s] is data used for indicating which referenced field of the prediction format is used. To be more specific, a motion_vertical_field_select[r][s] value of "0" indicates the top referenced field and a motion_vertical_field_select[r][s] value of "1" indicates the bottom referenced field to be used.

In motion_vertical_field_select[r][s], the subscript [r] is a flag used for indicating whether the motion vector in a macroblock is the first or second vector. To be more specific, an [r] value of "0" indicates the first vector and an [r] value of "1" indicates the second vector. The subscript [s] is a flag used for indicating whether the direction of the motion vector in the macroblock is the forward or backward direction. To be more specific, an [s] value of "0" indicates the forward direction of the motion vector and an [s] value of "1" indicates the backward direction of the motion vector. Thus, motion_vertical_field_select[0][0] indicates the referenced field used in the generation of the forward motion vector of the first vector. motion_vertical_field_select[0][1] indicates the referenced field used in the generation of the backward motion vector of the first vector. motion_vertical_field_select[1][0] indicates the referenced field used in the generation of the forward motion vector of the second vector. motion_vertical_field_select[1][1] indicates the referenced field used in the generation of the backward motion vector of the second vector.

The coded_block_pattern data element is variable-length data used for indicating which DCT block, among a plurality of DCT blocks each for storing a DCT coefficient, contains a meaningful or non-zero DCT coefficient. The num_mv_bits data element is data representing the amount of code of the motion vector in the macroblock. The num_coef_bits data element is data representing the amount of code of the DCT coefficients in the macroblock. The num_other_bits data element shown in Fig. 46 is data representing the amount of code in the macroblock other than the motion vector and the DCT coefficients.
Next, a syntax for decoding data elements from a history stream with a variable length is explained by referring to Figs. 47 to 64.

As shown in Fig. 47, the history stream with a variable length comprises data elements defined by a next_start_code() function, a sequence_header() function, a sequence_extension() function, an extension_and_user_data(0) function, a group_of_picture_header() function, an extension_and_user_data(1) function, a picture_header() function, a picture_coding_extension() function, an extension_and_user_data(2) function and a picture_data() function.

Since the next_start_code() function is a function used for searching a bitstream for a start code, data elements defined by the sequence_header() function and used in the previous encoding processes are described at the beginning of the history stream as shown in Fig. 48.

Data elements defined by the sequence_header() function include sequence_header_code, sequence_header_present_flag, horizontal_size_value, vertical_size_value, aspect_ratio_information, frame_rate_code, bit_rate_value, marker_bit, VBV_buffer_size_value, constrained_parameter_flag, load_intra_quantizer_matrix, intra_quantizer_matrix, load_non_intra_quantizer_matrix and non_intra_quantizer_matrix as shown in Fig. 48.

The data elements listed above are described as follows. The sequence_header_code data element is the start synchronization code of the sequence layer. The sequence_header_present_flag data element is a flag used for indicating whether data in sequence_header is valid or invalid. The horizontal_size_value data element is data comprising the low-order 12 bits of the number of pixels of the picture in the horizontal direction. The vertical_size_value data element is data comprising the low-order 12 bits of the number of pixels of the picture in the vertical direction. The aspect_ratio_information data element is an aspect ratio of pixels of a picture, that is, a ratio of the height to the width of the picture, or the aspect ratio of the display screen. The frame_rate_code data element is data representing the picture display period. The bit_rate_value data element is data comprising the low-order 18 bits of a bit rate for limiting the number of generated bits. The data is rounded up in 400-bps units.

The marker_bit data element is bit data inserted for preventing start-code emulation. The VBV_buffer_size_value data element is data comprising the low-order 10 bits of a value for determining the size of a virtual buffer (video buffering verifier) used in control of the amount of generated code. The constrained_parameter_flag data element is a flag used for indicating whether or not parameters are under constraint. The load_intra_quantizer_matrix data element is a flag used for indicating whether or not data of an intra-MB quantization matrix exists. The intra_quantizer_matrix data element is the value of the intra-MB quantization matrix. The load_non_intra_quantizer_matrix data element is a flag used for indicating whether or not data of a non-intra-MB quantization matrix exists. The non_intra_quantizer_matrix data element is the value of the non-intra-MB quantization matrix.

Following the data elements defined by the sequence_header() function, data elements defined by the sequence_extension() function are described as a history stream as shown in Fig. 49.
The data elements defined by the sequence_extension() function include extension_start_code, extension_start_code_identifier, sequence_extension_present_flag, profile_and_level_identification, progressive_sequence, chroma_format, horizontal_size_extension, vertical_size_extension, bit_rate_extension, vbv_buffer_size_extension, low_delay, frame_rate_extension_n and frame_rate_extension_d as shown in Fig. 49.

The data elements listed above are described as follows. The extension_start_code data element is a start synchronization code of extension data. The extension_start_code_identifier data element is data used for indicating which extension data is transmitted. The sequence_extension_present_flag data element is a flag used for indicating whether data in the sequence extension is valid or invalid. The profile_and_level_identification data element is data specifying a profile and a level of the video data. The progressive_sequence data element is data showing that the video data has been obtained from sequential scanning. The chroma_format data element is data specifying the color-difference format of the video data. The horizontal_size_extension data element is data to be added to horizontal_size_value of the sequence header as the two high-order bits. The vertical_size_extension data element is data to be added to vertical_size_value of the sequence header as the two high-order bits. The bit_rate_extension data element is data to be added to bit_rate_value of the sequence header as the 12 high-order bits. The vbv_buffer_size_extension data element is data to be added to vbv_buffer_size_value of the sequence header as the 8 high-order bits.

The low_delay data element is data used for indicating that a B-picture is not included. The frame_rate_extension_n data element is data used for obtaining a frame rate in conjunction with frame_rate_code of the sequence header. The frame_rate_extension_d data element is data used for obtaining a frame rate in conjunction with frame_rate_code of the sequence header.

Following the data elements defined by the sequence_extension() function, data elements defined by the extension_and_user_data(0) function are described as a history stream as shown in Fig. 50. For (i) with a value other than 2, the extension_and_user_data(i) function describes only data elements defined by a user_data() function as a history stream instead of describing data elements defined by the extension_data() function. Thus, the extension_and_user_data(0) function describes only data elements defined by the user_data() function as a history stream.

The user_data() function describes user data as a history stream on the basis of a syntax like the one shown in Fig. 51.
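The i-dependent behavior of extension_and_user_data(i) amounts to a small dispatch on the next start code. The sketch below is a hedged illustration: the start-code constants are the usual MPEG-2 values, and peek_start_code(), parse_extension_data() and parse_user_data() are assumed helpers.

    /* Sketch: for i other than 2 only user_data() may be described;
     * for i == 2, extension_data() may be described as well. */
    #include <stdint.h>

    #define EXTENSION_START_CODE 0x000001B5u
    #define USER_DATA_START_CODE 0x000001B2u

    extern uint32_t peek_start_code(void *bs);    /* assumed helpers */
    extern void parse_extension_data(void *bs);
    extern void parse_user_data(void *bs);

    void extension_and_user_data(void *bs, int i)
    {
        for (;;) {
            uint32_t code = peek_start_code(bs);
            if (i == 2 && code == EXTENSION_START_CODE)
                parse_extension_data(bs);
            else if (code == USER_DATA_START_CODE)
                parse_user_data(bs);
            else
                break;    /* neither start code: nothing is described */
        }
    }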
Following the data elements defined by the extension_and_user_data(0) function, data elements defined by the group_of_picture_header() function shown in Fig. 52 and data elements defined by the extension_and_user_data(1) function shown in Fig. 50 are described as a history stream. It should be noted, however, that the data elements defined by the group_of_picture_header() function and the data elements defined by the extension_and_user_data(1) function are described only if group_start_code representing the start code of the GOP layer is described in the history stream.

As shown in Fig. 52, the data elements defined by the group_of_picture_header() function are group_start_code, group_of_picture_header_present_flag, time_code, closed_gop and broken_link.

The data elements listed above are described as follows. The group_start_code data element is the start synchronization code of the GOP layer. The group_of_picture_header_present_flag data element is a flag used for indicating whether data elements in group_of_picture_header are valid or invalid. The time_code data element is a time code showing the length of time measured from the beginning of the first picture of the GOP. The closed_gop data element is a flag used for indicating whether or not it is possible to carry out an independent playback operation of a picture in the GOP from another GOP. The broken_link data element is a flag used for indicating that the B-picture at the beginning of the GOP cannot be reproduced with a high degree of accuracy because of reasons such as editing.

Much like the extension_and_user_data(0) function shown in Fig. 50, the extension_and_user_data(1) function describes only data elements defined by the user_data() function as a history stream.

If group_start_code representing the start code of the GOP layer is not described in the history stream, the data elements defined by the group_of_picture_header() function and the data elements defined by the extension_and_user_data(1) function are also not described in the history stream. In this case, data elements defined by the picture_header() function are described after the data elements defined by the extension_and_user_data(0) function.

The data elements defined by the picture_header() function are picture_start_code, temporal_reference, picture_coding_type, vbv_delay, full_pel_forward_vector, forward_f_code, full_pel_backward_vector, backward_f_code, extra_bit_picture and extra_information_picture as shown in Fig. 53.

The data elements listed above are described concretely as follows. The picture_start_code data element is the start synchronization code of the picture layer. The temporal_reference data element is a number used for indicating a display order of the picture. This number is reset at the beginning of the GOP. The picture_coding_type data element is data used for indicating the type of the picture. The vbv_delay data element is data showing an initial state of a virtual buffer at a random access. The full_pel_forward_vector data element is a flag used for indicating whether the precision of the forward motion vector is expressed in terms of integral pixel units or half-pixel units. The forward_f_code data element is data representing a forward-motion-vector search range. The full_pel_backward_vector data element is a flag used for indicating whether the precision of the backward motion vector is expressed in terms of integral pixel units or half-pixel units. The backward_f_code data element is data representing a backward-motion-vector search range. The extra_bit_picture data element is a flag used for indicating whether or not following additional information exists. To be more specific, extra_bit_picture having a value of "0" indicates that no following additional information exists while extra_bit_picture having a value of "1" indicates that following additional information exists. The extra_information_picture data element is information reserved by specifications.

Following the data elements defined by the picture_header() function, data elements defined by the picture_coding_extension() function shown in Fig. 54 are described as a history stream.
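The extra_bit_picture/extra_information_picture pair forms a flag-gated loop: while the one-bit flag reads "1", another information byte follows, and a "0" bit terminates the list. A hedged sketch with an assumed bit-reader helper:

    /* Sketch: consume the extra_information_picture bytes gated by
     * extra_bit_picture.  read_bits() is an assumed helper. */
    #include <stdint.h>

    extern uint32_t read_bits(void *br, int n);   /* assumed API */

    void skip_extra_information_picture(void *br)
    {
        while (read_bits(br, 1) == 1)    /* extra_bit_picture         */
            (void)read_bits(br, 8);      /* extra_information_picture */
    }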
The data elements defined by the picture_coding_extension() function are extension_start_code, extension_start_code_identifier, f_code[0][0], f_code[0][1], f_code[1][0], f_code[1][1], intra_dc_precision, picture_structure, top_field_first, frame_predictive_frame_dct, concealment_motion_vectors, q_scale_type, intra_vlc_format, alternate_scan, repeat_first_field, chroma_420_type, progressive_frame, composite_display_flag, v_axis, field_sequence, sub_carrier, burst_amplitude and sub_carrier_phase as shown in Fig. 54.

The data elements listed above are described as follows. The extension_start_code data element is a start code used for indicating the start of extension data of the picture layer. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The f_code[0][0] data element is data representing a horizontal motion-vector search range in the forward direction. The f_code[0][1] data element is data representing a vertical motion-vector search range in the forward direction. The f_code[1][0] data element is data representing a horizontal motion-vector search range in the backward direction. The f_code[1][1] data element is data representing a vertical motion-vector search range in the backward direction. The intra_dc_precision data element is data representing the precision of DC coefficients.

The picture_structure data element is data used for indicating whether the data structure is a frame structure or a field structure. In the case of the field structure, the picture_structure data element also indicates whether the field structure is the high-order field or the low-order field. The top_field_first data element is data used for indicating whether the first field of a frame structure is the high-order field or the low-order field. The frame_predictive_frame_dct data element is data used for indicating that the prediction of frame-mode DCT is carried out only in the frame mode in the case of a frame structure. The concealment_motion_vectors data element is data used for indicating that the intra-macroblock includes a motion vector for concealing a transmission error. The q_scale_type data element is data used for indicating whether to use a linear quantization scale or a non-linear quantization scale. The intra_vlc_format data element is data used for indicating whether or not another 2-dimensional VLC is used in the intra-macroblock.

The alternate_scan data element is data representing a selection between using a zigzag scan and an alternate scan. The repeat_first_field data element is data used in the case of a 2:3 pull-down. The chroma_420_type data element is data equal to the value of the next progressive_frame data element in the case of a 4:2:0 signal format, or 0 otherwise. The progressive_frame data element is data used for indicating whether or not this picture has been obtained from sequential scanning. The composite_display_flag data element is data used for indicating whether or not the source signal is a composite signal. The v_axis data element is data used in the case of a PAL source signal. The field_sequence data element is data used in the case of a PAL source signal. The sub_carrier data element is data used in the case of a PAL source signal. The burst_amplitude data element is data used in the case of a PAL source signal. The sub_carrier_phase data element is data used in the case of a PAL source signal.

Following the data elements defined by the picture_coding_extension() function, data elements defined by the extension_and_user_data(2) function shown in Fig. 50 are described as a history stream. It should be noted, however, that data elements defined by the extension_data() function are described by the extension_and_user_data(2) function only if extension_start_code representing the start code of the extension exists in the bitstream.
In addition, data elements defined by the user_data() function are described by the extension_and_user_data(2) function after the data elements defined by the extension_data() function only if user_data_start_code representing the start code of the user data exists in the bitstream, as shown in Fig. 50. That is to say, if neither the start code of the extension nor the start code of the user data exists in the bitstream, data elements defined by the extension_data() function and data elements defined by the user_data() function are not described in the bitstream.

The extension_data() function is a function used for describing a data element representing extension_start_code and data elements defined by a quant_matrix_extension() function, a copyright_extension() function and a picture_display_extension() function as a history stream in the bitstream, as shown in Fig. 55.

Data elements defined by the quant_matrix_extension() function are extension_start_code, extension_start_code_identifier, quant_matrix_extension_present_flag, load_intra_quantizer_matrix, intra_quantizer_matrix[64], load_non_intra_quantizer_matrix, non_intra_quantizer_matrix[64], load_chroma_intra_quantizer_matrix, chroma_intra_quantizer_matrix[64], load_chroma_non_intra_quantizer_matrix and chroma_non_intra_quantizer_matrix[64] as shown in Fig. 56.

The data elements listed above are described as follows. The extension_start_code data element is a start code used for indicating the start of the quantization-matrix extension. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The quant_matrix_extension_present_flag data element is a flag used for indicating whether data elements of the quantization-matrix extension are valid or invalid. The load_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for an intra-macroblock exists. The intra_quantizer_matrix data element is data representing values of a quantization matrix for an intra-macroblock.

The load_non_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for a non-intra-macroblock exists. The non_intra_quantizer_matrix data element is data representing values of a quantization matrix for a non-intra-macroblock. The load_chroma_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for a color-difference intra-macroblock exists. The chroma_intra_quantizer_matrix data element is data representing values of a quantization matrix for a color-difference intra-macroblock. The load_chroma_non_intra_quantizer_matrix data element is data used for indicating whether or not quantization-matrix data for a color-difference non-intra-macroblock exists. The chroma_non_intra_quantizer_matrix data element is data representing values of a quantization matrix for a color-difference non-intra-macroblock.
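Internally, extension_data() amounts to a dispatch on the extension_start_code_identifier that follows each extension_start_code. The sketch below is an illustration only: the 4-bit identifier values (3 for the quantization-matrix extension, 4 for the copyright extension, 7 for the picture-display extension) follow common MPEG-2 usage, and the parse_* callbacks and read_bits() are assumed helpers.

    /* Sketch: dispatch one extension by its 4-bit identifier. */
    #include <stdint.h>

    extern uint32_t read_bits(void *br, int n);        /* assumed API */
    extern void parse_quant_matrix_extension(void *br);
    extern void parse_copyright_extension(void *br);
    extern void parse_picture_display_extension(void *br);

    void parse_one_extension(void *br)
    {
        switch (read_bits(br, 4)) {  /* extension_start_code_identifier */
        case 3:  parse_quant_matrix_extension(br);    break;
        case 4:  parse_copyright_extension(br);       break;
        case 7:  parse_picture_display_extension(br); break;
        default: break;  /* other identifiers: not part of this syntax */
        }
    }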
The data elements defined by the copyright_extension() function are extension_start_code, extension_start_code_identifier, copyright_extension_present_flag, copyright_flag, copyright_identifier, original_or_copy, copyright_number_1, copyright_number_2 and copyright_number_3 as shown in Fig. 57.

The data elements listed above are described as follows. The extension_start_code data element is a start code used for indicating the start of the copyright extension. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The copyright_extension_present_flag data element is a flag used for indicating whether data elements of the copyright extension are valid or invalid.

The copyright_flag data element is a flag used for indicating whether or not a copyright has been given to encoded video data in a range up to the next copyright extension or the end of the sequence. The copyright_identifier data element is data used for identifying an institution cataloging the copyright, as specified by ISO/IEC JTC1/SC29. The original_or_copy data element is a flag used for indicating whether data of the bitstream is original or copied data. The copyright_number_1 data element indicates bits 44 to 63 of a copyright number. The copyright_number_2 data element indicates bits 22 to 43 of the copyright number. The copyright_number_3 data element indicates bits 0 to 21 of the copyright number.

The data elements defined by the picture_display_extension() function are extension_start_code_identifier, frame_center_horizontal_offset and frame_center_vertical_offset as shown in Fig. 58.

The data elements listed above are described as follows. The extension_start_code_identifier data element is a code used for indicating which extension data is transmitted. The frame_center_horizontal_offset data element is an offset of the display area in the horizontal direction. The number of such horizontal offsets can be defined by number_of_frame_center_offsets. The frame_center_vertical_offset data element is an offset of the display area in the vertical direction. The number of such vertical offsets can be defined by number_of_frame_center_offsets.

As shown in the variable-length history stream of Fig. 47, data elements defined by a picture_data() function are described as a history stream after the data elements defined by the extension_and_user_data(2) function.

As shown in Fig. 59, the data elements defined by a picture_data() function are data elements defined by a slice() function. It should be noted that the data elements defined by a slice() function are not described in the bitstream if slice_start_code representing the start code of the slice() function does not exist in the bitstream.

As shown in Fig. 60, the slice() function is a function used for describing data elements such as slice_start_code, slice_quantiser_scale_code, intra_slice_flag, intra_slice, reserved_bits, extra_bit_slice, extra_information_slice and extra_bit_slice, and data elements defined by a macroblock() function, as a history stream.

The data elements listed above are described as follows. The slice_start_code data element is a start code used for indicating the data elements defined by the slice() function. The slice_quantiser_scale_code data element is the size of the quantization step defined for a macroblock existing in the slice layer. However, the quantiser_scale_code set for each macroblock is preferably used when quantiser_scale_code has been set.

The intra_slice_flag data element is a flag used for indicating whether or not intra_slice and reserved_bits exist in the bitstream. The intra_slice data element is a flag used for indicating whether or not a non-intra-macroblock exists in the slice layer. To be more specific, if any one of the macroblocks in the slice layer is a non-intra-macroblock, the intra_slice flag has a value of "0". If all macroblocks in the slice layer are intra-macroblocks, on the other hand, the intra_slice flag has a value of "1". The reserved_bits data element is 7-bit data having a value of "0". The extra_bit_slice data element is a flag used for indicating whether or not the extra_information_slice data element, that is, information added as a history stream, exists. To be more specific, if the next extra_information_slice data element exists, the extra_bit_slice flag has a value of "1". If the next extra_information_slice data element does not exist, on the other hand, the extra_bit_slice flag has a value of "0".
Following the data elements defined by the slice() function, data elements defined by a macroblock() function are described as a history stream.

As shown in Fig. 61, the macroblock() function is a function used for defining data elements such as macroblock_escape, macroblock_address_increment and macroblock_quantiser_scale_code, and data elements defined by a macroblock_modes() function and a macroblock_vectors(s) function.

The data elements listed above are described as follows. The macroblock_escape data element is a string of bits with a fixed length used for indicating whether or not the difference in the horizontal direction between a referenced macroblock and the preceding macroblock is 34 or greater. If the difference in the horizontal direction between a referenced macroblock and the preceding macroblock is 34 or greater, 33 is added to the value of the macroblock_address_increment data element. The macroblock_address_increment data element is the difference in the horizontal direction between a referenced macroblock and the preceding macroblock. If one macroblock_escape data element exists before the macroblock_address_increment data element, a value obtained as a result of the addition of 33 to the value of the macroblock_address_increment data element represents the actual difference in the horizontal direction between the referenced macroblock and the preceding macroblock.

The macroblock_quantiser_scale_code data element is the size of the quantization step set in each macroblock. The slice_quantiser_scale_code data element representing the size of the quantization step of a slice layer is also set in each slice layer. However, the macroblock_quantiser_scale_code set for a macroblock takes precedence over slice_quantiser_scale_code.
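The escape mechanism makes the address difference cumulative: each macroblock_escape contributes 33, and the final macroblock_address_increment (in the range 1 to 33) completes the actual difference. An illustrative sketch, with an assumed VLC helper:

    /* Sketch: decode the macroblock-address difference.  The assumed
     * helper read_escape_or_increment() returns -1 for an escape code
     * and the decoded increment (1..33) otherwise. */
    extern int read_escape_or_increment(void *br);    /* assumed API */

    int macroblock_address_difference(void *br)
    {
        int diff = 0, v;
        while ((v = read_escape_or_increment(br)) == -1)
            diff += 33;              /* one macroblock_escape     */
        return diff + v;             /* plus the increment, 1..33 */
    }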
Following the macroblock_address_increment data element, data elements defined by the macroblock_modes() function are described. As shown in Fig. 62, the macroblock_modes() function is a function used for describing data elements such as macroblock_type, frame_motion_type, field_motion_type and dct_type as a history stream.

The data elements listed above are described as follows. The macroblock_type data element is data representing the encoding type of the macroblock. To put it concretely, the macroblock_type data element is data with a variable length generated from flags such as macroblock_quant, dct_type_flag, macroblock_motion_forward and macroblock_motion_backward as shown in Figs. 65 to 67. The macroblock_quant flag is a flag used for indicating whether or not macroblock_quantiser_scale_code for setting the size of the quantization step for the macroblock is set. If macroblock_quantiser_scale_code exists in the bitstream, the macroblock_quant flag has a value of "1".

The dct_type_flag is a flag used for indicating whether or not dct_type, which shows that the referenced macroblock has been encoded in the frame-DCT mode or the field-DCT mode, exists. In other words, dct_type_flag is a flag used for indicating whether or not the referenced macroblock experienced DCT. If dct_type exists in the bitstream, dct_type_flag has a value of "1". The macroblock_motion_forward is a flag showing whether or not the referenced macroblock has undergone forward prediction. If the referenced macroblock has undergone forward prediction, the macroblock_motion_forward flag has a value of "1". On the other hand, macroblock_motion_backward is a flag showing whether or not the referenced macroblock has undergone backward prediction. If the referenced macroblock has undergone backward prediction, the macroblock_motion_backward flag has a value of "1".

If the macroblock_motion_forward flag or the macroblock_motion_backward flag has a value of "1", the picture is transferred in the frame-prediction mode and frame_period_frame_dct has a value of "0", then a data element representing frame_motion_type is described after the data element representing macroblock_type. It should be noted that frame_period_frame_dct is a flag used for indicating whether or not frame_motion_type exists in the bitstream.

The frame_motion_type data element is a 2-bit code showing the prediction type of the macroblock of the frame. A frame_motion_type value of "00" indicates that there are two prediction vectors and the prediction type is a field-based prediction type. A frame_motion_type value of "01" indicates that there is one prediction vector and the prediction type is a field-based prediction type. A frame_motion_type value of "10" indicates that there is one prediction vector and the prediction type is a frame-based prediction type. A frame_motion_type value of "11" indicates that there is one prediction vector and the prediction type is a dual-prime prediction type.

If the macroblock_motion_forward flag or the macroblock_motion_backward flag has a value of "1" and the picture is not transferred in the frame-prediction mode, a data element representing field_motion_type is described after the data element representing macroblock_type.

The field_motion_type data element is a 2-bit code showing the motion prediction of the macroblock of a field. A field_motion_type value of "01" indicates that there is one prediction vector and the prediction type is a field-based prediction type. A field_motion_type value of "10" indicates that there are two prediction vectors and the prediction type is a 16 x 8 macroblock-based prediction type. A field_motion_type value of "11" indicates that there is one prediction vector and the prediction type is a dual-prime prediction type.

If the picture is transferred in the frame-prediction mode, frame_period_frame_dct indicates that frame_motion_type exists in the bitstream, and frame_period_frame_dct also indicates that dct_type exists in the bitstream, then a data element representing dct_type is described after the data element representing macroblock_type. It should be noted that the dct_type data element is data used for indicating whether the DCT is carried out in the frame-DCT mode or the field-DCT mode.
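The two-bit frame_motion_type code maps onto a small table; the sketch below simply transcribes the values described above and is illustrative rather than normative.

    /* Illustrative mapping of frame_motion_type to the vector count
     * and prediction kind exactly as described in the text. */
    typedef enum { PRED_FIELD, PRED_FRAME, PRED_DUAL_PRIME } PredKind;

    typedef struct { int num_vectors; PredKind kind; } FramePred;

    FramePred decode_frame_motion_type(unsigned code2)
    {
        switch (code2 & 0x3u) {
        case 0x0: return (FramePred){2, PRED_FIELD};       /* "00" */
        case 0x1: return (FramePred){1, PRED_FIELD};       /* "01" */
        case 0x2: return (FramePred){1, PRED_FRAME};       /* "10" */
        default:  return (FramePred){1, PRED_DUAL_PRIME};  /* "11" */
        }
    }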
As shown in Fig. 61, if the referenced macroblock is either a forward-prediction macroblock or an intra-macroblock for which concealment processing is performed, a data element defined by a motion_vectors(0) function is described. If the referenced macroblock is a backward-prediction macroblock, a data element defined by a motion_vectors(1) function is described. It should be noted that the motion_vectors(0) function is a function used for describing a data element related to a first motion vector and the motion_vectors(1) function is a function used for describing a data element related to a second motion vector.

As shown in Fig. 63, the motion_vectors(s) function is a function used for describing a data element related to a motion vector.

If there is one motion vector and the dual-prime prediction mode is not used, data elements defined by motion_vertical_field_select[0][s] and motion_vector[0][s] are described.

The motion_vertical_field_select[r][s] is a flag used for indicating whether the vector, be it a forward-prediction or backward-prediction vector, is a vector made by referencing the bottom field or the top field. The subscript [r] indicates the first or second vector whereas the subscript [s] indicates a forward-prediction or backward-prediction vector.

As shown in Fig. 64, the motion_vector(r, s) function is a function used for describing a data array related to motion_code[r][s][t], a data array related to motion_residual[r][s][t] and data representing dmvector[t].

The motion_code[r][s][t] is data with a variable length used for representing the magnitude of a motion vector in terms of a value in the range -16 to +16. motion_residual[r][s][t] is data with a variable length used for representing a residual of a motion vector. Thus, by using the values of motion_code[r][s][t] and motion_residual[r][s][t], a detailed motion vector can be described. The dmvector[t] is data used for scaling an existing motion vector with a time distance in order to generate a motion vector in one of the top and bottom fields (for example, the top field) in the dual-prime prediction mode, and used for correction in the vertical direction in order to reflect a shift in the vertical direction between lines of the top and bottom fields. The subscript [r] indicates the first or second vector whereas the subscript [s] indicates a forward-prediction or backward-prediction vector. The subscript [t] indicates that the motion vector is a component in the vertical or horizontal direction.

First of all, the motion_vector(r, s) function describes a data array to represent motion_code[r][s][0] in the horizontal direction as a history stream as shown in Fig. 64. The number of bits of both motion_residual[0][s][t] and motion_residual[1][s][t] is represented by f_code[s][t]. Thus, a value of f_code[s][t] other than "1" indicates that motion_residual[r][s][t] exists in the bitstream. The fact that f_code[s][0], which applies to the horizontal-direction component, is not "1" and motion_code[r][s][0], a horizontal-direction component, is not "0" indicates that a data element representing motion_residual[r][s][0] is included in the bit stream and the horizontal-direction component of the motion vector exists. In this case, a data element representing motion_residual[r][s][0], a horizontal component, is thus described.
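The presence rule for motion_residual[r][s][t] can be sketched as below. This is an illustrative reading only; treating (f_code - 1) as the residual bit count is an MPEG-2-style assumption rather than something stated here, and the reader primitives are placeholders:

```c
#include <stdint.h>

/* Hypothetical reader primitives; the VLC table for motion_code is
 * omitted, so these declarations are placeholder assumptions. */
typedef struct BitReader BitReader;
extern int      read_motion_code_vlc(BitReader *br); /* value in -16..+16 */
extern uint32_t read_bits(BitReader *br, int n);

/* Parse one motion-vector component (t = 0: horizontal, t = 1:
 * vertical): motion_residual[r][s][t] is present only when
 * f_code[s][t] is not 1 and motion_code[r][s][t] is not 0. */
void parse_motion_component(BitReader *br, int f_code_st,
                            int *motion_code, int *motion_residual)
{
    *motion_code     = read_motion_code_vlc(br);
    *motion_residual = 0;

    if (f_code_st != 1 && *motion_code != 0)
        *motion_residual = (int)read_bits(br, f_code_st - 1);
}
```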
Subsequently, a data array to represent motion_code[r][s][1] in the vertical direction is described as a history stream. Likewise, the number of bits of both motion_residual[0][s][t] and motion_residual[1][s][t] is represented by f_code[s][t]. Thus, a value of f_code[s][t] other than "1" indicates that motion_residual[r][s][t] exists in the bitstream. The fact that f_code[s][1], which applies to the vertical-direction component, is not "1" and motion_code[r][s][1], a vertical-direction component, is not "0" indicates that a data element representing motion_residual[r][s][1] is included in the bitstream and the vertical-direction component of the motion vector exists. In this case, a data element representing motion_residual[r][s][1], a vertical component, is thus described.

It should be noted that, in the variable-length format, the history information can be eliminated in order to reduce the transfer rate of transmitted bits.

For example, in order to transfer macroblock_type and motion_vectors() but not to transfer quantiser_scale_code, slice_quantiser_scale_code is set at "00000" in order to reduce the bit rate.

In addition, in order to transfer only macroblock_type but not to transfer motion_vectors(), quantiser_scale_code and dct_type, "not_coded" is used as macroblock_type in order to reduce the bit rate.

Furthermore, in order to transfer only picture_coding_type but not to transfer all information following slice(), picture_data() having no slice_start_code is used in order to reduce the bit rate.

As described above, in order to prevent 23 consecutive bits of 0 from appearing in user_data, a "1" bit is inserted for every 22 bits. It should be noted, however, that a "1" bit can also be inserted for each number of bits smaller than 22. In addition, instead of inserting a "1" bit by counting the number of consecutive 0 bits, a "1" bit can also be inserted by examining Byte_align.

In addition, in the MPEG standard, generation of 23 consecutive bits of 0 is prohibited. In actuality, however, only a sequence of such 23 bits starting from the beginning of a byte is a problem. That is to say, a sequence of such 23 bits not starting from the beginning of a byte is not a problem. Thus, a "1" bit may be inserted, typically for every 24 bits, at a position other than the LSB.

Furthermore, while the history information is made in a format close to a video elementary stream as described above, the history information can also be made in a format close to a packetized elementary stream or a transport stream. In addition, even though user_data of the elementary stream is placed in front of picture_data according to the above description, user_data can be placed at another location as well.

It should be noted that programs to be executed by a computer for carrying out the pieces of processing described above can be presented to the user through network presentation media such as the Internet or a digital satellite, in addition to presentation media implemented by an information recording medium such as a magnetic disc and a CD-ROM.
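As an informal sketch of the marker-bit scheme described above (the bit-writer is a minimal assumption, not the patent's implementation):

```c
#include <stddef.h>
#include <stdint.h>

/* Insert a "1" bit after every 22 payload bits so that 23
 * consecutive zero bits can never appear in user_data. */
typedef struct {
    uint8_t *buf;     /* output buffer, assumed zero-initialized */
    size_t   bitpos;  /* next bit position to write              */
} BitWriter;

static void put_bit(BitWriter *bw, int bit)
{
    if (bit)
        bw->buf[bw->bitpos >> 3] |= (uint8_t)(0x80u >> (bw->bitpos & 7));
    bw->bitpos++;
}

/* Copy 'nbits' of history-stream payload, inserting a marker "1"
 * after every 22 source bits. */
void write_with_marker_bits(BitWriter *bw, const uint8_t *src, size_t nbits)
{
    size_t run = 0;
    for (size_t i = 0; i < nbits; i++) {
        int bit = (src[i >> 3] >> (7 - (i & 7))) & 1;
        put_bit(bw, bit);
        if (++run == 22) {    /* after every 22 payload bits ... */
            put_bit(bw, 1);   /* ... force a "1" marker bit      */
            run = 0;
        }
    }
}
```

With this spacing, the longest possible run of zeros in the output is 22 bits, which is why start-code emulation inside user_data cannot occur.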

Claims (117)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding said video data as a present encoding process to generate an encoded video stream; and control means for receiving said past encoding parameters, and for controlling said present encoding process of said encoding means based on said past encoding parameters, wherein said past encoding parameters are described in said source encoded video stream.
2. The transcoding system according to claim 1, wherein said encoding means encodes a reference picture included in said source video data with a present picture type assigned to said reference picture in said present encoding process; and said control means judges whether said reference picture has been encoded with the same picture type as said assigned picture type in the past encoding processes, and controls said present encoding process based on a result of the judgment.
3. The transcoding system according to claim 1, wherein said encoding means encodes a reference picture included in said video data with a present picture type assigned to said reference picture in said present encoding process; and said control means detects past picture types which have been assigned to said reference picture in said past encoding processes by referring to said past encoding parameters, and then, said control means controls said present encoding process based on said present picture type and said past picture types.
4. The transcoding system according to claim 2, wherein said control means selects optimum encoding parameters from said past encoding parameters according to said judgment, and controls said present encoding process of said encoding means based on said selected optimum encoding parameters.
5. The transcoding system according to claim 2, wherein said control means encodes said reference picture by using said past encoding parameters generated at one of said past encoding processes.
6. The transcoding system according to claim 4, wherein said past encoding parameters include motion vector information generated at said past encoding processes; and said encoding means includes motion vector detect means for detecting motion vector information of said reference picture in said present encoding process.
7. The transcoding system according to claim 6, wherein said control means controls an operation of said motion vector detect means based on the result of the judgment.
8. The transcoding system according to claim 7, wherein said control means reuses said motion vector information included in said past encoding parameters as a substitute for calculation of new motion vector information in said motion vector detect means.
9. The transcoding system according to claim 7, wherein said control means reuses said motion vector information generated at said past encoding processes, if said reference picture has been encoded with the same picture type as said present picture type in past encoding processes.
10. The transcoding system according to claim 9, wherein said control means controls said motion vector detect means so that new motion vector information is detected by said motion vector detect means if said reference picture has not been encoded with said assigned picture type at a past encoding process.
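Purely as a non-authoritative sketch of the control rule recited in claims 7 to 10, with all type and function names hypothetical:

```c
#include <stdbool.h>

/* Hypothetical types standing in for the claimed means. */
typedef struct { int x, y; } MotionVector;
typedef struct {
    bool         valid;        /* a matching-generation entry exists */
    int          picture_type; /* picture type used in that generation */
    MotionVector mv;           /* motion vector recorded in the history */
} HistoryEntry;

/* Assumed motion-estimation routine for the present pass. */
extern MotionVector run_motion_estimation(const void *picture);

/* Decide the vector for the present encoding pass. */
MotionVector select_motion_vector(const HistoryEntry *past,
                                  int present_picture_type,
                                  const void *picture)
{
    /* Reuse the historical vector when the reference picture was
     * already encoded with the same picture type (claim 9). */
    if (past->valid && past->picture_type == present_picture_type)
        return past->mv;

    /* Otherwise detect new motion vector information (claim 10). */
    return run_motion_estimation(picture);
}
```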
11. The transcoding system according to claim 1, wherein said control means selects optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters, and controls said present encoding process of said encoding means based on said optimum encoding parameters.
12. The transcoding system according to claim 1, wherein said encoding means encodes a reference picture included in said source video data with a present picture type assigned to said reference picture in said present encoding process; and said control means judges whether said reference picture has been encoded with the same picture type as said assigned picture type at the past encoding processes, and selects said optimum encoding parameters based on a result of the judgment.
13. The transcoding system according to claim 11, wherein said past encoding parameters include prediction mode information indicating a frame prediction mode or a field prediction mode; and said control means controls said present encoding process according to said prediction mode information.
14. The transcoding system according to claim 11, wherein said control means reuses said prediction mode information included in said past encoding parameters as a substitute for calculation of new prediction mode information, if said reference picture has been encoded with the same picture type as said present picture type at past encoding processes.
15. The transcoding system according to claim 11, wherein said encoding parameters include prediction type information indicating intra prediction, forward prediction, backward prediction, or interpolative prediction; and said control means controls said present encoding process based on said prediction type information.
16. The transcoding system according to claim 15, wherein said control means reuses said prediction type information included in said past encoding parameters as a substitute for calculation of new prediction type information, if said reference picture has been encoded with the same picture type as said present picture type at past encoding processes.
17. The transcoding system according to claim 11, wherein said encoding parameters include DCT mode information indicating a frame DCT mode or a field DCT mode; and said control means controls said present encoding process based on said DCT mode information.
18. The transcoding system according to claim 17, wherein said control means reuses said DCT mode information included in said past encoding parameters as a substitute for calculation of new DCT mode information, if said reference picture has been encoded with the same picture type as said present picture type at past encoding processes.
19. The transcoding system according to claim 1, wherein said control means generates present encoding parameters corresponding to said present encoding process of said encoding means.
20. The transcoding system according to claim 19, wherein said control means selects optimum encoding parameters which are commensurate with said present encoding process from said present encoding parameters and said past encoding parameters, and controls said present encoding process of said encoding means based on said optimum encoding parameters.
21. The transcoding system according to claim 20, wherein said encoding means encodes a reference picture included in said source video data with a present picture type assigned to said reference picture in said present encoding process; and said control means judges whether said reference picture has been encoded with the same picture type as said assigned picture type at the past encoding processes, and selects said optimum encoding parameters based on a result of the judgment.
22. The transcoding system according to claim 21, wherein said past encoding parameters include quantization information generated at said past encoding processes; and said encoding means includes quantization means for quantizing said reference picture in said present encoding process.
23. The transcoding system according to claim 22, wherein said control means receives buffer information indicating a fullness of a transmission buffer for storing said encoded video stream, and controls said quantization means based on said buffer information so as to prevent overflow and underflow of said transmission buffer.
24. The transcoding system according to claim 23, wherein said control means controls said quantization means based on a quantization step size derived from said buffer information and quantization step sizes derived from said quantization information included in said past encoding parameters.
25. The transcoding system according to claim 24, wherein said control means controls said quantization means by using the biggest quantization step size selected from among said quantization step size corresponding to said buffer information and said quantization step sizes corresponding to said quantization information.
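A minimal sketch of the rate-control rule of claims 23 to 25, assuming quantization step sizes are representable as plain integers:

```c
/* Take the largest quantization step among the one implied by buffer
 * fullness and those recorded in the encoding history, so the present
 * pass never quantizes more finely than any constraint allows.
 * This is an illustrative reading, not the claimed implementation. */
int select_quantisation_step(int q_from_buffer,
                             const int *q_from_history, int n_history)
{
    int q = q_from_buffer;
    for (int i = 0; i < n_history; i++)
        if (q_from_history[i] > q)
            q = q_from_history[i];
    return q;
}
```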
26. The transcoding system according to claim 1, wherein said control means controls said encoding means so that said encoding means describes said past encoding parameters into said encoded video stream.
27. The transcoding system according to claim 1, wherein said encoding means includes a processing means for processing said encoded video stream to generate an MPEG bit stream in conformance with MPEG standards, said MPEG bit stream comprising a sequence layer, a GOP layer, a picture layer, a slice layer, and a macroblock layer.
28. The transcoding system according to claim 27, wherein:

said control means generates present encoding parameters corresponding to said present encoding process of said encoding means; and said processing means describes said present encoding parameters into said picture layer, said slice layer, and said macroblock layer.
29. The transcoding system according to claim 28, wherein said processing means generates a history stream comprising said past encoding parameters in order to describe said past encoding parameters in a user data area provided in said picture layer.
30. The transcoding system according to claim 29, wherein said processing means inserts marker bits into said history stream in order to prevent emulation of a fixed start code defined in the MPEG standards, and describes said history stream with said marker bits inserted in said user data area provided in said picture layer.
31. A method for converting source encoded video stream, comprising the steps of:

decoding the source encoded video stream to generate decoded video data;

extracting past encoding parameters corresponding to different generations of past encoding processes from the source encoded video stream;

encoding the video data as a present encoding process to generate an encoded video stream;

receiving the past encoding parameters; and controlling the present encoding process based on the past encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
32. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream, and for outputting said past encoding parameters as history information;

encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process; and control means for receiving said history information including past encoding parameters, and for controlling said present encoding process of said encoding means based on said history information so that said present encoding process is optimized by selectively using said past encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
33. A method for converting source encoded video stream, comprising the steps of:

decoding the source encoded video stream to generate decoded video data;

extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

outputting said past encoding parameters as history information;

encoding said decoded video data to generate an encoded video stream as a present encoding process;

receiving said history information including past encoding parameters; and controlling said present encoding process based on said history information so that said present encoding process is optimized by selectively using said past encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
34. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process; and control means for receiving said past encoding parameters, for selecting optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters, and for controlling said present encoding process of said encoding means based on said optimum encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
35. A method for converting source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;

extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process;

receiving said past encoding parameters;

selecting optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters; and controlling said present encoding process based on said optimum encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
36. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding a reference picture included in said decoded video data with an assigned picture type; and control means for receiving past encoding parameters generated at past encoding processes, for selecting optimum encoding parameters from said past encoding parameters according to said assigned picture type, and for controlling a present encoding process of said encoding means based on said optimum encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
37. A method for converting source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;

extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding a reference picture included in said decoded video data with an assigned picture type;

receiving past encoding parameters generated at past encoding processes;

selecting optimum encoding parameters from said past encoding parameters according to said assigned picture type; and controlling a present encoding process based on said optimum encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
38. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding a reference picture included in said decoded video data with an assigned picture type; and control means for judging whether the reference picture has been encoded with said assigned picture type at the past encoding process, and for controlling a present encoding process of said encoding means based on a result of the judgment, wherein said past encoding parameters are described in the source encoded video stream.
39. A method for converting source encoded video stream, comprising the steps of:
decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding a reference picture included in said decoded video data with an assigned picture type;

judging whether the reference picture has been encoded with said assigned picture type at the past encoding process; and controlling a present encoding process based on a result of the judgment, wherein said past encoding parameters are described in the source encoded video stream.
40. A transcoding system for converting source encoded video stream, comprising:
decoding means for decoding said source encoded video stream to generate decoded video data, and for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process; and stream generating means for generating an MPEG stream including said encoded video data, present encoding parameters generated at said present encoding process, and said past encoding parameters generated at said past encoding processes, wherein said past encoding parameters are described in the source encoded video stream.
41. A method for converting source encoded video stream, comprising the steps of:
decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process; and generating an MPEG stream including said encoded video data, present encoding parameters generated at said present encoding process, and said past encoding parameters generated at said past encoding processes, wherein said past encoding parameters are described in a user data area of said source encoded video stream.
42. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process; and stream generating means for generating an MPEG bit stream comprising a sequence layer, a GOP layer, a picture layer, a slice layer, and a macroblock layer, wherein each layer includes present encoding parameters generated at said present encoding process, and wherein said picture layer includes said past encoding parameters generated at said past encoding processes, wherein said past encoding parameters are described in a user data area of said picture layer.
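As an illustrative sketch of where claims 42 and 43 place the history, the past parameters travel in a user data field of the picture layer. The value 0x000001B2 is the MPEG user_data start code; the buffer handling below is a simplification and not the claimed implementation:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Append a user_data field carrying the history stream to the
 * picture layer. 'dst' is assumed large enough to hold the result;
 * marker bits are assumed to have been inserted into 'history'
 * already, so no start-code emulation can occur in the payload. */
size_t append_history_user_data(uint8_t *dst,
                                const uint8_t *history, size_t len)
{
    static const uint8_t user_data_start_code[4] = { 0x00, 0x00, 0x01, 0xB2 };

    memcpy(dst, user_data_start_code, 4);   /* user_data start code   */
    memcpy(dst + 4, history, len);          /* history stream payload */
    return 4 + len;
}
```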
43. A method for converting source encoded video stream, comprising the steps of:
decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process; and generating an MPEG bit stream comprising a sequence layer, a GOP layer, a picture layer, a slice layer, and a macroblock layer, wherein each layer includes present encoding parameters generated at said present encoding process, and wherein said picture layer includes said past encoding parameters generated at said past encoding processes, wherein said past encoding parameters are described in a user data area of said picture layer.
44. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and for extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;
encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said past encoding parameters;
describing means for describing said past encoding parameters in said encoded video stream; and output means for outputting said encoded video stream in which said past encoding parameters are described.
45. A method for converting source encoded video stream, comprising the steps of:
decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said past encoding parameters;

describing said past encoding parameters in said encoded video stream; and outputting said encoded video stream in which said past encoding parameters are described.
46. A transcoding system for converting source encoded video stream, the system comprising:

means for decoding said source encoded video stream to generate decoded video data;
means for extracting history information including previous encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

means for encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said history information; and means for describing said history information in said encoded stream so that said history information is available in advance for a subsequent encoding process.
47. A method for converting source encoded video stream, comprising the steps of:
decoding said source encoded video stream to generate decoded video data;
extracting history information including previous encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said history information; and describing said history information in said encoded stream so that said history information is available in advance for a subsequent encoding process.
48. A transcoding system for converting source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream based on encoding parameters of the latest encoding process to generate base-band video data, for extracting past encoding parameters corresponding to different generations of past encoding processes, and for multiplexing said encoding parameters of said latest and past encoding processes into said base-band video data; and encoding means for encoding said base-band video data based on said encoding parameters of said latest and past encoding processes to generate a new encoded video stream, wherein said past encoding parameters are described in the source encoded video stream.
49. A method for converting source encoded video stream, comprising the steps of:
decoding said source encoded video stream based on encoding parameters of the latest encoding process to generate base-band video data;

extracting past encoding parameters corresponding to different generations of past encoding processes;

multiplexing said encoding parameters of said latest and past encoding processes into said base-band video data; and encoding said base-band video data based on said encoding parameters of said latest and past encoding processes to generate a new encoded video stream, wherein said past encoding parameters are described in the source encoded video stream.
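A hedged sketch of the hand-over described in claims 48 and 49, in which the decoding side passes base-band video with the latest and past generation parameters multiplexed alongside it; every name here is hypothetical:

```c
#include <stdint.h>

#define MAX_GENERATIONS 4  /* assumed bound on retained generations */

/* A representative subset of the parameters one generation might
 * record; the real parameter set is far richer. */
typedef struct {
    int picture_type;        /* I, P or B in that generation */
    int quantiser_scale;     /* quantization step used       */
    int motion_x, motion_y;  /* representative motion vector */
} EncodingParameters;

/* Base-band video data with the multiplexed parameter history, as
 * handed from the decoding means to the encoding means. */
typedef struct {
    uint8_t           *luma, *chroma;           /* base-band video   */
    EncodingParameters latest;                  /* latest decode     */
    EncodingParameters past[MAX_GENERATIONS];   /* earlier passes    */
    int                n_past;                  /* entries in 'past' */
} BaseBandWithHistory;
```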
50. A video encoding apparatus for encoding a source video data, the apparatus comprising:

encoding means for encoding said video data as a present encoding process to generate an encoded video stream; and control means for receiving past encoding parameters corresponding to different generations of past encoding processes, and for controlling said present encoding process of said encoding means based on said past encoding parameters, wherein said past encoding parameters are described in the source encoded video stream.
51. The video encoding apparatus according to claim 50, wherein said encoding means encodes a reference picture included in said source video data with a present picture type assigned to said reference picture in said present encoding process; and said control means judges whether said reference picture has been encoded with the same picture type as said assigned picture type at the past encoding processes, and controls said present encoding process based on a result of the judgment.
52. The video encoding apparatus according to claim 50, wherein said encoding means encodes a reference picture included in said video data with a present picture type assigned to said reference picture in said present encoding process; and said control means detects past picture types which have been assigned to said reference picture in said past encoding processes by referring to said past encoding parameters, and then, said control means controls said present encoding process based on said present picture type and said past picture types.
53. The video encoding apparatus according to claim 51, wherein said control means selects optimum encoding parameters from said past encoding parameters according to said judgment, and controls said present encoding process of said encoding means based on said selected optimum encoding parameters.
54. The video encoding apparatus according to claim 51, wherein said control means encodes said reference picture by using said past encoding parameters generated at one of said past encoding processes.
55. The video encoding apparatus according to claim 53, wherein said past encoding parameters include motion vector information generated at said past encoding processes; and said encoding means includes motion vector detect means for detecting motion vector information of said reference picture in said present encoding process.
56. The video encoding apparatus according to claim 55, wherein said control means controls an operation of said motion vector detect means based on the result of the judgment.
57. The video encoding apparatus according to claim 56, wherein said control means reuses said motion vector information included in said past encoding parameters as a substitute for calculation of new motion vector information in said motion vector detect means.
58. The video encoding apparatus according to claim 56, wherein said control means reuses said motion vector information generated at said past encoding processes, if said reference picture has been encoded with the same picture type as said present picture type at past encoding processes.
59. The video encoding apparatus according to claim 58, wherein said control means controls said motion vector detect means so that new motion vector information is detected by said motion vector detect means, if said reference picture has not been encoded with said assigned picture type at a past encoding process.
60. The video encoding apparatus according to claim 50, wherein said control means selects optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters, and controls said present encoding process of said encoding means based on said optimum encoding parameters.
61. The video encoding apparatus according to claim 50, wherein said encoding means encodes a reference picture included in said source video data with a present picture type assigned to said reference picture in said present encoding process; and said control means judges whether said reference picture has been encoded with the same picture type as said assigned picture type at the past encoding processes, and selects optimum encoding parameters based on a result of the judgment.
62. The video encoding apparatus according to claim 60, wherein said past encoding parameters include prediction mode information indicating a frame prediction mode or a field prediction mode; and said control means controls said present encoding process according to said prediction mode information.
63. The video encoding apparatus according to claim 62, wherein said control means reuses said prediction mode information included in said past encoding parameters as a substitute for calculation of new prediction mode information, if said reference picture has been encoded with the same picture type as said present picture type at past encoding processes.
64. The video encoding apparatus according to claim 60, wherein said encoding parameters include prediction type information indicating intra prediction, forward prediction, backward prediction, or interpolative prediction; and
said control means controls said present encoding process based on said prediction type information.
65. The video encoding apparatus according to claim 64, wherein said control means reuses said prediction type information included in said past encoding parameters as a substitute for calculation of new prediction type information, if said reference picture has been encoded with the same picture type as said present picture type at past encoding processes.
66. The video encoding apparatus according to claim 60, wherein said encoding parameters include DCT mode information indicating a frame DCT mode or a field DCT mode; and said control means controls said present encoding process based on said DCT mode information.
67. The video encoding apparatus according to claim 66, wherein said control means reuses said DCT mode information included in said past encoding parameters as a substitute for calculation of new DCT mode information, if said reference picture has been encoded with the same picture type as said present picture type at past encoding processes.
68. The video encoding apparatus according to claim 50, wherein said control means generates present encoding parameters corresponding to said present encoding process of said encoding means.
69. The video encoding apparatus according to claim 68, wherein said control means selects optimum encoding parameters which are commensurate with said present encoding process from said present encoding parameters and said past encoding parameters, and controls said present encoding process of said encoding means based on said optimum encoding parameters.
70. The video encoding apparatus according to claim 69, wherein said encoding means encodes a reference picture included in said source video data with a present picture type assigned to said reference picture in said present encoding process; and said control means judges whether said reference picture has been encoded with the same picture type as said assigned picture type at the past encoding processes, and selects said optimum encoding parameters based on a result of the judgment.
71. The video encoding apparatus according to claim 70, wherein said past encoding parameters include quantization information generated at said past encoding processes; and said encoding means includes quantization means for quantizing said reference picture in said present encoding process.
72. The video encoding apparatus according to claim 71, wherein said control means receives buffer information indicating a fullness of a transmission buffer for storing said encoded video stream, and controls said quantization means based on said buffer information so as to prevent overflow and underflow of said transmission buffer.
73. The video encoding apparatus according to claim 72, wherein said control means controls said quantization means based on a quantization step size derived from said buffer information and quantization step sizes derived from said quantization information included in said past encoding parameters.
74. The video encoding apparatus according to claim 73, wherein said control means controls said quantization means by using the biggest quantization step size selected from among said quantization step size corresponding to said buffer information and said quantization step sizes corresponding to said quantization information.
75. The video encoding apparatus according to claim 50, wherein said control means controls said encoding means so that said encoding means describes said past encoding parameters into said encoded video stream.
76. The video encoding apparatus according to claim 50, wherein said encoding means includes a processing means for processing said encoded video stream to generate an MPEG bit stream in conformance with MPEG standards, said MPEG bit stream comprising a sequence layer, a GOP layer, a picture layer, a slice layer, and a macroblock layer.
77. The video encoding apparatus according to claim 76, wherein:

said control means generates present encoding parameters corresponding to said present encoding process of said encoding means;

said processing means describes said present encoding parameters into said picture layer, said slice layer, and said macroblock layer, and describes said past encoding parameters into a user data area provided in said picture layer.
78. The video encoding apparatus according to claim 77, wherein said processing means generates a history stream comprising said past encoding parameters in order to describe said past encoding parameters in a user data area.
79. The video encoding apparatus according to claim 78, wherein said processing means inserts marker bits into said history stream in order to prevent emulation of a fixed start code defined in the MPEG standards, and describes said history stream with said marker bits inserted in said user data area provided in said picture layer.
80. A video encoding apparatus for encoding a source video data, the apparatus comprising:

encoding means for encoding said video data to generate source encoded video stream as a present encoding process; and control means for receiving history information including past encoding parameters corresponding to different generations of past encoding processes, and for controlling a present encoding process of said encoding means based on said history information so that said present encoding process is optimized by selectively using said past encoding parameters, wherein said past encoding parameters are described in said source encoded video stream.
81. A video encoding method for encoding a source video data, comprising the steps of:

encoding said video data to generate an encoded video stream as a present encoding process; and receiving history information including past encoding parameters corresponding to different generations of past encoding processes, and controlling a present encoding process based on said history information so that said present encoding process is optimized by selectively using said past encoding parameters.
82. A video encoding apparatus for encoding a source video data to generate encoded video stream, the apparatus comprising:

encoding means for encoding said video data to generate said encoded video stream as a present encoding process; and control means for receiving past encoding parameters corresponding to different generations of past encoding processes, selecting optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters, and controlling the present encoding process based on optimum encoding parameters.
83. A video encoding method for encoding a source video data to generate encoded video stream, comprising the steps of:

encoding said video data to generate said encoded video stream as a present encoding process; and receiving past encoding parameters corresponding to different generations of past encoding processes, selecting optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters, and controlling the present encoding process based on said optimum encoding parameters.
84. A video encoding apparatus for encoding a source video data to generate encoded video stream, the apparatus comprising:

encoding means for encoding a reference picture included in said video data with an assigned picture type; and control means for receiving past encoding parameters corresponding to different generations of past encoding processes, selecting optimum encoding parameters from said past encoding parameters according to said assigned picture type, and controlling a present encoding process of said encoding means based on said optimum encoding parameters.
85. A video encoding method for encoding a source video data to generate encoded video stream, comprising the steps of:

encoding a reference picture included in said video data with an assigned picture type;
receiving past encoding parameters corresponding to different generations of past encoding processes, and selecting optimum encoding parameters from said past encoding parameters according to said assigned picture type; and controlling a present encoding process based on said optimum encoding parameters.
86. A video encoding apparatus for encoding a source video data to generate encoded video stream, the apparatus comprising:

encoding means for encoding a reference picture included in said video data with an assigned picture type; and control means for judging whether the reference picture has been encoded with said assigned picture type corresponding to a number of different generations of a past encoding process and controlling a present encoding process of said encoding means based on a result of the judgment.
87. A video encoding method for encoding a source video data to generate encoded video stream, comprising the steps of:

encoding a reference picture included in said video data with an assigned picture type;

judging whether the reference picture has been encoded with said assigned picture type corresponding to a number of different generations of a past encoding process;
and controlling a present encoding process based on a result of the judgment.
88. A video encoding apparatus for encoding a source video data, the apparatus comprising:

encoding means for encoding said video data to generate encoded video data as a present encoding process; and stream generating means for generating an MPEG bit stream including said encoded video data, present encoding parameters generated at said present encoding process, and past encoding parameters corresponding to a number of different generations of a past encoding process.
89. A video encoding method for encoding a source video data, comprising the steps of:

encoding said source video data to generate encoded video data as a present encoding process; and generating an MPEG bit stream including said encoded video data, present encoding parameters generated at said present encoding process, and past encoding parameters corresponding to a number of different generations of a past encoding process.
90. A video encoding apparatus for encoding a source video data to generate encoded video stream, the apparatus comprising:

encoding means for encoding said video data to generate encoded video data as a present encoding process; and stream generating means for generating an MPEG bit stream comprising a sequence layer, a GOP layer, a picture layer, a slice layer, and a macroblock layer, wherein each layer includes present encoding parameters generated at said present encoding process, and wherein said picture layer includes past encoding parameters corresponding to a number of different generations of a past encoding process.
91. A video encoding method for encoding a source video data to generate encoded video stream, comprising the steps of:

encoding said source video data to generate the encoded video stream as a present encoding process; and generating an MPEG bit stream comprising a sequence layer, a GOP layer, a picture layer, a slice layer, and a macroblock layer, wherein each layer includes present encoding parameters generated at said present encoding process, and wherein said picture layer includes past encoding parameters corresponding to a number of different generations of past encoding processes.
92. A video encoding apparatus for encoding a video data to generate encoded video stream, the apparatus comprising:

encoding means for encoding a source video data by referring to the past encoding parameters corresponding to a number of different generations of a past encoding process to generate said encoded video stream;

describing means for describing said past encoding parameters into said encoded video stream; and output means for outputting said encoded video stream in which said past encoding parameters are described.
93. A video encoding method for encoding a video data to generate encoded video stream, comprising the steps of:

encoding a source video data by referring to the past encoding parameters corresponding to a number of different generations of a past encoding process to generate said encoded video stream;

describing said past encoding parameters into said encoded video stream; and outputting said encoded video stream in which said past parameters are described.
94. A video encoding apparatus for encoding a source video data, the apparatus comprising:

means for receiving history information including a plurality of encoding parameters corresponding to different generations of previous encoding processes;

encoding means for encoding said source video data to generate an encoded video stream by referring to said history information; and means for describing said history information in said encoded video stream so that said history information is available in advance to perform encoding.
95. A video encoding method for encoding a source video data, comprising the steps of:

receiving history information including a plurality of encoding parameters corresponding to different generations of previous encoding processes;

encoding said source video data to generate an encoded video stream by referring to said history information; and describing said history information in said encoded video stream so that said history information is available in advance to perform encoding.
96. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding video data as a present encoding process to generate an encoded video stream; and control means for receiving said past encoding parameters, and controlling said present encoding process based on said past encoding parameters.
97. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;

extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data as a present encoding process to generate an encoded video stream; and receiving said past encoding parameters, and controlling said present encoding process based on said past encoding parameters.
98. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream, and outputting said past encoding parameters as history information;

encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process; and control means for receiving said history information including past encoding parameters, and for controlling said present encoding process of said encoding means based on said history information so that said present encoding process is optimized by selectively using said past encoding parameters.
99. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;

extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

outputting said past encoding parameters as history information;

encoding said decoded video data to generate an encoded video stream as a present encoding process;

receiving said history information including past encoding parameters; and controlling said present encoding process based on said history information so that said present encoding process is optimized by selectively using said past encoding parameters.
100. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process; and control means for receiving said past encoding parameters, selecting optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters, and controlling said present encoding process based on said optimum encoding parameters.
101. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process;

receiving said past encoding parameters; selecting optimum encoding parameters which are commensurate with said present encoding process from said past encoding parameters; and controlling said present encoding process based on said optimum encoding parameters.
102. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;
encoding means for encoding a reference picture included in said decoded video data with an assigned picture type; and control means for receiving past encoding parameters generated at past encoding processes, selecting optimum encoding parameters from said past encoding parameters according to said assigned picture type, and controlling a present encoding process of said encoding means based on said optimum encoding parameters.
103. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding a reference picture included in said decoded video data with an assigned picture type; receiving past encoding parameters generated at past encoding processes;

selecting optimum encoding parameters from said past encoding parameters according to said assigned picture type; and controlling a present encoding process based on said optimum encoding parameters.
104. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;
encoding means for encoding a reference picture included in said decoded video data with an assigned picture type; and control means for judging whether the reference picture has been encoded with said assigned picture type at the past encoding process, and controlling a present encoding process of said encoding means based on a result of the judgment.
105. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding a reference picture included in said decoded video data with an assigned picture type;

judging whether the reference picture has been encoded with said assigned picture type at a past encoding process; and controlling a present encoding process based on a result of the judgment.
106. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;
encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process;

and stream generating means for generating an MPEG stream including said encoded video data, present encoding parameters generated at said present encoding process, and said past encoding parameters generated at said past encoding processes.
107. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process; and generating an MPEG stream including said encoded video data, present encoding parameters generated at said present encoding process, and said past encoding parameters generated at said past encoding processes.
108. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;
encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process; and stream generating means for generating an MPEG bit stream comprising a sequence layer, GOP layer, picture layer, slice layer, and macroblock layer, wherein each layer includes present encoding parameters generated at said present encoding process, and wherein said picture layer includes said past encoding parameters generated at said past encoding processes.
109. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;
extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process;

generating an MPEG bit stream comprising a sequence layer, GOP layer, picture layer, slice layer, and macroblock layer, wherein each layer includes present encoding parameters generated at said present encoding process, and wherein said picture layer includes said past encoding parameters generated at said past encoding processes.
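Claims 106 through 109 place the carried history inside the layered stream itself, with the picture layer holding the past-generation parameters. The toy model below uses plain dictionaries in place of real MPEG-2 syntax elements; the field names are ours, and the assumption that the history rides in picture-layer user data is an illustration, not a quotation of the claims.

```python
def build_picture_layer(present_params, past_params):
    return {
        "picture_header": present_params,                 # present generation
        "user_data": {"encoding_history": past_params},   # past generations
        "slices": [{"macroblocks": []}],                  # payload omitted
    }

def build_stream(pictures):
    # the sequence and GOP layers would carry their own present parameters
    return {"sequence_header": {}, "gop": {"pictures": pictures}}

stream = build_stream([build_picture_layer(
    {"picture_coding_type": "I", "q_scale": 8},
    [{"generation": 1, "picture_coding_type": "I", "q_scale": 10}],
)])
```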
110. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream to generate decoded video data, and extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;
encoding means for encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said past encoding parameters;
describing means for describing said past encoding parameters in said encoded video stream; and output means for outputting said encoded video stream in which said past encoding parameters are described.
111. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;

extracting past encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said past encoding parameters;

describing said past encoding parameters in said encoded video stream; and outputting said encoded video stream in which said past encoding parameters are described.
112. A stream processing system for processing a source encoded video stream, the system comprising:

means for decoding said source encoded video stream to generate decoded video data;
means for extracting history information including previous encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

means for encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said history information; and means for describing said history information in said encoded video stream so that said history information is available in advance to perform encoding.
113. A stream processing method for processing a source encoded video stream, comprising the steps of:

decoding said source encoded video stream to generate decoded video data;
extracting history information including previous encoding parameters corresponding to different generations of past encoding processes from said source encoded video stream;

encoding said decoded video data to generate an encoded video stream as a present encoding process by referring to said history information; and describing said history information in said encoded video stream so that said history information is available in advance to perform encoding.
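The point of claims 110 through 113 is the round trip: each transcoding generation folds its own parameters into the carried history, so the next encoder sees every earlier pass before it starts. A minimal sketch of that accumulation, assuming a stream can be modelled as a present-parameters record plus a history list:

```python
def transcode_generation(stream, new_params):
    """Re-encode with new parameters; the present pass becomes history."""
    history = list(stream["history"])
    history.append(stream["present"])
    return {"present": new_params, "history": history}

g1 = {"present": {"bitrate": 8_000_000}, "history": []}
g2 = transcode_generation(g1, {"bitrate": 4_000_000})
g3 = transcode_generation(g2, {"bitrate": 2_000_000})
assert [h["bitrate"] for h in g3["history"]] == [8_000_000, 4_000_000]
```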
114. A stream processing system for processing a source encoded video stream, the system comprising:

decoding means for decoding said source encoded video stream based on encoding parameters of a latest encoding process to generate base-band video data, extracting past encoding parameters corresponding to different generations of past encoding processes, and multiplexing said encoding parameters of said latest and past encoding processes into said base-band video data; and

encoding means for encoding said base-band video data based on said encoding parameters of said latest and past encoding processes to generate a new encoded video stream.
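Claim 114 differs from the stream-borne variants in that the parameters ride on the base-band video handed from the decoding means to the encoding means. The rough illustration below simply attaches them to a frame object; a real system would use some ancillary-data channel, and every name here is ours.

```python
import json

def decode_with_history(encoded_stream):
    frame = {"pixels": bytearray(720 * 480)}         # decoded base-band picture
    side = {"latest": encoded_stream["present"],
            "past": encoded_stream["history"]}
    frame["ancillary"] = json.dumps(side).encode()   # multiplexed parameters
    return frame

def encode_from_baseband(frame, new_params):
    side = json.loads(frame["ancillary"].decode())
    # ...rate control and mode decisions would consult side here...
    return {"present": new_params,
            "history": side["past"] + [side["latest"]]}

frame = decode_with_history({"present": {"q_scale": 8}, "history": []})
out = encode_from_baseband(frame, {"q_scale": 12})
```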
115. A video decoding method for decoding a source encoded video stream, comprising the steps of:

parsing a syntax of said source encoded video stream to extract past encoding parameters corresponding to different generations of past encoding processes;
decoding said source encoded video stream to generate decoded video data; and outputting said decoded video data and said past encoding parameters so that said past encoding parameters are available in advance to perform encoding of said decoded video data.
116. A video decoding apparatus for decoding a source encoded video stream, the apparatus comprising:

parsing means for parsing a syntax of said source encoded video stream to extract present encoding parameters generated at a latest encoding process, and past encoding parameters corresponding to different generations of past encoding processes;
decoding means for decoding said source encoded video stream based on said present encoding parameters to generate decoded video data; and means for outputting said decoded video data, said present encoding parameters, and said past encoding parameters so that both of said present and past encoding parameters are available in advance to perform encoding of said decoded video data.
117. A video decoding method for decoding a source encoded video stream, comprising the steps of:

parsing a syntax of said source encoded video stream to extract present encoding parameters generated at a latest encoding process, and past encoding parameters corresponding to different generations of past encoding processes;

decoding said source encoded video stream based on said present encoding parameters to generate decoded video data; and outputting said decoded video data, said present encoding parameters, and said past encoding parameters so that both of said present and past encoding parameters are available in advance to perform encoding of said decoded video data.
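Finally, claims 115 through 117 describe the decoder side in isolation: the parser separates present-generation syntax elements from the carried history before decoding, so a downstream encoder can plan its pass in advance. A schematic sketch with hypothetical field names:

```python
def parse_and_decode(source_stream):
    present = source_stream["picture_header"]                      # latest generation
    past = source_stream.get("user_data", {}).get("encoding_history", [])
    decoded = {"pixels": b"", "q_scale_used": present["q_scale"]}  # stand-in decode
    return decoded, present, past

decoded, present, past = parse_and_decode({
    "picture_header": {"q_scale": 8},
    "user_data": {"encoding_history": [{"q_scale": 10}]},
})
```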
CA002265089A 1998-03-10 1999-03-05 Transcoding system using encoding history information Expired - Fee Related CA2265089C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JPP10-058118 1998-03-10
JP5811898 1998-03-10
JP15724398 1998-06-05
JPP10-157243 1998-06-05

Publications (2)

Publication Number Publication Date
CA2265089A1 CA2265089A1 (en) 1999-09-10
CA2265089C true CA2265089C (en) 2007-07-10

Family

ID=26399199

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002265089A Expired - Fee Related CA2265089C (en) 1998-03-10 1999-03-05 Transcoding system using encoding history information

Country Status (6)

Country Link
US (8) US6560282B2 (en)
EP (4) EP1988720A2 (en)
KR (2) KR100729541B1 (en)
CN (3) CN1332565C (en)
BR (1) BR9900981B1 (en)
CA (1) CA2265089C (en)

Families Citing this family (121)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5715009A (en) 1994-03-29 1998-02-03 Sony Corporation Picture signal transmitting method and apparatus
CA2265089C (en) 1998-03-10 2007-07-10 Sony Corporation Transcoding system using encoding history information
GB2339101B (en) 1998-06-25 2002-09-18 Sony Uk Ltd Processing of compressed video signals using picture motion vectors
KR100571687B1 (en) 1999-02-09 2006-04-18 소니 가부시끼 가이샤 Coding system and its method, coding device and its method, decoding device and its method, recording device and its method, and reproducing device and its method
JP4487374B2 (en) * 1999-06-01 2010-06-23 ソニー株式会社 Encoding apparatus, encoding method, multiplexing apparatus, and multiplexing method
KR100357093B1 (en) * 1999-06-02 2002-10-18 엘지전자 주식회사 apparatus and method for concealing error in moving picture decompression system
GB2350984B (en) * 1999-06-11 2003-10-15 Mitel Corp Synchronisation method and system
US6742019B1 (en) * 1999-07-23 2004-05-25 International Business Machines Corporation Sieved caching for increasing data rate capacity of a heterogeneous striping group
JP3694888B2 (en) * 1999-12-03 2005-09-14 ソニー株式会社 Decoding device and method, encoding device and method, information processing device and method, and recording medium
CN1848941A (en) * 1999-12-15 2006-10-18 三洋电机株式会社 Image reproducing method and image processing method, and image reproducing device, image processing device, and television receiver capable of using the methods
JP3496604B2 (en) * 1999-12-20 2004-02-16 日本電気株式会社 Compressed image data reproducing apparatus and compressed image data reproducing method
FR2809573B1 (en) * 2000-05-26 2002-08-16 Thomson Broadcast Systems METHOD FOR ENCODING A VIDEO IMAGE STREAM
GB0013273D0 (en) * 2000-06-01 2000-07-26 Philips Electronics Nv Video signal encoding and buffer management
JP3519673B2 (en) 2000-07-07 2004-04-19 松下電器産業株式会社 Video data creation device and video encoding device
EP1744563A3 (en) * 2000-07-21 2007-02-28 Matsushita Electric Industrial Co., Ltd. Signal transmission system
US20030118102A1 (en) * 2000-11-10 2003-06-26 Weiping Li Encoding and decoding of truncated scalable bitstreams
JP3632591B2 (en) * 2000-11-13 2005-03-23 日本電気株式会社 Image processing apparatus, method, and computer-readable recording medium
GB2372657B (en) 2001-02-21 2005-09-21 Sony Uk Ltd Signal processing
KR100814431B1 (en) * 2001-04-25 2008-03-18 삼성전자주식회사 Apparatus for transmitting broadcast signal, encoding system for encoding broadcast signal adapted variable bit rate and decoding system thereof
US7409094B2 (en) * 2001-05-04 2008-08-05 Hewlett-Packard Development Company, L.P. Methods and systems for packetizing encoded data
JP3548136B2 (en) * 2001-06-01 2004-07-28 三洋電機株式会社 Image processing device
KR100395396B1 (en) * 2001-06-15 2003-08-21 주식회사 성진씨앤씨 Method and apparatus for data compression of multi-channel moving pictures
GB0116119D0 (en) * 2001-06-30 2001-08-22 Koninkl Philips Electronics Nv Transcoding of video data streams
US9894379B2 (en) * 2001-07-10 2018-02-13 The Directv Group, Inc. System and methodology for video compression
US7804899B1 (en) * 2001-07-13 2010-09-28 Cisco Systems Canada Co. System and method for improving transrating of MPEG-2 video
US7577333B2 (en) * 2001-08-04 2009-08-18 Samsung Electronics Co., Ltd. Method and apparatus for recording and reproducing video data, and information storage medium in which video data is recorded by the same
KR100828343B1 (en) * 2001-08-04 2008-05-08 삼성전자주식회사 Method, apparatus and information storage medium for recording broadcast program
KR100642043B1 (en) * 2001-09-14 2006-11-03 가부시키가이샤 엔티티 도코모 Coding method, decoding method, coding apparatus, decoding apparatus, image processing system, coding program, and decoding program
CN101360240B (en) * 2001-09-14 2012-12-05 株式会社Ntt都科摩 Coding method, decoding method, coding apparatus, decoding apparatus, and image processing system
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
EP1472847A1 (en) * 2002-01-30 2004-11-03 Koninklijke Philips Electronics N.V. Streaming multimedia data over a network having a variable bandwidth
JP3874179B2 (en) * 2002-03-14 2007-01-31 Kddi株式会社 Encoded video converter
ES2644005T3 (en) * 2002-04-19 2017-11-27 Panasonic Intellectual Property Corporation Of America Motion vector calculation procedure
US20040001546A1 (en) 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
BRPI0303336B8 (en) 2002-07-02 2020-06-02 Matsushita Electric Ind Co Ltd motion vector derivation method, animated image encoding method and animated image decoding method
US7154952B2 (en) 2002-07-19 2006-12-26 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
CN100385956C (en) * 2002-11-01 2008-04-30 诺基亚有限公司 A method and device for transcoding images
US7403660B2 (en) 2003-04-30 2008-07-22 Nokia Corporation Encoding picture arrangement parameter in picture bitstream
JP4196726B2 (en) * 2003-05-14 2008-12-17 ソニー株式会社 Image processing apparatus, image processing method, recording medium, and program
JP4224778B2 (en) * 2003-05-14 2009-02-18 ソニー株式会社 STREAM CONVERTING APPARATUS AND METHOD, ENCODING APPARATUS AND METHOD, RECORDING MEDIUM, AND PROGRAM
JP4120934B2 (en) * 2003-06-16 2008-07-16 ソニー株式会社 Image processing apparatus, image processing method, recording medium, and program
JP2005065122A (en) * 2003-08-19 2005-03-10 Matsushita Electric Ind Co Ltd Dynamic image encoding device and its method
JPWO2005025225A1 (en) * 2003-09-04 2006-11-16 日本電気株式会社 Moving picture data conversion method, apparatus, and program
WO2005043915A1 (en) * 2003-10-31 2005-05-12 Kddi Media Will Corporation Video analysis device and video trouble detection device
TWI225363B (en) * 2003-11-10 2004-12-11 Mediatek Inc Method and apparatus for controlling quantization level of a video encoding bit stream
US8170096B1 (en) 2003-11-18 2012-05-01 Visible World, Inc. System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US9715898B2 (en) * 2003-12-16 2017-07-25 Core Wireless Licensing S.A.R.L. Method and device for compressed-domain video editing
US7830959B2 (en) * 2003-12-26 2010-11-09 Electronics And Telecommunications Research Institute Apparatus and method for performing intra prediction for image decoder
US7797454B2 (en) * 2004-02-13 2010-09-14 Hewlett-Packard Development Company, L.P. Media data transcoding devices
US20050276548A1 (en) * 2004-06-10 2005-12-15 Jiang Fu Transcoding closed captioning data from broadcast DTV onto DVD
JP3827162B2 (en) * 2004-06-25 2006-09-27 ソニー株式会社 Data recording device
US8155186B2 (en) * 2004-08-11 2012-04-10 Hitachi, Ltd. Bit stream recording medium, video encoder, and video decoder
JP4655191B2 (en) * 2004-09-02 2011-03-23 ソニー株式会社 Information processing apparatus and method, recording medium, and program
JP4335779B2 (en) * 2004-10-28 2009-09-30 富士通マイクロエレクトロニクス株式会社 Encoding apparatus, recording apparatus using the same, encoding method, and recording method
JP2006174415A (en) * 2004-11-19 2006-06-29 Ntt Docomo Inc Image decoding apparatus, image decoding program, image decoding method, image encoding apparatus, image encoding program, and image encoding method
US7991908B2 (en) * 2004-12-24 2011-08-02 Telecom Italia S.P.A. Media transcoding in multimedia delivery services
JP2006203682A (en) * 2005-01-21 2006-08-03 Nec Corp Converting device of compression encoding bit stream for moving image at syntax level and moving image communication system
KR100772373B1 (en) * 2005-02-07 2007-11-01 삼성전자주식회사 Apparatus for data processing using a plurality of data processing apparatus, method thereof, and recoding medium storing a program for implementing the method
US7373009B2 (en) * 2005-02-09 2008-05-13 Lsi Corporation Method and apparatus for efficient transmission and decoding of quantization matrices
US20060222251A1 (en) * 2005-04-01 2006-10-05 Bo Zhang Method and system for frame/field coding
US8208540B2 (en) * 2005-08-05 2012-06-26 Lsi Corporation Video bitstream transcoding method and apparatus
US8045618B2 (en) * 2005-08-05 2011-10-25 Lsi Corporation Method and apparatus for MPEG-2 to VC-1 video transcoding
US8155194B2 (en) * 2005-08-05 2012-04-10 Lsi Corporation Method and apparatus for MPEG-2 to H.264 video transcoding
US7881384B2 (en) * 2005-08-05 2011-02-01 Lsi Corporation Method and apparatus for H.264 to MPEG-2 video transcoding
US7903739B2 (en) * 2005-08-05 2011-03-08 Lsi Corporation Method and apparatus for VC-1 to MPEG-2 video transcoding
US7912127B2 (en) * 2005-08-05 2011-03-22 Lsi Corporation H.264 to VC-1 and VC-1 to H.264 transcoding
US7953147B1 (en) * 2006-01-18 2011-05-31 Maxim Integrated Products, Inc. Iteration based method and/or apparatus for offline high quality encoding of multimedia content
US8634469B2 (en) * 2006-02-06 2014-01-21 Thomson Licensing Method and apparatus for reusing available motion information as a motion estimation predictor for video encoding
KR100761846B1 (en) * 2006-05-18 2007-09-28 삼성전자주식회사 Apparatus and method for detecting existence of a motion in a successive image
JP4221676B2 (en) * 2006-09-05 2009-02-12 ソニー株式会社 Information processing apparatus, information processing method, recording medium, and program
KR20080035891A (en) * 2006-10-20 2008-04-24 포스데이타 주식회사 Image playback apparatus for providing smart search of motion and method of the same
US8804829B2 (en) * 2006-12-20 2014-08-12 Microsoft Corporation Offline motion description for video generation
EP1978743B1 (en) * 2007-04-02 2020-07-01 Vestel Elektronik Sanayi ve Ticaret A.S. A method and apparatus for transcoding a video signal
US8189676B2 (en) * 2007-04-05 2012-05-29 Hong Kong University Of Science & Technology Advance macro-block entropy coding for advanced video standards
JP4450021B2 (en) * 2007-07-05 2010-04-14 ソニー株式会社 Recording / reproducing apparatus, recording apparatus, reproducing apparatus, recording method, reproducing method, and computer program
US8184715B1 (en) * 2007-08-09 2012-05-22 Elemental Technologies, Inc. Method for efficiently executing video encoding operations on stream processor architectures
JP5018332B2 (en) * 2007-08-17 2012-09-05 ソニー株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
US8098732B2 (en) * 2007-10-10 2012-01-17 Sony Corporation System for and method of transcoding video sequences from a first format to a second format
US8121197B2 (en) 2007-11-13 2012-02-21 Elemental Technologies, Inc. Video encoding and decoding using parallel processors
JP5256803B2 (en) * 2008-03-19 2013-08-07 株式会社メガチップス Transcoder
JP2009272921A (en) * 2008-05-08 2009-11-19 Panasonic Corp Moving image recording apparatus, moving image reproducing apparatus, moving image recording method, moving image reproducing method, and semiconductor integrated circuit
JP5250824B2 (en) * 2008-05-30 2013-07-31 株式会社メガチップス Transcoder
CN102119486A (en) * 2008-06-11 2011-07-06 新加坡国立大学 CMOS amplifier with integrated tunable band-pass function
US8249144B2 (en) * 2008-07-08 2012-08-21 Imagine Communications Ltd. Distributed transcoding
US8767838B1 (en) * 2008-09-05 2014-07-01 Zenverge, Inc. Cascading multiple video transcoders in a video processing system
US9083976B2 (en) 2008-09-05 2015-07-14 Freescale Semiconductor, Inc. Processing a video stream in real time based on binary information of the video stream
US8798150B2 (en) 2008-12-05 2014-08-05 Motorola Mobility Llc Bi-directional video compression for real-time video streams during transport in a packet switched network
US8830339B2 (en) * 2009-04-15 2014-09-09 Qualcomm Incorporated Auto-triggered fast frame rate digital video recording
US20100278231A1 (en) * 2009-05-04 2010-11-04 Imagine Communications Ltd. Post-decoder filtering
KR101045191B1 (en) * 2009-06-09 2011-06-30 (주)제너시스템즈 Improved image transcoder and transcoding method
US8458685B2 (en) * 2009-06-12 2013-06-04 Cray Inc. Vector atomic memory operation vector update system and method
US8583898B2 (en) * 2009-06-12 2013-11-12 Cray Inc. System and method for managing processor-in-memory (PIM) operations
JP5336945B2 (en) * 2009-06-29 2013-11-06 ルネサスエレクトロニクス株式会社 Image processing device
US9807422B2 (en) * 2009-09-03 2017-10-31 Nec Corporation Video encoding device, video encoding method, and video encoding program
EP2355510A1 (en) * 2009-12-21 2011-08-10 Alcatel Lucent Method and arrangement for video coding
US8358698B2 (en) * 2010-01-08 2013-01-22 Research In Motion Limited Method and device for motion vector estimation in video transcoding using full-resolution residuals
US8315310B2 (en) 2010-01-08 2012-11-20 Research In Motion Limited Method and device for motion vector prediction in video transcoding using full resolution residuals
US8340188B2 (en) * 2010-01-08 2012-12-25 Research In Motion Limited Method and device for motion vector estimation in video transcoding using union of search areas
US8559519B2 (en) 2010-01-08 2013-10-15 Blackberry Limited Method and device for video encoding using predicted residuals
US20110170608A1 (en) * 2010-01-08 2011-07-14 Xun Shi Method and device for video transcoding using quad-tree based mode selection
RU2533649C1 (en) * 2010-07-15 2014-11-20 Мицубиси Электрик Корпорейшн Moving image encoding device, moving image decoding device, moving image encoding method and moving image decoding method
TWI716169B (en) 2010-12-03 2021-01-11 美商杜比實驗室特許公司 Audio decoding device, audio decoding method, and audio encoding method
KR101803970B1 (en) * 2011-03-16 2017-12-28 삼성전자주식회사 Method and apparatus for composing content
WO2012158705A1 (en) * 2011-05-19 2012-11-22 Dolby Laboratories Licensing Corporation Adaptive audio processing based on forensic detection of media processing history
RU2648605C1 (en) 2011-10-17 2018-03-26 Кт Корпорейшен Method of video signal decoding
JP5760950B2 (en) * 2011-10-28 2015-08-12 富士通株式会社 Moving picture re-encoding device, moving picture re-encoding method, and moving picture re-encoding computer program
CN102799572B (en) * 2012-07-27 2015-09-09 深圳万兴信息科技股份有限公司 A kind of text code mode and text code device
US9992490B2 (en) * 2012-09-26 2018-06-05 Sony Corporation Video parameter set (VPS) syntax re-ordering for easy access of extension parameters
US20140177729A1 (en) * 2012-12-21 2014-06-26 Ati Technologies Ulc Method and apparatus for transcoding video data
ITMI20131710A1 (en) 2013-10-15 2015-04-16 Sky Italia S R L "ENCODING CLOUD SYSTEM"
KR102071580B1 (en) * 2014-11-03 2020-01-30 삼성전자주식회사 Method and apparatus for re-encoding an image
US20160148279A1 (en) 2014-11-26 2016-05-26 Adobe Systems Incorporated Content Creation, Deployment Collaboration, and Badges
KR102437698B1 (en) * 2015-08-11 2022-08-30 삼성전자주식회사 Apparatus and method for encoding image thereof
US10200694B2 (en) * 2015-09-17 2019-02-05 Mediatek Inc. Method and apparatus for response of feedback information during video call
US10408289B2 (en) * 2016-08-12 2019-09-10 Akebono Brake Industry Co., Ltd. Parking brake torque locking mechanism
US11575922B2 (en) * 2017-12-06 2023-02-07 V-Nova International Limited Methods and apparatuses for hierarchically encoding and decoding a bytestream
CN110198474B (en) * 2018-02-27 2022-03-15 中兴通讯股份有限公司 Code stream processing method and device
CN110620635A (en) * 2018-06-20 2019-12-27 深圳市华星光电技术有限公司 Decoding method, apparatus and readable storage medium
CN113037733A (en) * 2021-03-01 2021-06-25 安徽商信政通信息技术股份有限公司 Non-physical contact nondestructive transmission method and system for aerospace secret-related data
CN113038277B (en) * 2021-03-15 2023-06-23 北京汇钧科技有限公司 Video processing method and device
CN116800976B (en) * 2023-07-17 2024-03-12 武汉星巡智能科技有限公司 Audio and video compression and restoration method, device and equipment for infant with sleep

Family Cites Families (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2443769A2 (en) * 1978-12-08 1980-07-04 Telediffusion Fse COMPRESSION AND EXPANSION (QUANTIFICATION) OF DIFFERENTIALLY CODED TELEVISION DIGITAL SIGNALS
DE3613343A1 (en) 1986-04-19 1987-10-22 Philips Patentverwaltung HYBRID CODERS
US4825448A (en) 1986-08-07 1989-04-25 International Mobile Machines Corporation Subscriber unit for wireless digital telephone system
US5187755A (en) 1988-06-30 1993-02-16 Dainippon Screen Mfg. Co., Ltd. Method of and apparatus for compressing image data
CA2040429C (en) * 1989-09-04 1996-12-17 Yoshihiro Tomita Relay and exchange system for time division multiplex data
EP0492528B1 (en) 1990-12-27 1996-10-09 Kabushiki Kaisha Toshiba Recording/reproducing apparatus
US5260783A (en) 1991-02-21 1993-11-09 Gte Laboratories Incorporated Layered DCT video coder for packet switched ATM networks
US5148272A (en) 1991-02-27 1992-09-15 Rca Thomson Licensing Corporation Apparatus for recombining prioritized video data
US5212549A (en) 1991-04-29 1993-05-18 Rca Thomson Licensing Corporation Error concealment apparatus for a compressed video signal processing system
US5227878A (en) * 1991-11-15 1993-07-13 At&T Bell Laboratories Adaptive coding and decoding of frames and fields of video
US5831668A (en) 1992-02-25 1998-11-03 Imatran Voima Oy Assembly for combustion chamber monitoring camera
US5327520A (en) * 1992-06-04 1994-07-05 At&T Bell Laboratories Method of use of voice message coder/decoder
JP3196906B2 (en) 1992-08-21 2001-08-06 富士ゼロックス株式会社 Image signal encoding device
JP3358835B2 (en) * 1992-12-14 2002-12-24 ソニー株式会社 Image coding method and apparatus
JP3163830B2 (en) 1993-03-29 2001-05-08 ソニー株式会社 Image signal transmission method and apparatus
TW301098B (en) * 1993-03-31 1997-03-21 Sony Co Ltd
JPH06292019A (en) * 1993-04-02 1994-10-18 Fujitsu Ltd Picture data compressor and picture code compressor
JP3085024B2 (en) * 1993-06-01 2000-09-04 松下電器産業株式会社 Image recompressor and image recording device
JPH0795584A (en) * 1993-07-30 1995-04-07 Matsushita Electric Ind Co Ltd Picture coder
NL9301358A (en) 1993-08-04 1995-03-01 Nederland Ptt Transcoder.
JP3081425B2 (en) 1993-09-29 2000-08-28 シャープ株式会社 Video coding device
US5452006A (en) * 1993-10-25 1995-09-19 Lsi Logic Corporation Two-part synchronization scheme for digital video decoders
KR970003789B1 (en) 1993-11-09 1997-03-21 한국전기통신공사 Bit allocation method for controlling bit-rate of video encoder
DE69422960T2 (en) * 1993-12-01 2000-06-15 Matsushita Electric Ind Co Ltd Method and device for editing or mixing compressed images
US6870886B2 (en) 1993-12-15 2005-03-22 Koninklijke Philips Electronics N.V. Method and apparatus for transcoding a digitally compressed high definition television bitstream to a standard definition television bitstream
US5537440A (en) * 1994-01-07 1996-07-16 Motorola, Inc. Efficient transcoding device and method
US5563593A (en) * 1994-03-18 1996-10-08 Lucent Technologies Inc. Video coding with optimized low complexity variable length codes
US5500678A (en) * 1994-03-18 1996-03-19 At&T Corp. Optimized scanning of transform coefficients in video coding
US5754235A (en) * 1994-03-25 1998-05-19 Sanyo Electric Co., Ltd. Bit-rate conversion circuit for a compressed motion video bitstream
US5715009A (en) 1994-03-29 1998-02-03 Sony Corporation Picture signal transmitting method and apparatus
US5541852A (en) * 1994-04-14 1996-07-30 Motorola, Inc. Device, method and system for variable bit-rate packet video communications
US5534937A (en) 1994-04-14 1996-07-09 Motorola, Inc. Minimum-delay jitter smoothing device and method for packet video communications
JPH07288804A (en) 1994-04-18 1995-10-31 Kokusai Denshin Denwa Co Ltd <Kdd> Re-coding device for picture signal
US5940130A (en) 1994-04-21 1999-08-17 British Telecommunications Public Limited Company Video transcoder with by-pass transfer of extracted motion compensation data
DE4416967A1 (en) * 1994-05-13 1995-11-16 Thomson Brandt Gmbh Method and device for transcoding bit streams with video data
JPH10503895A (en) 1994-06-17 1998-04-07 スネル アンド ウィルコックス リミテッド Video compression
GB9413001D0 (en) 1994-06-28 1994-08-17 Ntl Methods for the synchronisation of successive digital video compression/decompression systems
DE69522861T2 (en) * 1994-06-30 2002-04-11 Koninkl Philips Electronics Nv Method and device for code conversion of coded data stream
US5512953A (en) * 1994-08-09 1996-04-30 At&T Corp. Method and apparatus for conversion of compressed bit stream representation of video signal
JPH0865663A (en) 1994-08-19 1996-03-08 Canon Inc Digital image information processor
JP3629728B2 (en) * 1994-08-31 2005-03-16 ソニー株式会社 Moving picture signal encoding method, moving picture signal encoding apparatus, and moving picture signal recording medium
JP3623989B2 (en) 1994-09-22 2005-02-23 キヤノン株式会社 Image conversion method and apparatus
JP3293369B2 (en) 1994-10-12 2002-06-17 ケイディーディーアイ株式会社 Image information re-encoding method and apparatus
JP3588153B2 (en) * 1994-10-31 2004-11-10 三洋電機株式会社 Data editing method and editing device
JP3058028B2 (en) 1994-10-31 2000-07-04 三菱電機株式会社 Image encoded data re-encoding device
US5889561A (en) * 1994-11-04 1999-03-30 Rca Thomson Licensing Corporation Method and apparatus for scaling a compressed video bitstream
GB9501736D0 (en) 1995-01-30 1995-03-22 Snell & Wilcox Ltd Video signal processing
WO1996025823A2 (en) * 1995-02-15 1996-08-22 Philips Electronics N.V. Method and device for transcoding video signals
US5812197A (en) * 1995-05-08 1998-09-22 Thomson Consumer Electronics, Inc. System using data correlation for predictive encoding of video image data subject to luminance gradients and motion
US5774206A (en) * 1995-05-10 1998-06-30 Cagent Technologies, Inc. Process for controlling an MPEG decoder
GB2301970B (en) * 1995-06-06 2000-03-01 Sony Uk Ltd Motion compensated video processing
KR100437298B1 (en) * 1995-07-19 2004-09-04 코닌클리케 필립스 일렉트로닉스 엔.브이. Method and apparatus for decoding a digital video bit stream and receiving apparatus including such apparatus
EP0779744A3 (en) * 1995-12-06 1997-08-20 Thomson Multimedia Sa Method and apparatus for decoding digital video signals
JPH09182083A (en) * 1995-12-27 1997-07-11 Matsushita Electric Ind Co Ltd Video image encoding method and decoding method and device therefor
JPH09307861A (en) * 1996-05-17 1997-11-28 Sony Corp Signal processing method and signal process
ATE278297T1 (en) 1996-07-15 2004-10-15 Snell & Wilcox Ltd VIDEO SIGNAL COMPRESSION
US6856650B1 (en) * 1996-07-16 2005-02-15 Kokusai Denshin Denwa Co., Ltd. Method and apparatus for second or later generation coding of video signal
JPH1032830A (en) 1996-07-16 1998-02-03 Kokusai Denshin Denwa Co Ltd <Kdd> Re-encoding method and device for image information
JP3956323B2 (en) 1996-07-16 2007-08-08 Kddi株式会社 Image information re-encoding method and apparatus
JPH1051766A (en) * 1996-08-05 1998-02-20 Mitsubishi Electric Corp Image coding data converter
US5936616A (en) * 1996-08-07 1999-08-10 Microsoft Corporation Method and system for accessing and displaying a compressed display image in a computer system
JP3623056B2 (en) * 1996-09-10 2005-02-23 ソニー株式会社 Video compression device
GB2318472B (en) * 1996-10-09 2000-11-15 Sony Uk Ltd Processing encoded signals
GB2318246B (en) 1996-10-09 2000-11-15 Sony Uk Ltd Processing digitally encoded signals
WO1998026602A1 (en) * 1996-12-12 1998-06-18 Sony Corporation Equipment and method for compressing picture data
US5870146A (en) * 1997-01-21 1999-02-09 Multilink, Incorporated Device and method for digital video transcoding
WO1998051077A1 (en) 1997-05-09 1998-11-12 Neomedia Technologies, Inc. Method for embedding links to a networked resource in a transmission medium
JP3022405B2 (en) * 1997-06-03 2000-03-21 日本電気株式会社 Image memory controller
US5990958A (en) * 1997-06-17 1999-11-23 National Semiconductor Corporation Apparatus and method for MPEG video decompression
US5907374A (en) 1997-06-30 1999-05-25 Hewlett-Packard Company Method and apparatus for processing a compressed input bitstream representing an information signal
US6012091A (en) * 1997-06-30 2000-01-04 At&T Corporation Video telecommunications server and method of providing video fast forward and reverse
US6043845A (en) * 1997-08-29 2000-03-28 Logitech Video capture and compression system and method for composite video
JP3338774B2 (en) * 1997-12-22 2002-10-28 エヌイーシーソフト株式会社 MPEG encoding apparatus, MPEG PS multiplexing method, and recording medium recording PS multiplexing program
US6100940A (en) * 1998-01-21 2000-08-08 Sarnoff Corporation Apparatus and method for using side information to improve a coding system
GB2333656B (en) 1998-01-22 2002-08-14 British Broadcasting Corp Compressed signals
US6574274B2 (en) * 1998-02-27 2003-06-03 Sony Corporation Picture signal processing system, decoder, picture signal processing method, and decoding method
JP3724204B2 (en) 1998-03-10 2005-12-07 ソニー株式会社 Encoding apparatus and method, and recording medium
CA2265089C (en) * 1998-03-10 2007-07-10 Sony Corporation Transcoding system using encoding history information
CN1175671C (en) * 1998-04-30 2004-11-10 皇家菲利浦电子有限公司 Transcoding of data stream
US6665687B1 (en) * 1998-06-26 2003-12-16 Alexander James Burke Composite user interface and search system for internet and multimedia applications
US6167084A (en) * 1998-08-27 2000-12-26 Motorola, Inc. Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals
JP2000209425A (en) 1998-11-09 2000-07-28 Canon Inc Device and method for processing image and storage medium
KR100571687B1 (en) * 1999-02-09 2006-04-18 소니 가부시끼 가이샤 Coding system and its method, coding device and its method, decoding device and its method, recording device and its method, and reproducing device and its method
US6437787B1 (en) * 1999-03-30 2002-08-20 Sony Corporation Display master control
JP4295861B2 (en) 1999-05-31 2009-07-15 株式会社東芝 Transcoder device
KR100357093B1 (en) * 1999-06-02 2002-10-18 엘지전자 주식회사 apparatus and method for concealing error in moving picture decompression system
GB9920929D0 (en) * 1999-09-03 1999-11-10 Sony Uk Ltd Video signal processor
JP3694888B2 (en) 1999-12-03 2005-09-14 ソニー株式会社 Decoding device and method, encoding device and method, information processing device and method, and recording medium
US7151800B1 (en) * 2000-01-15 2006-12-19 Sony Corporation Implementation of a DV video decoder with a VLIW processor and a variable length decoding unit
US6369722B1 (en) 2000-03-17 2002-04-09 Matra Nortel Communications Coding, decoding and transcoding methods
FR2809573B1 (en) 2000-05-26 2002-08-16 Thomson Broadcast Systems METHOD FOR ENCODING A VIDEO IMAGE STREAM
US20020016755A1 (en) * 2000-07-17 2002-02-07 Pearce Kenneth F. Method of establishing a commercial relationship between a service provider and a potential customer of the service, including a reasoning criterion, and method of face-to-face advertising in a public place
JP3632591B2 (en) * 2000-11-13 2005-03-23 日本電気株式会社 Image processing apparatus, method, and computer-readable recording medium
US8782254B2 (en) * 2001-06-28 2014-07-15 Oracle America, Inc. Differentiated quality of service context assignment and propagation
EP1292153B1 (en) * 2001-08-29 2015-08-19 Canon Kabushiki Kaisha Image processing method and apparatus, computer program, and storage medium
WO2003054848A1 (en) * 2001-12-21 2003-07-03 Matsushita Electric Industrial Co., Ltd. Computer display system, computer apparatus and display apparatus
JP2005304065A (en) 2005-05-16 2005-10-27 Sony Corp Decoding device and method, coding device and method, information processing device and method, and recording medium

Also Published As

Publication number Publication date
CN1599463A (en) 2005-03-23
US8687690B2 (en) 2014-04-01
US8938000B2 (en) 2015-01-20
US8982946B2 (en) 2015-03-17
EP1988720A2 (en) 2008-11-05
KR100729541B1 (en) 2007-06-19
US8934536B2 (en) 2015-01-13
CA2265089A1 (en) 1999-09-10
US8948251B2 (en) 2015-02-03
US8638849B2 (en) 2014-01-28
KR19990077748A (en) 1999-10-25
EP0942605A2 (en) 1999-09-15
CN1549603A (en) 2004-11-24
US20030016755A1 (en) 2003-01-23
CN1178516C (en) 2004-12-01
BR9900981B1 (en) 2013-12-03
EP1976300A2 (en) 2008-10-01
CN1241095A (en) 2000-01-12
CN1332565C (en) 2007-08-15
EP0942605A3 (en) 2005-08-31
KR100766740B1 (en) 2007-10-12
US20080013627A1 (en) 2008-01-17
BR9900981A (en) 2000-09-12
US20080008237A1 (en) 2008-01-10
EP1976301A2 (en) 2008-10-01
US20080013625A1 (en) 2008-01-17
US7469007B2 (en) 2008-12-23
US20080232464A1 (en) 2008-09-25
US20080013626A1 (en) 2008-01-17
US20070263719A1 (en) 2007-11-15
KR20060069804A (en) 2006-06-22
US6560282B2 (en) 2003-05-06
US20030128766A1 (en) 2003-07-10

Similar Documents

Publication Publication Date Title
CA2265089C (en) Transcoding system using encoding history information
US7126993B2 (en) Information processing apparatus, information processing method and recording medium
KR100571687B1 (en) Coding system and its method, coding device and its method, decoding device and its method, recording device and its method, and reproducing device and its method
JP3724205B2 (en) Decoding device and method, and recording medium
JP3724204B2 (en) Encoding apparatus and method, and recording medium
JP2000059766A (en) Encoding device, its method and serving medium thereof
JP4139983B2 (en) Encoded stream conversion apparatus, encoded stream conversion method, stream output apparatus, and stream output method
JP4482811B2 (en) Recording apparatus and method
JP4543321B2 (en) Playback apparatus and method
JP4478630B2 (en) Decoding device, decoding method, program, and recording medium
JP4016348B2 (en) Stream conversion apparatus, stream conversion method, and recording medium
JP2000059783A (en) Image data processor and its method, and providing medium

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20190305
