CN101431681B - Method and device for encoding of picture - Google Patents

Method and device for encoding of picture

Info

Publication number
CN101431681B
Authority
CN
China
Prior art keywords
frame
motion vector
block
decoding
reference frame
Prior art date
Legal status
Expired - Lifetime
Application number
CN 200810184669
Other languages
Chinese (zh)
Other versions
CN101431681A (en)
Inventor
近藤敏志
角野真也
羽饲诚
安倍清史
Current Assignee
Panasonic Intellectual Property Corp of America
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd
Publication of CN101431681A
Application granted
Publication of CN101431681B

Abstract

A coding control unit (110) and a mode selection unit (109) are included. The coding control unit (110) determines the coding order for a plurality of consecutive B-pictures located between I-pictures and P-pictures so that the B-picture whose temporal distance in display order from two previously coded pictures is greatest is coded first, and thereby reorders the B-pictures into coding order. When a current block is coded in direct mode, the mode selection unit (109) scales the forward motion vector of the block that is included in the backward reference picture of the current picture and is co-located with the current block, if that forward motion vector was used for coding the co-located block, so as to generate the motion vectors of the current block.

Description

Picture decoding method and picture decoding apparatus
This application is a divisional of the application filed by the applicant on February 26, 2003, with application number 03805346.2 and entitled "Moving picture coding method and moving picture decoding method".
Technical field
The present invention relates to a picture decoding method and a picture decoding apparatus, and in particular to a method of applying inter-picture predictive coding and inter-picture predictive decoding to a current frame using already processed frames as reference frames.
Background art
In moving picture coding, the amount of information is generally compressed by exploiting the spatial and temporal redundancy that a moving picture contains. Transformation into the frequency domain is commonly used to exploit redundancy in the spatial direction, and inter-picture (inter-frame) predictive coding is used to exploit redundancy in the temporal direction. In inter-picture predictive coding, when a frame is coded, already coded frames located before or after the current frame in display order are used as reference frames for the current frame. The amount of motion of the current frame relative to such a reference frame is detected, motion compensation is performed according to that motion, and the difference between the motion-compensated image data and the image data of the current frame is taken to remove the redundancy in the temporal direction. The redundancy in the spatial direction is then removed from this difference, which compresses the amount of information of the current frame.
In the moving picture coding scheme currently being standardized under the name H.264, a frame that is coded without inter-picture prediction, that is, by intra-picture coding, is called an I-frame (I-picture). A frame that is inter-predictively coded with reference to one already processed frame located before or after the current frame in display order is called a P-frame, and a frame that is inter-predictively coded with reference to two already processed frames located before or after the current frame in display order is called a B-frame (see, for example, ISO/IEC 14496-2 "Information technology - Coding of audio-visual objects - Part 2: Visual", pp. 218-219).
Fig. 1(a) shows the relationship between each frame and its reference frames in the above moving picture coding scheme, and Fig. 1(b) shows the order of the bitstream generated by coding.
Frame I1 is an I-frame, frames P5, P9 and P13 are P-frames, and frames B2, B3, B4, B6, B7, B8, B10, B11 and B12 are B-frames. As shown by the arrows, P-frames P5, P9 and P13 are inter-predictively coded using I-frame I1, P-frame P5 and P-frame P9, respectively, as reference frames.
As shown by the arrows, B-frames B2, B3 and B4 are inter-predictively coded using I-frame I1 and P-frame P5 as reference frames, B-frames B6, B7 and B8 are inter-predictively coded using P-frame P5 and P-frame P9 as reference frames, and B-frames B10, B11 and B12 are inter-predictively coded using P-frame P9 and P-frame P13 as reference frames.
In this coding, a frame used as a reference frame is coded before the frames that refer to it. The bitstream generated by the above coding therefore has the frame order shown in Fig. 1(b).
Under the H.264 moving picture coding scheme, a coding mode called direct mode can be selected when coding a B-frame. The inter-picture prediction method of direct mode is explained with Fig. 2, which illustrates the motion vectors in direct mode and shows the case where block a of frame B6 is coded in direct mode. In this case, use is made of motion vector c, which was used when coding block b located at the same position as block a in frame P9, the reference frame located after frame B6. Motion vector c is the motion vector used when block b was coded, and it refers to frame P5. Block a uses motion vectors parallel to motion vector c, obtains reference blocks from frame P5 as the forward reference frame and frame P9 as the backward reference frame, and is coded by bi-directional prediction. That is, the motion vectors used when coding block a are motion vector d for frame P5 and motion vector e for frame P9.
However, as described above, when a B-frame is inter-predictively coded with reference to an I-frame or P-frame, the temporal distance between the current frame and a reference frame may become long, and in that case the coding efficiency is lowered. In particular, when the number of B-frames inserted between an I-frame and the nearest P-frame, or between two nearest P-frames, becomes large, the coding efficiency drops noticeably.
The present invention has been made to solve the above problem. Its object is to provide a moving picture coding method and a moving picture decoding method that avoid degradation of the coding efficiency of B-frames even when the number of B-frames inserted between I-frames and P-frames, or between P-frames, becomes large, and that improve the coding efficiency of direct mode.
Summary of the invention
To achieve these objects, the present invention adopts the following technical solutions:
A picture decoding method for decoding coded pictures, comprising a decoding step in which, when a current block (a) is decoded, motion vectors (E1, E2) of the current block (a) are determined from a motion vector (C1) of a co-located block (b), the co-located block (b) being a block in an already decoded frame (B7) that is located at the same position as the current block, and the current block (a) is motion-compensated and decoded in direct mode using the motion vectors (E1, E2) of the current block and the reference frames corresponding to those motion vectors (E1, E2). In the decoding step, when the co-located block (b) was decoded using one motion vector (C1) and one backward reference frame corresponding to that motion vector (C1), the motion vector (C1) used when decoding the co-located block (b) is scaled using differences of the information indicating the display order of frames, whereby two motion vectors (E1, E2) for motion-compensating and decoding the current block (a) in direct mode are generated for the current block (a), and the current block (a) is motion-compensated and decoded in direct mode using the two generated motion vectors (E1, E2) and the two reference frames corresponding respectively to the two generated motion vectors.
In the above picture decoding method, of the two reference frames corresponding respectively to the two motion vectors of the current block, the first reference frame is the frame that contains the co-located block, and the second reference frame is the backward reference frame used when decoding the co-located block, that is, the reference frame corresponding to the motion vector that is scaled when generating the two motion vectors of the current block.
In the above picture decoding method, the information indicating the display order of frames consists of first information indicating the display order of the frame containing the current block, second information indicating the display order of the second reference frame of the current block, and third information indicating the display order of the frame that contains the co-located block and is the first reference frame of the current block. The differences of the information are the difference between the first and second information, the difference between the first and third information, and the difference between the second and third information.
A picture decoding apparatus for decoding coded pictures, comprising a decoding unit which, when a current block is decoded, determines motion vectors of the current block from a motion vector of a co-located block, the co-located block being a block in an already decoded frame that is located at the same position as the current block, and which motion-compensates and decodes the current block in direct mode using the motion vectors of the current block and the reference frames corresponding to those motion vectors. When the co-located block was decoded using one motion vector and the backward reference frame corresponding to that motion vector, the decoding unit scales the motion vector used when decoding the co-located block, using differences of the information indicating the display order of frames, thereby generating for the current block two motion vectors for motion-compensating and decoding the current block in direct mode, and motion-compensates and decodes the current block in direct mode using the two generated motion vectors and the two reference frames corresponding respectively to the two generated motion vectors.
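Purely as an illustration of the decoding step described above, the following Python sketch computes the three differences of display-order information; the function and variable names are assumptions and not part of the patent text (display-order values are simply the frame numbers).

    def display_order_differences(info1, info2, info3):
        # info1: display order of the frame containing the current block (1st information)
        # info2: display order of the 2nd reference frame of the current block (2nd information)
        # info3: display order of the frame containing the co-located block,
        #        which is the 1st reference frame of the current block (3rd information)
        d12 = info1 - info2   # difference of the 1st and 2nd information
        d13 = info1 - info3   # difference of the 1st and 3rd information
        d23 = info2 - info3   # difference of the 2nd and 3rd information
        return d12, d13, d23

    # These three differences serve as the temporal distances with which the motion vector of the
    # co-located block is scaled into the two motion vectors of the current block (cf. Formulas 1-4 below).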
Further, when a current block of a B-frame being coded is coded in direct mode, the motion vector for motion-compensating the current block may be obtained by scaling, with the difference of the information indicating the display order of frames, the first motion vector that refers to a first reference frame and was used when coding the block located at the same position as the current block in the backward P-frame.
Thus, when direct mode is selected, the first motion vector of the backward P-frame is scaled, so that no motion vector information needs to be added to the bitstream and the prediction efficiency can be improved.
Further, when a current block of a B-frame being coded is coded in direct mode, and the block located at the same position as the current block in the second reference frame was coded using at least a first motion vector that refers to a first reference frame, that first motion vector is scaled with the difference of the information indicating the display order of frames; when the co-located block in the second reference frame was coded using only a second motion vector that refers to a second reference frame, that second motion vector is scaled with the difference of the information indicating the display order of frames. In either case the motion vector for motion-compensating the current block may be obtained in this way.
Thus, when direct mode is selected, the first motion vector of the co-located block in the second reference frame is scaled if it exists, and if the co-located block has no first motion vector but only a second motion vector, that second motion vector is scaled. No motion vector information therefore needs to be added to the bitstream, and the prediction efficiency can be improved.
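A minimal Python sketch of this selection rule is given below, assuming a simple record for the co-located block; the class and field names are illustrative and not part of the coded syntax.

    class ColocatedBlock:
        def __init__(self, first_mv=None, second_mv=None):
            self.first_mv = first_mv     # 1st motion vector, referring to the 1st reference frame
            self.second_mv = second_mv   # 2nd motion vector, referring to the 2nd reference frame

    def vector_to_scale(colocated):
        # Scale the 1st motion vector if the co-located block has one;
        # otherwise scale the 2nd motion vector, which is then the only one it has.
        if colocated.first_mv is not None:
            return colocated.first_mv
        return colocated.second_mv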
A moving picture decoding method according to the present invention decodes a bitstream generated by coding the image data corresponding to each frame constituting a moving picture, and comprises a decoding step of inter-predictively decoding a current frame using already decoded frames as reference frames. In the decoding step, when inter-picture predictive decoding is performed based on bi-directional reference using decoded frames as a first reference frame and a second reference frame, a bitstream in which at least the frames nearest in display order are used as the first reference frame and the second reference frame is decoded.
Thus, at the time of decoding, a bitstream generated by coding in which the frames located nearest in display order were used as the first reference frame and the second reference frame for inter-picture predictive coding based on bi-directional reference can be decoded.
Further, a moving picture decoding method according to the present invention decodes a bitstream generated by coding the image data corresponding to each frame constituting a moving picture, and comprises a decoding step of inter-predictively decoding a current frame using already decoded frames as reference frames. In the decoding step, when the current frame has a block that is inter-predictively decoded based on bi-directional reference using decoded frames as a first reference frame and a second reference frame, and the current block is decoded in direct mode, in which the current block is decoded according to a motion vector of an already decoded block and motion-compensated by the motion vector of the current block, the motion vector for motion-compensating the current block is obtained by scaling, with the difference of the information indicating the display order of frames, the first motion vector that refers to the first reference frame and was used when decoding the block located at the same position as the current block in the second reference frame.
Thus, when direct mode is selected, the first motion vector of the second reference frame is scaled, so that the decoding process can be carried out correctly.
When the current frame has a block that is inter-predictively decoded based on bi-directional reference and the current block is decoded in direct mode, the motion vector for motion-compensating the current block may also be obtained by scaling, with the difference of the information indicating the display order of frames, the second motion vector that refers to the second reference frame and was used when decoding the block located at the same position as the current block in the second reference frame.
Thus, when direct mode is selected, the second motion vector of the second reference frame (reference picture) is scaled, so that the decoding process can be carried out correctly.
Further, when the current frame has a block that is inter-predictively decoded based on bi-directional reference, the current block is decoded in direct mode, and the block located at the same position as the current block in the second reference frame was itself decoded in direct mode, the motion vector for motion-compensating the current block may also be obtained by scaling, with the difference of the information indicating the display order of the reference frames, the first motion vector that refers to the first reference frame and was actually used when decoding that block in the second reference frame.
Thus, when direct mode is selected, the first motion vector actually used for the second reference frame is scaled, so that the decoding process can be carried out correctly.
Further, when the current frame has a block that is inter-predictively decoded based on bi-directional reference and the current block is decoded in direct mode, the motion vector for motion-compensating the current block may also be obtained by scaling, with the difference of the information indicating the display order of frames, the first motion vector that refers to a first reference frame and was used when decoding the block located at the same position as the current block, where that block belongs to a frame which is located later in display order and is inter-predictively decoded based on uni-directional reference using a decoded frame as its first reference frame.
Thus, when direct mode is selected, the first motion vector of a frame that is inter-predictively decoded based on uni-directional reference is scaled, so that the decoding process can be carried out correctly.
A moving picture decoding method for decoding coded pictures, characterized in that it comprises a decoding step in which, when a current block is decoded, the motion vectors of the current block are determined from a motion vector of a co-located block, the co-located block being a block in an already decoded frame that is located at the same position as the current block, and the current block is motion-compensated and decoded in direct mode using the motion vectors of the current block and the reference frames corresponding to those motion vectors; in the decoding step, when the co-located block was decoded using two motion vectors and the two reference frames corresponding respectively to those two motion vectors, one of the two motion vectors used when decoding the co-located block is scaled using the differences of the information indicating the display order of frames, whereby two motion vectors for motion-compensating and decoding the current block in direct mode are generated for the current block, and the current block is motion-compensated and decoded in direct mode using the two generated motion vectors and the two reference frames corresponding respectively to the two generated motion vectors.
In the above moving picture decoding method, of the two reference frames corresponding respectively to the two motion vectors of the current block, the first reference frame is the frame that contains the co-located block, and the second reference frame is one of the two reference frames used when decoding the co-located block, namely the reference frame corresponding to the motion vector that is scaled when generating the two motion vectors of the current block.
In the above moving picture decoding method, when the co-located block was decoded in direct mode, one of the two motion vectors used when decoding the co-located block is used to generate the two motion vectors of the current block.
In the above moving picture decoding method, the information indicating the display order of frames consists of first information indicating the display order of the frame containing the current block, second information indicating the display order of the second reference frame of the current block, and third information indicating the display order of the frame that contains the co-located block and is the first reference frame of the current block; the differences of the information are the difference between the first and second information, the difference between the first and third information, and the difference between the second and third information.
A moving picture decoding apparatus for decoding coded pictures, characterized by comprising a decoding unit which, when a current block is decoded, determines the motion vectors of the current block from a motion vector of a co-located block, the co-located block being a block in an already decoded frame that is located at the same position as the current block, and which motion-compensates and decodes the current block in direct mode using the motion vectors of the current block and the reference frames corresponding to those motion vectors; when the co-located block was decoded using two motion vectors and the two reference frames corresponding respectively to those two motion vectors, the decoding unit scales one of the two motion vectors used when decoding the co-located block, using the differences of the information indicating the display order of frames, thereby generating for the current block two motion vectors for motion-compensating and decoding the current block in direct mode, and motion-compensates and decodes the current block in direct mode using the two generated motion vectors and the two reference frames corresponding respectively to the two generated motion vectors.
In addition, the present invention can be realized not only as such a moving picture coding method and moving picture decoding method, but also as a moving picture coding apparatus and a moving picture decoding apparatus that have, as units, the characteristic steps included in the moving picture coding method and the moving picture decoding method. It can also be realized as the bitstream coded by the moving picture coding method, which can be distributed via a recording medium such as a CD-ROM or a transmission medium such as the Internet.
Description of drawings
Fig. 1 shows the frame reference relationships and the frame order in a conventional moving picture coding method: (a) shows the relationship between each frame and its reference frames, and (b) shows the order of the bitstream generated by coding.
Fig. 2 shows the motion vectors in direct mode in the conventional moving picture coding method.
Fig. 3 is a block diagram of an embodiment of a moving picture coding apparatus using the moving picture coding method of the present invention.
Fig. 4 illustrates frame numbers and relative indices (reference indices) in the embodiment of the present invention.
Fig. 5 shows the coded picture signal format produced by the moving picture coding apparatus of the embodiment of the present invention.
Fig. 6 shows the frame order in the reordering memory in the embodiment of the present invention: (a) shows the input order and (b) shows the reordered order.
Fig. 7 shows the motion vectors in direct mode in the embodiment of the present invention: (a) shows the case where the current block a belongs to frame B7, (b) shows the first and second examples where the current block a belongs to frame B6, (c) shows the third example where the current block a belongs to frame B6, and (d) shows the fourth example where the current block a belongs to frame B6.
Fig. 8 shows the motion vectors in direct mode in the embodiment of the present invention: (a) shows the fifth example where the current block a belongs to frame B6, (b) shows the sixth example where the current block a belongs to frame B6, (c) shows the seventh example where the current block a belongs to frame B6, and (d) shows the case where the current block a belongs to frame B8.
Fig. 9 shows the frame reference relationships and the frame order in the embodiment of the present invention: (a) shows the reference relationships of the frames in display order, and (b) shows the frame order reordered into coding order (bitstream order).
Fig. 10 shows the frame reference relationships and the frame order in the embodiment of the present invention: (a) shows the reference relationships of the frames in display order, and (b) shows the frame order reordered into coding order (bitstream order).
Fig. 11 shows the frame reference relationships and the frame order in the embodiment of the present invention: (a) shows the reference relationships of the frames in display order, and (b) shows the frame order reordered into coding order (bitstream order).
Fig. 12 is a hierarchical representation of the frame prediction structure of Fig. 6 in the embodiment of the present invention.
Fig. 13 is a hierarchical representation of the frame prediction structure of Fig. 9 in the embodiment of the present invention.
Fig. 14 is a hierarchical representation of the frame prediction structure of Fig. 10 in the embodiment of the present invention.
Fig. 15 is a hierarchical representation of the frame prediction structure of Fig. 11 in the embodiment of the present invention.
Fig. 16 is a block diagram of an embodiment of a moving picture decoding apparatus using the moving picture decoding method of the present invention.
Fig. 17 illustrates a recording medium that stores a program for realizing the moving picture coding method and the moving picture decoding method of the embodiment by a computer system: (a) shows an example of the physical format of a flexible disk as the recording medium body, (b) shows the front appearance, the cross-sectional structure and the flexible disk itself, and (c) shows a configuration for recording and reproducing the program on the flexible disk FD.
Fig. 18 is a block diagram showing the overall configuration of a content supply system that realizes a content distribution service.
Fig. 19 shows an example of a mobile phone.
Fig. 20 is a block diagram of the internal configuration of the mobile phone.
Fig. 21 is a block diagram of the overall configuration of a digital broadcasting system.
Embodiment
Embodiments of the present invention are described below with reference to the drawings.
(Embodiment 1)
Fig. 3 is a block diagram of an embodiment of a moving picture coding apparatus using the moving picture coding method of the present invention.
As shown in Fig. 3, the moving picture coding apparatus comprises a reordering memory 101, a difference calculation unit 102, a prediction error coding unit 103, a bitstream generation unit 104, a prediction error decoding unit 105, an addition unit 106, a reference frame (reference picture) memory 107, a motion vector detection unit 108, a mode selection unit 109, a coding control unit 110, switches 111-115 and a motion vector storage unit 116.
The reordering memory 101 stores the moving picture that is input frame by frame in display order. The coding control unit 110 reorders the frames stored in the reordering memory 101 into coding order. The coding control unit 110 also controls the storing of motion vectors into the motion vector storage unit 116.
The motion vector detection unit 108 uses already coded and decoded image data as a reference frame and, within a search area in that frame, detects the motion vector that indicates the position predicted to be optimal. The mode selection unit 109 determines the coding mode of each macroblock using the motion vectors detected by the motion vector detection unit 108, and generates predictive image data according to that coding mode. The difference calculation unit 102 calculates the difference between the image data read from the reordering memory 101 and the predictive image data input from the mode selection unit 109, and generates prediction error image data.
The prediction error coding unit 103 performs coding processes such as frequency transform and quantization on the input prediction error image data and generates coded data. The bitstream generation unit 104 performs variable-length coding and the like on the input coded data, and generates a bitstream by adding information such as the motion vectors and the coding mode input from the mode selection unit 109 and other related information.
The prediction error decoding unit 105 performs decoding processes such as inverse quantization and inverse frequency transform on the input coded data and generates decoded difference image data. The addition unit 106 adds the decoded difference image data input from the prediction error decoding unit 105 to the predictive image data input from the mode selection unit 109 to generate decoded image data. The reference frame memory 107 stores the generated decoded image data.
Fig. 4 illustrates frames and relative indices (reference indices). A relative index is used to uniquely identify a reference frame stored in the reference frame memory 107 and, as shown in Fig. 4, is a number associated with each frame. A relative index is also used to indicate the reference frame to be used when a block is coded by inter-picture prediction.
Fig. 5 shows the format of the coded moving picture signal produced by the moving picture coding apparatus. The coded signal Picture for one frame consists of a header coded signal Header at the beginning of the frame, a block coded signal Block1 for a block coded in direct mode, a block coded signal Block2 for a block coded by inter-picture prediction other than direct mode, and so on. The block coded signal Block2 for inter-picture prediction other than direct mode contains, in this order, a first relative index RIdx1 and a second relative index RIdx2 indicating the two reference frames used for the inter-picture prediction, a first motion vector MV1, and a second motion vector MV2. The block coded signal Block1 for direct mode, on the other hand, contains neither the first relative index RIdx1, the second relative index RIdx2, the first motion vector MV1 nor the second motion vector MV2. Which of the first relative index RIdx1 and the second relative index RIdx2 is used can be judged from the prediction type PredType. The first relative index RIdx1 indicates the first reference frame and the second relative index RIdx2 indicates the second reference frame; that is, whether a reference frame is the first or the second reference frame is determined by the position of the data in the bitstream.
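The difference between the two block types can be pictured with the following Python sketch; the class and field names are illustrative assumptions and do not reproduce the actual coded syntax.

    class InterBlock:      # Block2: inter-picture prediction other than direct mode
        def __init__(self, pred_type, ridx1, ridx2, mv1, mv2):
            self.pred_type = pred_type   # PredType: indicates which relative indices are used
            self.ridx1 = ridx1           # 1st relative index -> 1st reference frame
            self.ridx2 = ridx2           # 2nd relative index -> 2nd reference frame
            self.mv1 = mv1               # 1st motion vector
            self.mv2 = mv2               # 2nd motion vector

    class DirectBlock:     # Block1: direct mode carries no relative indices and no motion vectors
        def __init__(self, pred_type):
            self.pred_type = pred_type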
A frame that is inter-predictively coded based on uni-directional reference, using one already coded frame located before or after it in display order as a first reference frame, is a P-frame; a frame that is inter-predictively coded based on bi-directional reference, using already coded frames located before or after it in display order as a first reference frame and a second reference frame, is a B-frame. In this embodiment, the first reference frame is described as the forward reference frame and the second reference frame as the backward reference frame, and the motion vectors for the first reference frame and the second reference frame, namely the first motion vector and the second motion vector, are described as the forward motion vector and the backward motion vector, respectively.
Next, the method of assigning the first relative index and the second relative index is explained with Fig. 4(a).
For the first relative index, based on the information indicating display order, values starting from 0 are first assigned to the reference frames that precede the current frame, in order of proximity to the current frame. After values starting from 0 have been assigned to all reference frames preceding the current frame, the subsequent values are assigned to the reference frames that follow the current frame, in order of proximity to the current frame.
For the second relative index, based on the information indicating display order, values starting from 0 are first assigned to the reference frames that follow the current frame, in order of proximity to the current frame. After values starting from 0 have been assigned to all reference frames following the current frame, the subsequent values are assigned to the reference frames that precede the current frame, in order of proximity to the current frame.
For example, in Fig. 4(a), when the first relative index RIdx1 is 0 and the second relative index RIdx2 is 1, the forward reference frame is the B-frame with frame number 6 and the backward reference frame is the P-frame with frame number 9. Here, the frame number is a number indicating display order.
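A Python sketch of this default assignment is shown below, assuming the frame numbers are display-order numbers; the function name and data structures are illustrative only.

    def assign_relative_indices(reference_frame_numbers, current_frame_number):
        # Frames before the current frame, nearest first, then frames after it, nearest first.
        past = sorted((n for n in reference_frame_numbers if n < current_frame_number),
                      key=lambda n: current_frame_number - n)
        future = sorted((n for n in reference_frame_numbers if n > current_frame_number),
                        key=lambda n: n - current_frame_number)
        first_index = {n: i for i, n in enumerate(past + future)}    # 1st relative index
        second_index = {n: i for i, n in enumerate(future + past)}   # 2nd relative index
        return first_index, second_index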
The relative index of a block is expressed by a variable-length codeword, and the smaller the value, the shorter the code assigned to it. Since the reference frame nearest to the current frame is usually selected for inter-picture prediction, assigning the relative index values in order of proximity to the current frame, as described above, improves coding efficiency.
On the other hand, the assignment of reference frames to relative indices can be changed arbitrarily by explicit signaling using the buffer control signal in the coded signal (RPSL in the Header shown in Fig. 5). With this reassignment, the reference frame whose second relative index is 0 can be changed to any reference frame in the reference frame memory 107; for example, the assignment of indices to frames can be changed as shown in Fig. 4(b).
The operation of the moving picture coding apparatus configured as described above is explained below.
Fig. 6 illustrates the frame order in the reordering memory 101: (a) shows the input order and (b) shows the reordered order. The vertical lines represent frames; in the symbol at the lower right of each frame, the first letter indicates the frame type (I, P or B) and the following number is the frame number, which indicates the display order.
The input pictures are input frame by frame in display order into the reordering memory 101, for example as shown in Fig. 6(a). When frames are input into the reordering memory 101, the coding control unit 110 reorders the input frames into coding order. The reordering is carried out according to the reference relationships of inter-picture predictive coding, so that a frame used as a reference frame is coded before the frames that use it as a reference frame.
Here, a P-frame is assumed to refer to one nearby already coded I- or P-frame located before or after it in display order, and a B-frame is assumed to refer to two nearby already coded frames located before or after it in display order.
As for the coding order, among the B-frames between two P-frames (three B-frames in the example of Fig. 6(a)), coding starts from the B-frame located in the middle, and the B-frames closer to the P-frames are coded after that. For example, for frames B6 through P9, the frames are coded in the order P9, B7, B6, B8.
In this case, among frames B6 through P9, the frame at which an arrow in Fig. 6(a) ends refers to the frame from which that arrow starts. That is, frame B7 refers to frames P5 and P9, frame B6 refers to frames P5 and B7, and frame B8 refers to frames B7 and P9. The coding control unit 110 then reorders the frames into coding order as shown in Fig. 6(b).
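The reordering of the B-frames between two P-frames can be sketched as follows in Python; the function name and the tie-breaking choice (the earlier of two equally distant B-frames is coded first) are assumptions chosen to match the example order P9, B7, B6, B8.

    def coding_order(prev_anchor, next_anchor, b_frames):
        # prev_anchor, next_anchor: display-order numbers of the bracketing I/P-frames
        # b_frames: display-order numbers of the B-frames between them
        order = [next_anchor]                 # the later anchor frame is coded first
        coded = [prev_anchor, next_anchor]
        remaining = list(b_frames)
        while remaining:
            # code next the B-frame farthest in display order from the already coded frames
            pick = max(remaining, key=lambda b: min(abs(b - c) for c in coded))
            order.append(pick)
            coded.append(pick)
            remaining.remove(pick)
        return order

    print(coding_order(5, 9, [6, 7, 8]))   # -> [9, 7, 6, 8], i.e. P9, B7, B6, B8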
Each reordered frame is then read from the reordering memory 101 in units of motion compensation. The unit of motion compensation is called a macroblock, and here a macroblock has a size of horizontal 16 × vertical 16 pixels. The coding processes of frames P9, B7, B6 and B8 shown in Fig. 6(a) are explained below in this order.
(Coding process of frame P9)
Since frame P9 is a P-frame, it is inter-predictively coded with reference to one already processed frame located before or after it in display order. As described above, the reference frame for frame P9 is frame P5. Frame P5 has already been coded and its decoded image is stored in the reference frame memory 107. In coding a P-frame, the coding control unit 110 controls the switches so that switches 113, 114 and 115 are on. The macroblocks of frame P9 read from the reordering memory 101 are therefore input to the motion vector detection unit 108, the mode selection unit 109 and the difference calculation unit 102.
Using the decoded image data of frame P5 stored in the reference frame memory 107 as the reference frame, the motion vector detection unit 108 detects a motion vector for each macroblock of frame P9 and outputs the detected motion vector to the mode selection unit 109.
The mode selection unit 109 determines the coding mode of each macroblock of frame P9 using the motion vectors detected by the motion vector detection unit 108. Here, the coding mode indicates with which method a macroblock is coded. For a P-frame, the coding mode is determined from among, for example, intra-picture coding, inter-picture predictive coding using a motion vector, and inter-picture predictive coding without a motion vector (motion treated as 0). In determining the coding mode, the method that makes the coding error smaller with a smaller amount of bits is generally selected.
The mode selection unit 109 outputs the determined coding mode to the bitstream generation unit 104. When the coding mode determined by the mode selection unit 109 is inter-picture predictive coding, the motion vector used for that inter-picture predictive coding is output to the bitstream generation unit 104 and is also stored in the motion vector storage unit 116.
The mode selection unit 109 generates predictive image data according to the determined coding mode and outputs it to the difference calculation unit 102 and the addition unit 106. When intra-picture coding is selected, however, the mode selection unit 109 does not output predictive image data. When intra-picture coding is selected, the mode selection unit 109 connects switch 111 to the a side and switch 112 to the c side; when inter-picture predictive coding is selected, it connects switch 111 to the b side and switch 112 to the d side. The case where the mode selection unit 109 selects inter-picture predictive coding is described below.
The image data of the macroblock of frame P9 read from the reordering memory 101 and the predictive image data output from the mode selection unit 109 are input to the difference calculation unit 102. The difference calculation unit 102 calculates the difference between the image data of the macroblock of frame P9 and the predictive image data, generates prediction error image data and outputs it to the prediction error coding unit 103.
The prediction error coding unit 103 performs coding processes such as frequency transform and quantization on the input prediction error image data, generates coded data and outputs it to the bitstream generation unit 104 and the prediction error decoding unit 105. The frequency transform and quantization are performed, for example, in units of horizontal 8 × vertical 8 pixels or horizontal 4 × vertical 4 pixels.
The bitstream generation unit 104 performs variable-length coding and the like on the input coded data, adds information such as the motion vectors, the coding mode and the header information, and generates and outputs the bitstream.
Meanwhile, the prediction error decoding unit 105 performs decoding processes such as inverse quantization and inverse frequency transform on the input coded data, generates decoded difference image data and outputs it to the addition unit 106. The addition unit 106 adds the decoded difference image data to the predictive image data input from the mode selection unit 109 to generate decoded image data, which is stored in the reference frame memory 107.
With the above processing, the processing of one macroblock of frame P9 is completed. The remaining macroblocks of frame P9 are coded in the same way. When all macroblocks of frame P9 have been processed, the coding of frame B7 follows.
(Coding process of frame B7)
The reference frames of frame B7 are frame P5 as the forward reference frame and frame P9 as the backward reference frame. Since frame B7 is used as a reference frame when other frames are coded, the coding control unit 110 controls the switches so that switches 113, 114 and 115 are on. The macroblocks of frame B7 read from the reordering memory 101 are therefore input to the motion vector detection unit 108, the mode selection unit 109 and the difference calculation unit 102.
Using the decoded image data of frame P5 stored in the reference frame memory 107 as the forward reference frame and the decoded image data of frame P9 as the backward reference frame, the motion vector detection unit 108 detects a forward motion vector and a backward motion vector for each macroblock of frame B7, and outputs the detected motion vectors to the mode selection unit 109.
The mode selection unit 109 determines the coding mode of each macroblock of frame B7 using the motion vectors detected by the motion vector detection unit 108. The coding mode of a B-frame can be selected from, for example, intra-picture coding, inter-picture predictive coding using a forward motion vector, inter-picture predictive coding using a backward motion vector, inter-picture predictive coding using bi-directional motion vectors, and direct mode.
The operation when coding in direct mode is explained with Fig. 7(a). Fig. 7(a) illustrates the motion vectors in direct mode and shows the case where block a of frame B7 is coded in direct mode. In this case, use is made of motion vector c, which was used when coding block b located at the same position as block a in frame P9, the reference frame located after frame B7. Motion vector c is stored in the motion vector storage unit 116. Block a is bi-directionally predicted from frame P5 as the forward reference frame and frame P9 as the backward reference frame, using motion vectors obtained from motion vector c. One way of using motion vector c is, for example, to generate motion vectors parallel to it. The motion vectors used when coding block a are then motion vector d for frame P5 and motion vector e for frame P9.
Here, let MVF be the magnitude of motion vector d (the forward motion vector), MVB the magnitude of motion vector e (the backward motion vector), MV the magnitude of motion vector c, TRD the temporal distance between the backward reference frame of the current frame (frame B7), namely frame P9, and the frame referred to by the block in that backward reference frame (frame P5), and TRF the temporal distance between the current frame (frame B7) and the forward reference frame (frame P5). Then the magnitude MVF of motion vector d and the magnitude MVB of motion vector e are obtained by (Formula 1) and (Formula 2), respectively. The temporal distance between frames can be determined, for example, from the information indicating the display order (position) attached to each frame, or from the difference of that information.
MVF = MV × TRF / TRD    (Formula 1)
MVB = (TRF − TRD) × MV / TRD    (Formula 2)
Here, MVF and MVB each represent the horizontal and vertical components of the respective motion vector, and the sign indicates the direction of the motion vector.
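The scaling of (Formula 1) and (Formula 2) can be sketched as follows in Python; this is only an illustration of the calculation with assumed helper names, and it uses exact division because the rounding of an actual implementation is not specified here.

    def scale_direct_mode_forward(mv_c, trd, trf):
        # mv_c: forward motion vector (MV) of the co-located block b, as (horizontal, vertical)
        # trd:  temporal distance from the backward reference frame to the frame mv_c refers to
        # trf:  temporal distance from the current frame to the forward reference frame
        mvf = tuple(comp * trf / trd for comp in mv_c)           # (Formula 1), forward vector d
        mvb = tuple(comp * (trf - trd) / trd for comp in mv_c)   # (Formula 2), backward vector e
        return mvf, mvb

    # For block a of frame B7 in Fig. 7(a): TRD = 9 - 5 = 4 and TRF = 7 - 5 = 2,
    # so motion vector d = c/2 and motion vector e = -c/2, i.e. vectors parallel to c.
    print(scale_direct_mode_forward((8, -4), 4, 2))   # -> ((4.0, -2.0), (-4.0, 2.0))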
In selecting the coding mode, the method that makes the coding error smaller with a smaller amount of bits is usually selected. The mode selection unit 109 outputs the determined coding mode to the bitstream generation unit 104. When the coding mode determined by the mode selection unit 109 is inter-picture predictive coding, the motion vectors used for that inter-picture predictive coding are output to the bitstream generation unit 104 and are also stored in the motion vector storage unit 116. When direct mode is selected, the motion vectors calculated by (Formula 1) and (Formula 2) and used in direct mode are stored in the motion vector storage unit 116.
The mode selection unit 109 generates predictive image data according to the determined coding mode and outputs it to the difference calculation unit 102 and the addition unit 106. When intra-picture coding is selected, however, the mode selection unit 109 does not output predictive image data. When intra-picture coding is selected, the mode selection unit 109 connects switch 111 to the a side and switch 112 to the c side; when inter-picture predictive coding or direct mode is selected, it connects switch 111 to the b side and switch 112 to the d side. The case where the mode selection unit 109 selects inter-picture predictive coding or direct mode is described below.
The image data of the macroblock of frame B7 read from the reordering memory 101 and the predictive image data output from the mode selection unit 109 are input to the difference calculation unit 102. The difference calculation unit 102 calculates the difference between the image data of the macroblock of frame B7 and the predictive image data, generates prediction error image data and outputs it to the prediction error coding unit 103.
The prediction error coding unit 103 performs coding processes such as frequency transform and quantization on the input prediction error image data, generates coded data and outputs it to the bitstream generation unit 104 and the prediction error decoding unit 105.
The bitstream generation unit 104 performs variable-length coding and the like on the input coded data, adds information such as the motion vectors and the coding mode, and generates and outputs the bitstream.
Meanwhile, the prediction error decoding unit 105 performs decoding processes such as inverse quantization and inverse frequency transform on the input coded data, generates decoded difference image data and outputs it to the addition unit 106. The addition unit 106 adds the decoded difference image data to the predictive image data input from the mode selection unit 109 to generate decoded image data, which is stored in the reference frame memory 107.
With the above processing, the processing of one macroblock of frame B7 is completed. The remaining macroblocks of frame B7 are coded in the same way. When all macroblocks of frame B7 have been processed, the coding of frame B6 follows.
(Coding process of frame B6)
Since frame B6 is a B-frame, it is inter-predictively coded with reference to two already processed frames located before or after it in display order. As described above, the reference frames of frame B6 are frame P5 as the forward reference frame and frame B7 as the backward reference frame. Frame B6 is not used as a reference frame when other frames are coded. The coding control unit 110 therefore controls the switches so that switch 113 is on and switches 114 and 115 are off. The macroblocks of frame B6 read from the reordering memory 101 are thus input to the motion vector detection unit 108, the mode selection unit 109 and the difference calculation unit 102.
Using the decoded image data of frame P5 stored in the reference frame memory 107 as the forward reference frame and the decoded image data of frame B7 as the backward reference frame, the motion vector detection unit 108 detects a forward motion vector and a backward motion vector for each macroblock of frame B6, and outputs the detected motion vectors to the mode selection unit 109.
The mode selection unit 109 determines the coding mode of each macroblock of frame B6 using the motion vectors detected by the motion vector detection unit 108.
Here, a first example of the operation when direct mode is used for a macroblock of frame B6 is explained with Fig. 7(b). Fig. 7(b) illustrates the motion vectors in direct mode and shows the case where block a of frame B6 is coded in direct mode. In this case, use is made of motion vector c, which was used when coding block b located at the same position as block a in frame B7, the reference frame located after frame B6. Here block b has been coded by forward reference only or by bi-directional reference, and its forward motion vector is motion vector c. Motion vector c is stored in the motion vector storage unit 116. Block a is bi-directionally predicted from frame P5 as the forward reference frame and frame B7 as the backward reference frame, using motion vectors generated from motion vector c. For example, if the method of generating motion vectors parallel to motion vector c is used, as in the case of frame B7 described above, the motion vectors used when coding block a are motion vector d for frame P5 and motion vector e for frame B7.
Here, let MVF be the magnitude of motion vector d (the forward motion vector), MVB the magnitude of motion vector e (the backward motion vector), MV the magnitude of motion vector c, TRD the temporal distance between the backward reference frame of the current frame (frame B6), namely frame B7, and the frame referred to by block b in that backward reference frame (frame P5), and TRF the temporal distance between the current frame (frame B6) and the forward reference frame (frame P5). Then the magnitude MVF of motion vector d and the magnitude MVB of motion vector e are obtained by the above (Formula 1) and (Formula 2), respectively. The temporal distance between frames can be determined, for example, from the information indicating the display order attached to each frame, or from the difference of that information.
In this way, in direct mode, the forward motion vector of the B-frame used as the backward reference frame is scaled, so that no motion vector information needs to be transmitted and the motion prediction efficiency can be improved. Coding efficiency can thus be improved. Moreover, coding efficiency is further improved by using the nearest available frames in display order as the forward reference frame and the backward reference frame.
Next, a second example of using direct mode is explained with Fig. 7(b). In this case, use is made of the motion vector that was used when coding block b located at the same position as block a in frame B7, the reference frame located after frame B6. Here block b has been coded in direct mode, and the forward motion vector actually used at that time is motion vector c. That is, motion vector c is the motion vector obtained by scaling the motion vector that was used when coding block i, which is located at the same position as block b in frame P9, the frame referred to backward by frame B7. Motion vector c is obtained either by using the motion vector stored in the motion vector storage unit 116, or by reading from the motion vector storage unit 116 the motion vector of block i in frame P9 that was used when coding block b in direct mode and calculating it. When the mode selection unit 109 stores in the motion vector storage unit 116 the motion vectors obtained by scaling when coding block b of frame B7 in direct mode, it may store only the forward motion vector. Block a is bi-directionally predicted from frame P5 as the forward reference frame and frame B7 as the backward reference frame, using motion vectors generated from motion vector c. For example, if the method of generating motion vectors parallel to motion vector c is used, as in the first example, the motion vectors used when coding block a are motion vector d for frame P5 and motion vector e for frame B7.
In this case, the magnitude MVF of motion vector d, the forward motion vector of block a, and the magnitude MVB of motion vector e, the backward motion vector, can be obtained using (Formula 1) and (Formula 2), as in the first example of direct mode.
In this way, in direct mode, the forward motion vector actually used in direct mode by the B-frame serving as the backward reference frame is scaled, so that no motion vector information needs to be transmitted, and the motion prediction efficiency can be improved even when the co-located block in the backward reference frame has itself been coded in direct mode. Coding efficiency can thus be improved. Moreover, coding efficiency is further improved by using the nearest available frames in display order as the forward and backward reference frames.
Next, a third example of using direct mode is described with Fig. 7(c). Fig. 7(c) is an explanatory diagram of the motion vectors in direct mode, showing the case where block a of frame B6 is coded in direct mode. In this case, the motion vector used to code block b, located in frame B7 at the same position as block a, is utilized; frame B7 is the backward reference frame of frame B6. Here, block b was coded using only a backward motion vector, and that backward motion vector is motion vector f, which is stored in the motion vector storage unit 116. Block a is predicted bidirectionally, using motion vectors generated from motion vector f, from frame P5 as the forward reference frame and frame B7 as the backward reference frame. For example, if, as in the first example above, the method of generating motion vectors parallel to motion vector f is used, the motion vectors used to code block a are motion vector g for frame P5 and motion vector h for frame B7.
Here, let MVF be the magnitude of motion vector g (the forward motion vector), MVB the magnitude of motion vector h (the backward motion vector), MV the magnitude of motion vector f, TRD the temporal distance between the backward reference frame (frame B7) of the current frame (frame B6) and the frame (frame P9) referenced by the block of that backward reference frame, TRF the temporal distance between the current frame (frame B6) and the forward reference frame (frame P5), and TRB the temporal distance between the current frame (frame B6) and the backward reference frame (frame B7). Then MVF and MVB are obtained from (Formula 3) and (Formula 4), respectively.
MVF = -TRF × MV / TRD    (Formula 3)
MVB = TRB × MV / TRD    (Formula 4)
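A minimal Python sketch of (Formula 3) and (Formula 4) follows; the function name and the example values are illustrative assumptions, not part of the patent.

    # Scaling the co-located block's backward motion vector (Formula 3 / Formula 4).
    def scale_backward_direct(mv, trd, trf, trb):
        mvf = -trf * mv / trd   # Formula 3: forward vector (toward frame P5)
        mvb = trb * mv / trd    # Formula 4: backward vector (toward frame B7)
        return mvf, mvb

    # Block a of frame B6: TRD = dist(B7, P9) = 2, TRF = dist(B6, P5) = 1,
    # TRB = dist(B6, B7) = 1; applied to each vector component separately.
    mvf, mvb = scale_backward_direct(mv=6.0, trd=2.0, trf=1.0, trb=1.0)
    # mvf = -3.0, mvb = 3.0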
In this way, in direct mode, the backward motion vector used to code the co-located block of the B-frame serving as the backward reference frame is scaled, so motion vector information need not be transmitted, and prediction efficiency can be improved even when the co-located block in the backward reference frame has only a backward motion vector. This improves coding efficiency. Coding efficiency can be further improved by using the nearest available frames in display order as the forward and backward reference frames.
Next, a fourth example of using direct mode is described with Fig. 7(d). Fig. 7(d) is an explanatory diagram of the motion vectors in direct mode, showing the case where block a of frame B6 is coded in direct mode. In this case, the motion vector used to code block b, located in frame B7 at the same position as block a, is utilized; frame B7 is the backward reference frame of frame B6. As in the third example, block b was coded using only a backward motion vector, and that backward motion vector is motion vector f, which is stored in the motion vector storage unit 116. Block a is predicted bidirectionally, using motion vectors generated from motion vector f, from frame P9, the frame referenced by motion vector f, as the forward reference frame and frame B7 as the backward reference frame. For example, if, as in the first example above, the method of generating motion vectors parallel to motion vector f is used, the motion vectors used to code block a are motion vector g for frame P9 and motion vector h for frame B7.
Here, let MVF be the magnitude of motion vector g (the forward motion vector), MVB the magnitude of motion vector h (the backward motion vector), MV the magnitude of motion vector f, TRD the temporal distance between the backward reference frame (frame B7) of the current frame (frame B6) and the frame (frame P9) referenced by the block of that backward reference frame, and TRF the temporal distance between the current frame (frame B6) and the frame (frame P9) referenced by the block of the backward reference frame. Then MVF and MVB are obtained from (Formula 1) and (Formula 2), respectively.
In this way, in direct mode, the backward motion vector used to code the co-located block of the B-frame serving as the backward reference frame is scaled, so motion vector information need not be transmitted, and prediction efficiency can be improved even when the co-located block in the backward reference frame has only a backward motion vector. This improves coding efficiency. Coding efficiency can be further improved by using the frame referenced by the backward motion vector as the forward reference frame and the nearest available frame in display order as the backward reference frame.
Next, a fifth example of using direct mode is described with Fig. 8(a). Fig. 8(a) is an explanatory diagram of the motion vectors in direct mode, showing the case where block a of frame B6 is coded in direct mode. In this case, the motion vector magnitude is set to 0, and motion compensation is performed by bidirectional reference with frame P5 as the forward reference frame and frame B7 as the backward reference frame.
In this way, by forcing the motion vectors in direct mode to 0, motion vector information need not be transmitted when direct mode is selected, and no motion vector scaling is required, so the amount of processing can be reduced.
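As a rough illustration of the fifth example (function and variable names are assumed), the prediction reduces to averaging the co-located blocks of the two reference frames with zero-valued motion vectors:

    # Fifth example: direct mode with both motion vectors forced to (0, 0);
    # no scaling is performed and no motion vector is transmitted.
    def zero_mv_direct(forward_ref_block, backward_ref_block):
        # Average, sample by sample, the co-located block of the forward
        # reference frame (P5) and of the backward reference frame (B7).
        return [(f + b + 1) // 2 for f, b in zip(forward_ref_block, backward_ref_block)]

    print(zero_mv_direct([100, 104, 96, 92], [102, 100, 98, 94]))  # -> [101, 102, 97, 93]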
Next, a sixth example of using direct mode is described with Fig. 8(b). Fig. 8(b) is an explanatory diagram of the motion vectors in direct mode, showing the case where block a of frame B6 is coded in direct mode. In this case, motion vector g, which was used to code block f located in frame P9 at the same position as block a, is utilized; frame P9 is the P-frame located after frame B6. Motion vector g is stored in the motion vector storage unit 116. Block a is predicted bidirectionally, using motion vectors generated from motion vector g, from frame P5 as the forward reference frame and frame B7 as the backward reference frame. For example, if, as in the first example above, the method of generating motion vectors parallel to motion vector g is used, the motion vectors used to code block a are motion vector h for frame P5 and motion vector i for frame B7.
Here, let MVF be the magnitude of motion vector h (the forward motion vector), MVB the magnitude of motion vector i (the backward motion vector), MV the magnitude of motion vector g, TRD the temporal distance between the P-frame (frame P9) located after the current frame (frame B6) in display order and the frame (frame P5) referenced by block f of that P-frame, TRF the temporal distance between the current frame (frame B6) and the forward reference frame (frame P5), and TRB the temporal distance between the current frame (frame B6) and the backward reference frame (frame B7). Then MVF and MVB are obtained from (Formula 1) and (Formula 5), respectively.
MVB = -TRB × MV / TRD    (Formula 5)
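The sixth example can be sketched as follows. The form of (Formula 1) is assumed here to be MVF = TRF × MV / TRD (it is given earlier in this document), while (Formula 5) is as written above; the function name and example values are likewise assumptions.

    # Sixth example: scale the forward motion vector of the co-located block
    # of the P-frame (frame P9) that follows the current B-frame.
    def scale_from_following_p_frame(mv, trd, trf, trb):
        mvf = trf * mv / trd     # assumed form of Formula 1
        mvb = -trb * mv / trd    # Formula 5
        return mvf, mvb

    # Block a of frame B6: TRD = dist(P9, P5) = 4, TRF = dist(B6, P5) = 1, TRB = dist(B6, B7) = 1.
    mvf, mvb = scale_from_following_p_frame(mv=8.0, trd=4.0, trf=1.0, trb=1.0)
    # mvf = 2.0 (toward P5), mvb = -2.0 (toward B7)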
In this way, in direct mode, the motion vector of the P-frame located behind in display order is scaled, so that when the backward reference frame is a B-frame the motion vectors of that B-frame need not be stored, and motion vector information need not be transmitted. Coding efficiency can be further improved by using the nearest reference frames in display order as the forward and backward reference frames.
Next, a seventh example of using direct mode is described with Fig. 8(c). Fig. 8(c) is an explanatory diagram of the motion vectors in direct mode, showing the case where block a of frame B6 is coded in direct mode. This example is the case where the assignment of relative indices to frame numbers described above has been changed (remapped), so that the backward reference frame is frame P9. In this case, motion vector g, which was used to code block f located in frame P9 at the same position as block a, is utilized; frame P9 is the backward reference frame of frame B6. Motion vector g is stored in the motion vector storage unit 116. Block a is predicted bidirectionally, using motion vectors generated from motion vector g, from frame P5 as the forward reference frame and frame P9 as the backward reference frame. For example, if, as in the first example above, the method of generating motion vectors parallel to motion vector g is used, the motion vectors used to code block a are motion vector h for frame P5 and motion vector i for frame P9.
Here, let MVF be the magnitude of motion vector h (the forward motion vector), MVB the magnitude of motion vector i (the backward motion vector), MV the magnitude of motion vector g, TRD the temporal distance between the backward reference frame (frame P9) of the current frame (frame B6) and the frame (frame P5) referenced by the block of that backward reference frame, and TRF the temporal distance between the current frame (frame B6) and the forward reference frame (frame P5). Then MVF and MVB are obtained from (Formula 1) and (Formula 2), respectively.
In this way, in direct mode, even when the assignment of relative indices to frame numbers has been changed, the motion vectors of an already-coded frame can be scaled, and motion vector information need not be transmitted when direct mode is selected.
Note that, when block a of frame B6 is coded in direct mode, if the block located at the same position as block a in the backward reference frame of frame B6 was coded using forward reference only, bidirectional reference, or direct mode, and a forward motion vector was used in that coding, then that forward motion vector is scaled and block a is coded in direct mode as in the first, second, or seventh example above. On the other hand, if the co-located block was coded using backward reference only, and a backward motion vector was used in that coding, then that backward motion vector is scaled and block a is coded in direct mode as in the third or fourth example above.
The direct mode described above is applicable not only when the inter-frame time interval is constant, but also when the frame interval is variable.
The mode selection unit 109 outputs the determined coding mode to the bitstream generation unit 104. It also generates predicted image data according to the determined coding mode and outputs it to the difference calculation unit 102; when intra coding is selected, however, no predicted image data is output. When intra coding is selected, the mode selection unit 109 connects switch 111 to the a side and switch 112 to the c side; when inter prediction coding or direct mode is selected, it connects switch 111 to the b side and switch 112 to the d side. When the determined coding mode is inter prediction coding, the mode selection unit 109 also outputs the motion vectors used in that inter prediction coding to the bitstream generation unit 104. Since frame B6 is not used as a reference frame when coding other frames, the motion vectors used in its inter prediction coding need not be stored in the motion vector storage unit 116. The following describes the case where inter prediction coding or direct mode is selected by the mode selection unit 109.
The image data of the macroblock of frame B6 read from the reordering memory 101 and the predicted image data output from the mode selection unit 109 are input to the difference calculation unit 102. The difference calculation unit 102 computes the difference between the macroblock image data of frame B6 and the predicted image data, generates prediction error image data, and outputs it to the prediction error coding unit 103. The prediction error coding unit 103 performs coding processing such as frequency transform and quantization on the input prediction error image data to generate coded data, which it outputs to the bitstream generation unit 104.
The bitstream generation unit 104 performs variable length coding and the like on the input coded data, attaches information such as motion vectors and the coding mode, and generates and outputs a bitstream.
With the above processing, the coding of one macroblock of frame B6 is completed. The remaining macroblocks are processed in the same way, and once frame B6 has been processed, the coding of frame B8 is carried out.
(encoding process of frame B8)
Since frame B8 is a B-frame, it is coded by inter prediction with reference to two already-processed frames located ahead of or behind it in display order. As described above, among the reference frames of frame B8, the forward reference frame is frame B7 and the backward reference frame is frame P9. Because frame B8 is not used as a reference frame when coding other frames, the coding control unit 110 controls the switches so that switch 113 is on and switches 114 and 115 are off. The macroblocks of frame B8 read from the reordering memory 101 are therefore input to the motion vector detection unit 108, the mode selection unit 109, and the difference calculation unit 102.
The motion vector detection unit 108 uses the decoded image data of frame B7 stored in the reference frame memory 107 as the forward reference frame and the decoded image data of frame P9 as the backward reference frame, and detects a forward motion vector and a backward motion vector for each macroblock of frame B8. It outputs the detected motion vectors to the mode selection unit 109.
The mode selection unit 109 uses the motion vectors detected by the motion vector detection unit 108 to determine the coding mode of each macroblock of frame B8.
Here, the operation when direct mode is used for a macroblock of frame B8 is described with Fig. 8(d). Fig. 8(d) is an explanatory diagram of the motion vectors in direct mode, showing the case where block a of frame B8 is coded in direct mode. In this case, the motion vector used to code block b, located in frame P9 at the same position as block a, is utilized; frame P9 is the backward reference frame of frame B8. Block b was coded using forward reference only, and its forward motion vector is motion vector c, which is stored in the motion vector storage unit 116. Block a is predicted bidirectionally, using motion vectors generated from motion vector c, from frame B7 as the forward reference frame and frame P9 as the backward reference frame. For example, if, as in the case of frame B7 described above, the method of generating motion vectors parallel to motion vector c is used, the motion vectors used to code block a are motion vector d for frame B7 and motion vector e for frame P9.
Here, let MVF be the magnitude of motion vector d (the forward motion vector), MVB the magnitude of motion vector e (the backward motion vector), MV the magnitude of motion vector c, TRD the temporal distance between the backward reference frame (frame P9) of the current frame (frame B8) and the frame (frame P5) referenced by block b of that backward reference frame, TRF the temporal distance between the current frame (frame B8) and the forward reference frame (frame B7), and TRB the temporal distance between the current frame (frame B8) and the backward reference frame (frame P9). Then MVF and MVB are obtained from (Formula 1) and (Formula 5) above, respectively.
In this way, in direct mode, the forward motion vector of the backward reference frame is scaled, so motion vector information need not be transmitted and prediction efficiency can be improved. This improves coding efficiency. Coding efficiency can be further improved by using the nearest available frames in display order as the forward and backward reference frames.
The direct mode described above is applicable not only when the inter-frame time interval is constant, but also when the frame interval is variable.
The mode selection unit 109 outputs the determined coding mode to the bitstream generation unit 104. It also generates predicted image data according to the determined coding mode and outputs it to the difference calculation unit 102; when intra coding is selected, however, no predicted image data is output. When intra coding is selected, the mode selection unit 109 connects switch 111 to the a side and switch 112 to the c side; when inter prediction coding or direct mode is selected, it connects switch 111 to the b side and switch 112 to the d side. When the determined coding mode is inter prediction coding, the mode selection unit 109 also outputs the motion vectors used in that inter prediction coding to the bitstream generation unit 104. Since frame B8 is not used as a reference frame when coding other frames, the motion vectors used in its inter prediction coding need not be stored in the motion vector storage unit 116. The following describes the case where inter prediction coding or direct mode is selected by the mode selection unit 109.
The image data of the macroblock of frame B8 read from the reordering memory 101 and the predicted image data output from the mode selection unit 109 are input to the difference calculation unit 102. The difference calculation unit 102 computes the difference between the macroblock image data of frame B8 and the predicted image data, generates prediction error image data, and outputs it to the prediction error coding unit 103. The prediction error coding unit 103 performs coding processing such as frequency transform and quantization on the input prediction error image data to generate coded data, which it outputs to the bitstream generation unit 104.
The bitstream generation unit 104 performs variable length coding and the like on the input coded data, attaches information such as motion vectors and the coding mode, and generates and outputs a bitstream.
With the above processing, the coding of one macroblock of frame B8 is completed. The remaining macroblocks of frame B8 are processed in the same way.
Thereafter, each frame is coded by the same method as frames P9, B7, B6, and B8, according to the picture type of each frame and its position in display order.
The above embodiment has described the operation of the moving picture coding method of the present invention taking as an example the case where the frame prediction structure shown in Fig. 6(a) is used. Fig. 12 shows this frame prediction structure hierarchically. In Fig. 12, the arrows indicate prediction relations: the frame at the end of an arrow refers to the frame at its start. In the frame prediction structure of Fig. 6(a), considered in display order, the coding order is determined by giving priority to the frame farthest from the already-coded frames, as shown in Fig. 12. For example, the frame farthest from an I- or P-frame is the frame located at the center of the run of consecutive B-frames. Therefore, when frames P5 and P9 have been coded, frame B7 becomes the next frame to be coded, and when frames P5, B7, and P9 have been coded, frames B6 and B8 become the next frames to be coded.
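Purely as an illustration (the function name, variable names, and the use of display-order numbers as frame positions are assumptions, not part of the patent), the batch-wise farthest-first selection described above can be sketched in Python; with anchors P5 and P9 it reproduces the order B7 first, then B6 and B8.

    # Choose the coding order of B-frames between already-coded anchor frames,
    # batch by batch, preferring the frames farthest from the coded frames.
    def coding_order(b_frames, anchors, farthest_first=True):
        coded = list(anchors)
        order = []
        remaining = list(b_frames)
        while remaining:
            dist = {f: min(abs(f - c) for c in coded) for f in remaining}
            best = max(dist.values()) if farthest_first else min(dist.values())
            batch = [f for f in remaining if dist[f] == best]
            order.extend(batch)      # frames within one batch may be coded in any order
            coded.extend(batch)
            remaining = [f for f in remaining if f not in batch]
        return order

    print(coding_order([6, 7, 8], anchors=[5, 9]))            # -> [7, 6, 8]          (Fig. 12)
    print(coding_order([8, 9, 10, 11, 12], anchors=[7, 13]))  # -> [10, 8, 9, 11, 12] (Fig. 14)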
Even with frame prediction structures different from those of Fig. 6 and Fig. 12, the same method as the moving picture coding method of the present invention can be used and the effects of the present invention can be obtained. Figs. 9 to 11 show other examples of frame prediction structures.
Fig. 9 shows the case where the number of B-frames sandwiched between I- or P-frames is three and the B-frames are coded in order starting from the frame nearest to the already-coded frames. Fig. 9(a) shows the prediction relations of the frames in display order, and Fig. 9(b) shows the frame order rearranged into coding order (bitstream order). Fig. 13 is a hierarchical diagram corresponding to the frame prediction structure of Fig. 9(a). In the frame prediction structure of Fig. 9(a), considered in display order, coding proceeds in order from the frame nearest to the already-coded frames, as shown in Fig. 13. For example, when frames P5 and P9 have been coded, frames B6 and B8 become the next frames to be coded, and when frames P5, B6, B8, and P9 have been coded, frame B7 becomes the next frame to be coded.
Fig. 10 shows the case where the number of B-frames sandwiched between I- or P-frames is five and, among the B-frames, the frame farthest from the already-coded frames is coded first. Fig. 10(a) shows the prediction relations of the frames in display order, and Fig. 10(b) shows the frame order rearranged into coding order (bitstream order). Fig. 14 is a hierarchical diagram corresponding to the frame prediction structure of Fig. 10(a). In the frame prediction structure of Fig. 10(a), considered in display order, the coding order is determined by giving priority to the frame farthest from the already-coded frames, as shown in Fig. 14. For example, the frame farthest from an I- or P-frame is the frame located at the center of the run of consecutive B-frames. Therefore, when frames P7 and P13 have been coded, frame B10 becomes the next frame to be coded, and when frames P7, B10, and P13 have been coded, frames B8, B9, B11, and B12 become the next frames to be coded.
Fig. 11 shows the case where the number of B-frames sandwiched between I- or P-frames is five and, among the B-frames, the frame nearest to the already-coded frames is coded first. Fig. 11(a) shows the prediction relations of the frames in display order, and Fig. 11(b) shows the frame order rearranged into coding order (bitstream order). Fig. 15 is a hierarchical diagram corresponding to the frame prediction structure of Fig. 11(a). In the frame prediction structure of Fig. 11(a), considered in display order, coding proceeds in order from the frame nearest to the already-coded frames, as shown in Fig. 15. For example, when frames P7 and P13 have been coded, frames B8 and B12 become the next frames to be coded; when frames P7, B8, B12, and P13 have been coded, frames B9 and B11 become the next frames to be coded; and when frames P7, B8, B9, B11, B12, and P13 have been coded, frame B10 becomes the next frame to be coded.
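Reusing the hypothetical coding_order sketch given above with farthest_first=False illustrates the nearest-first orderings of Fig. 9(a) and Fig. 11(a) (again, the names and the use of display-order numbers are assumptions):

    # Nearest-first selection, using the coding_order sketch shown earlier.
    print(coding_order([6, 7, 8], anchors=[5, 9], farthest_first=False))
    # -> [6, 8, 7]           (Fig. 13)
    print(coding_order([8, 9, 10, 11, 12], anchors=[7, 13], farthest_first=False))
    # -> [8, 12, 9, 11, 10]  (Fig. 15)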
As described above, in the moving picture coding method of the present invention, when coding B-frames that are inter-prediction coded using bidirectional prediction, a plurality of B-frames sandwiched between I- or P-frames are coded in an order different from the display order. In doing so, the nearest frames in display order are used as the forward and backward reference frames; when a B-frame is available, a B-frame may also be used as such a reference frame. When the plurality of B-frames sandwiched between I- or P-frames are coded in an order different from the display order, coding may proceed in order from the frame farthest from the already-coded frames, or alternatively in order from the frame nearest to the already-coded frames.
Through this operation, when a B-frame is coded with the moving picture coding method of the present invention, frames nearer in display order can be used as reference frames, which improves the prediction efficiency of motion compensation and therefore improves coding efficiency.
Furthermore, in the moving picture coding method of the present invention, a frame coded as a B-frame is referenced as the backward reference frame and a block in a B-frame is coded in direct mode; in this case, when the co-located block in the backward reference frame was coded using forward reference or bidirectional reference, the motion vectors obtained by scaling its forward motion vector are used as the motion vectors for direct mode.
In this way, in direct mode, by scaling the forward motion vector of the B-frame serving as the backward reference frame, motion vector information need not be transmitted and prediction efficiency can be improved. Coding efficiency can also be improved by using the temporally nearest reference frame as the forward reference frame.
When the co-located block in the B-frame serving as the backward reference frame was itself coded in direct mode, the motion vectors obtained by scaling the forward motion vector actually used in that direct-mode coding are used as the motion vectors for direct mode.
In this way, motion vector information need not be transmitted, and prediction efficiency can be improved even when the co-located block in the backward reference frame was coded in direct mode. Coding efficiency can also be improved by using the temporally nearest reference frame as the forward reference frame.
When the co-located block in the B-frame serving as the backward reference frame was coded by backward reference, the motion vectors obtained by scaling its backward motion vector are used as the motion vectors for direct mode.
In this way, motion vector information need not be transmitted, and prediction efficiency can be improved even when the co-located block in the backward reference frame has only a backward motion vector. Coding efficiency can also be improved by using the temporally nearest reference frame as the forward reference frame.
Alternatively, when the co-located block in the B-frame serving as the backward reference frame was coded by backward reference, the motion vectors obtained by scaling that backward motion vector toward the frame it refers to and toward the backward reference frame are used as the motion vectors for direct mode.
In this way, motion vector information need not be transmitted, and prediction efficiency can be improved even when the co-located block in the backward reference frame has only a backward motion vector, which improves coding efficiency. Coding efficiency can also be improved by using the frame referenced by the backward motion vector as the forward reference frame and the nearest available frame in display order as the backward reference frame.
In direct mode, motion vectors whose magnitude is forcibly set to 0 may also be used.
In that case, motion vector information need not be transmitted when direct mode is selected and no motion vector scaling is required, so the amount of processing can be reduced.
Furthermore, in the moving picture coding method of the present invention, when a frame coded as a B-frame is referenced as the backward reference frame and a block in a B-frame is coded in direct mode, the motion vectors obtained by scaling the forward motion vector used for the co-located block of the temporally succeeding P-frame may be used as the motion vectors for direct mode.
In this way, by scaling the motion vector of the succeeding P-frame, the motion vectors of the backward reference frame need not be stored when it is a B-frame, motion vector information need not be transmitted, and prediction efficiency can be improved. Coding efficiency can also be improved by using the temporally nearest reference frame as the forward reference frame.
Furthermore, when the assignment of relative indices to frame numbers has been changed and the co-located block in the backward reference frame was coded by forward reference, the motion vectors obtained by scaling its forward motion vector are used as the motion vectors for direct mode.
In this way, even when the assignment of relative indices to frame numbers has been changed, the motion vectors of an already-coded frame can be scaled, and motion vector information need not be transmitted.
In the present embodiment, motion compensation is performed in units of 16 × 16 pixels (horizontal × vertical) and the prediction error image is coded in units of 8 × 8 or 4 × 4 pixels, but these units may be other numbers of pixels.
In the present embodiment, the cases where the number of consecutive B-frames is three or five have been described, but the number of B-frames may be any other number.
In the present embodiment, the coding mode of a P-frame is selected from intra coding, inter prediction coding using motion vectors, and inter prediction coding without motion vectors, and the coding mode of a B-frame is selected from intra coding, inter prediction coding using a forward motion vector, inter prediction coding using a backward motion vector, inter prediction coding using bidirectional motion vectors, and direct mode; however, other coding modes may also be used.
In the present embodiment, seven examples of direct mode have been described. A method uniquely determined for each macroblock or block may be used, or one method may be selected from a plurality of methods for each block or macroblock. When a plurality of methods are used, information indicating which direct mode was used is recorded in the bitstream.
In the present embodiment, a P-frame is coded with reference to one already-coded I- or P-frame located ahead of or behind it in display order, and a B-frame is coded with reference to two nearby already-coded frames located ahead of or behind it in display order. However, a P-frame may instead use a plurality of already-coded I- or P-frames located ahead of or behind it in display order as candidate reference frames and be coded with reference to at most one frame per block, and a B-frame may instead use a plurality of nearby already-coded frames located ahead of or behind it in display order as candidate reference frames and be coded with reference to at most two frames per block.
Furthermore, when storing motion vectors in the motion vector storage unit 116 for a block coded by bidirectional prediction or direct mode, the mode selection unit 109 may store both the forward and backward motion vectors, or it may store only the forward motion vector. Storing only the forward motion vector reduces the amount of memory required in the motion vector storage unit 116.
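A minimal sketch (all class, method, and variable names are assumed, not taken from the patent) of a motion vector store that keeps only the forward vector, as allowed above:

    # Motion vector storage that keeps, per (frame, block position), only the
    # forward motion vector of blocks coded bidirectionally or in direct mode.
    class MotionVectorStore:
        def __init__(self):
            self._store = {}  # (frame_no, block_x, block_y) -> forward MV (x, y)

        def put(self, frame_no, block_pos, forward_mv, backward_mv=None):
            # backward_mv is accepted but deliberately discarded to save memory
            self._store[(frame_no,) + tuple(block_pos)] = forward_mv

        def get_forward(self, frame_no, block_pos):
            return self._store.get((frame_no,) + tuple(block_pos))

    store = MotionVectorStore()
    store.put(9, (2, 3), forward_mv=(8, -4), backward_mv=(-2, 1))  # a block of frame P9
    print(store.get_forward(9, (2, 3)))  # -> (8, -4)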
(embodiment 2)
Fig. 16 is a block diagram showing the structure of an embodiment of a moving picture decoding apparatus that uses the moving picture decoding method of the present invention.
As shown in Fig. 16, the moving picture decoding apparatus comprises a bitstream analysis unit 1401, a prediction error decoding unit 1402, a mode decoding unit 1403, a frame memory control unit 1404, a motion compensation decoding unit 1405, a motion vector storage unit 1406, a frame memory 1407, an addition unit 1408, and switches 1409 and 1410.
The bitstream analysis unit 1401 extracts various data, such as coding mode information and motion vector information, from the input bitstream. The prediction error decoding unit 1402 decodes the coded prediction error data input from the bitstream analysis unit 1401 and generates prediction error image data. The mode decoding unit 1403 controls switches 1409 and 1410 with reference to the coding mode information extracted from the bitstream.
The frame memory control unit 1404 outputs the decoded image data stored in the frame memory 1407 as output images, according to the information indicating frame display order input from the bitstream analysis unit 1401.
The motion compensation decoding unit 1405 decodes the reference frame numbers and motion vector information, and obtains motion compensation image data from the frame memory 1407 according to the decoded reference frame numbers and motion vectors. The motion vector storage unit 1406 stores motion vectors.
The addition unit 1408 adds the prediction error image data input from the prediction error decoding unit 1402 and the motion compensation image data input from the motion compensation decoding unit 1405 to generate decoded image data. The frame memory 1407 stores the generated decoded image data.
The operation of the moving picture decoding apparatus configured as described above is explained below. Here, it is assumed that the bitstream generated by the above moving picture coding apparatus is input to the moving picture decoding apparatus. That is, a P-frame has been coded with reference to one nearby already-coded I- or P-frame located ahead of or behind it in display order, and a B-frame has been coded with reference to two nearby already-coded frames located ahead of or behind it in display order.
The frames in the bitstream are then in the order shown in Fig. 6(b). The decoding processing of frames P9, B7, B6, and B8 is described below in order.
(decoding processing of frame P9)
The bitstream of frame P9 is input to the bitstream analysis unit 1401. The bitstream analysis unit 1401 extracts various data from the input bitstream; here, the various data are mode selection information, motion vector information, and the like. The extracted mode selection information is output to the mode decoding unit 1403, the extracted motion vector information is output to the motion compensation decoding unit 1405, and the coded prediction error data are output to the prediction error decoding unit 1402.
The mode decoding unit 1403 controls switches 1409 and 1410 with reference to the mode selection information extracted from the bitstream. When the selected coding mode is intra coding, the mode decoding unit 1403 connects switch 1409 to the a side and switch 1410 to the c side; when the selected coding mode is inter prediction coding, it connects switch 1409 to the b side and switch 1410 to the d side.
The mode decoding unit 1403 also outputs the mode selection information to the motion compensation decoding unit 1405. The following describes the case where the selected coding mode is inter prediction coding. The prediction error decoding unit 1402 decodes the input coded prediction error data, generates prediction error image data, and outputs them to switch 1409. Since switch 1409 is connected to the b side, the prediction error image data are output to the addition unit 1408.
The motion compensation decoding unit 1405 obtains motion compensation image data from the frame memory 1407 according to the input motion vector information and the like. Frame P9 was coded with reference to frame P5, which has already been decoded and is held in the frame memory 1407. The motion compensation decoding unit 1405 therefore obtains motion compensation image data from the image data of frame P5 held in the frame memory 1407, according to the motion vector information, and outputs them to the addition unit 1408.
When decoding a P-frame, the motion compensation decoding unit 1405 stores the motion vector information in the motion vector storage unit 1406.
The addition unit 1408 adds the input prediction error image data and motion compensation image data to generate decoded image data, which are output to the frame memory 1407 through switch 1410.
As described above, the processing of one macroblock of frame P9 is completed. The remaining macroblocks are decoded in order by the same processing. When all macroblocks of frame P9 have been decoded, frame B7 is decoded.
(decoding processing of frame B7)
The operations of the bitstream analysis unit 1401, the mode decoding unit 1403, and the prediction error decoding unit 1402 up to the generation of the prediction error image data are the same as in the decoding processing of frame P9, so their description is omitted.
The motion compensation decoding unit 1405 generates motion compensation image data according to the input motion vector information and the like. Frame B7 was coded with reference to frame P5 as the forward reference frame and frame P9 as the backward reference frame; these frames have already been decoded and are held in the frame memory 1407.
When the mode selection is bi-predictive inter prediction coding, the motion compensation decoding unit 1405 obtains forward reference image data from the frame memory 1407 according to the forward motion vector information, and backward reference image data according to the backward motion vector information. It then generates motion compensation image data by averaging the forward reference image data and the backward reference image data.
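The averaging step can be sketched as follows; block fetching is simplified to integer-pel positions and all names are assumptions of this sketch, not part of the patent.

    # Bi-predictive motion compensation: fetch the forward and backward
    # reference blocks and average them sample by sample.
    def fetch_block(frame, x, y, size):
        # frame is a 2-D list of samples; integer-pel fetch only in this sketch
        return [row[x:x + size] for row in frame[y:y + size]]

    def bidirectional_compensation(fwd_ref, bwd_ref, mvf, mvb, x, y, size=4):
        fwd = fetch_block(fwd_ref, x + mvf[0], y + mvf[1], size)
        bwd = fetch_block(bwd_ref, x + mvb[0], y + mvb[1], size)
        # summation averaging of the two reference blocks
        return [[(a + b + 1) // 2 for a, b in zip(ra, rb)]
                for ra, rb in zip(fwd, bwd)]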
When the mode selection is direct mode, the motion compensation decoding unit 1405 obtains the motion vector of frame P9 stored in the motion vector storage unit 1406. Using this motion vector, it obtains forward reference image data and backward reference image data from the frame memory 1407, and generates motion compensation image data by averaging them.
The case where the mode selection is direct mode is described with Fig. 7(a). Let block a be the block of frame B7 to be decoded, and let block b be the block of frame P9 located at the same position as block a. The motion vector of block b is motion vector c, which refers to frame P5. In this case, motion vector d, which refers to frame P5 and is obtained using motion vector c, is used as the forward motion vector, and motion vector e, which refers to frame P9 and is obtained using motion vector c, is used as the backward motion vector. As a method of utilizing motion vector c, there is, for example, the method of generating motion vectors parallel to motion vector c. The image data obtained by averaging the forward reference image data and the backward reference image data obtained with these motion vectors become the motion compensation image data.
Here, let MVF be the magnitude of motion vector d (the forward motion vector), MVB the magnitude of motion vector e (the backward motion vector), MV the magnitude of motion vector c, TRD the temporal distance between the backward reference frame (frame P9) of the current frame (frame B7) and the frame (frame P5) referenced by block b of that backward reference frame, and TRF the temporal distance between the current frame (frame B7) and the forward reference frame (frame P5). Then MVF and MVB are obtained from (Formula 1) and (Formula 2), respectively. Here, MVF and MVB each represent the horizontal and vertical components of the motion vector. The temporal distance between frames can be determined, for example, from information indicating the display order (position) attached to each frame, or from the difference of such information.
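As noted above, the temporal distances can be derived from display-order information. A hedged sketch follows; the display positions, the function name, and the assumed forms of (Formula 1) and (Formula 2) (the same ones assumed in the coding-side sketch earlier) are all illustrative assumptions.

    # Derive TRD and TRF from display-order numbers and scale the co-located
    # block's forward motion vector (assumed forms of Formula 1 / Formula 2).
    def direct_mode_vectors(mv, disp_cur, disp_fwd_ref, disp_bwd_ref, disp_colocated_ref):
        trd = disp_bwd_ref - disp_colocated_ref   # e.g. P9 - P5 = 4
        trf = disp_cur - disp_fwd_ref             # e.g. B7 - P5 = 2
        mvf = tuple(v * trf / trd for v in mv)
        mvb = tuple(v * (trf - trd) / trd for v in mv)
        return mvf, mvb

    mv_d, mv_e = direct_mode_vectors((8, -4), disp_cur=7, disp_fwd_ref=5,
                                     disp_bwd_ref=9, disp_colocated_ref=5)
    # mv_d = (4.0, -2.0), mv_e = (-4.0, 2.0)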
The motion compensation image data generated in this way are output to the addition unit 1408. The motion compensation decoding unit 1405 also stores the motion vector information in the motion vector storage unit 1406.
The addition unit 1408 adds the input prediction error image data and motion compensation image data to generate decoded image data, which are output to the frame memory 1407 through switch 1410.
As described above, the processing of one macroblock of frame B7 is completed. The remaining macroblocks are decoded in order by the same processing. When all macroblocks of frame B7 have been decoded, frame B6 is decoded.
(decoding processing of frame B6)
The operations of the bitstream analysis unit 1401, the mode decoding unit 1403, and the prediction error decoding unit 1402 up to the generation of the prediction error image data are the same as in the decoding processing of frame P9, so their description is omitted.
The motion compensation decoding unit 1405 generates motion compensation image data according to the input motion vector information and the like. Frame B6 was coded with reference to frame P5 as the forward reference frame and frame B7 as the backward reference frame; these frames have already been decoded and are held in the frame memory 1407.
When the mode selection is bi-predictive inter prediction coding, the motion compensation decoding unit 1405 obtains forward reference image data from the frame memory 1407 according to the forward motion vector information, and backward reference image data according to the backward motion vector information. It then generates motion compensation image data by averaging the forward reference image data and the backward reference image data.
When the mode selection is direct mode, the motion compensation decoding unit 1405 obtains the motion vector of frame B7 stored in the motion vector storage unit 1406. Using this motion vector, it obtains forward reference image data and backward reference image data from the frame memory 1407, and generates motion compensation image data by averaging them.
A first example when the mode selection is direct mode is described with Fig. 7(b). Let block a be the block of frame B6 to be decoded, and let block b be the block of frame B7 located at the same position as block a. Block b was coded by inter prediction based on forward reference or bidirectional reference, and its forward motion vector is motion vector c, which refers to frame P5. In this case, motion vector d, which refers to frame P5 and is generated using motion vector c, is used as the forward motion vector of block a, and motion vector e, which refers to frame B7 and is generated using motion vector c, is used as the backward motion vector. As a method of utilizing motion vector c, there is, for example, the method of generating motion vectors parallel to motion vector c. The image data obtained by averaging the forward reference image data and the backward reference image data obtained with these motion vectors become the motion compensation image data.
Here, let MVF be the magnitude of motion vector d (the forward motion vector), MVB the magnitude of motion vector e (the backward motion vector), MV the magnitude of motion vector c, TRD the temporal distance between the backward reference frame (frame B7) of the current frame (frame B6) and the frame (frame P5) referenced by block b of that backward reference frame, and TRF the temporal distance between the current frame (frame B6) and the forward reference frame (frame P5). Then MVF and MVB are obtained from (Formula 1) and (Formula 2), respectively. The temporal distance between frames can be determined, for example, from information indicating the display order (position) attached to each frame, or from the difference of such information. Predetermined values set for each frame may also be used for TRD and TRF, and such values may be recorded in the bitstream as header information.
A second example when the mode selection is direct mode is also described with Fig. 7(b).
In this case, the motion vector used when decoding block b, located in frame B7 at the same position as block a, is utilized; frame B7 is the backward reference frame of frame B6. Here, block b was coded in direct mode, and the forward motion vector actually used at that time is taken as motion vector c. Motion vector c is either the motion vector stored in the motion vector storage unit 1406, or is obtained by reading from the motion vector storage unit 1406 the motion vector of frame P9 that was used when block b was coded in direct mode and scaling it. When the motion compensation decoding unit 1405 stores in the motion vector storage unit 1406 the motion vectors obtained by scaling when block b of frame B7 is decoded in direct mode, it may store only the forward motion vector.
In this case, motion vector d, which refers to frame P5 and is generated using motion vector c, is used as the forward motion vector of block a, and motion vector e, which refers to frame B7 and is generated using motion vector c, is used as the backward motion vector. As a method of utilizing motion vector c, there is, for example, the method of generating motion vectors parallel to motion vector c. The image data obtained by averaging the forward reference image data and the backward reference image data obtained with these motion vectors become the motion compensation image data.
In this case, MVF, the magnitude of motion vector d (the forward motion vector), and MVB, the magnitude of motion vector e (the backward motion vector), can be obtained from (Formula 1) and (Formula 2), as in the first example of direct mode.
Next, a third example when the mode selection is direct mode is described with Fig. 7(c).
Let block a be the block of frame B6 to be decoded, and let block b be the block of frame B7 located at the same position as block a. Block b was predictively coded by backward reference, and its backward motion vector is motion vector f, which refers to frame P9. In this case, motion vector g, which refers to frame P5 and is obtained using motion vector f, is used as the forward motion vector of block a, and motion vector h, which refers to frame B7 and is obtained using motion vector f, is used as the backward motion vector. As a method of utilizing motion vector f, there is, for example, the method of generating motion vectors parallel to motion vector f. The image data obtained by averaging the forward reference image data and the backward reference image data obtained with these motion vectors become the motion compensation image data.
Here, let MVF be the magnitude of motion vector g (the forward motion vector), MVB the magnitude of motion vector h (the backward motion vector), MV the magnitude of motion vector f, TRD the temporal distance between the backward reference frame (frame B7) of the current frame (frame B6) and the frame (frame P9) referenced by the block of that backward reference frame, TRF the temporal distance between the current frame (frame B6) and the forward reference frame (frame P5), and TRB the temporal distance between the current frame (frame B6) and the backward reference frame (frame B7). Then MVF and MVB are obtained from (Formula 3) and (Formula 4), respectively.
Next, a fourth example when the mode selection is direct mode is described with Fig. 7(d).
Let block a be the block of frame B6 to be decoded, and let block b be the block of frame B7 located at the same position as block a. As in the third example, block b was predictively coded by backward reference, and its backward motion vector is motion vector f, which refers to frame P9. In this case, motion vector g, which refers to frame P9 and is obtained using motion vector f, is used as the forward motion vector of block a, and motion vector h, which refers to frame B7 and is obtained using motion vector f, is used as the backward motion vector. As a method of utilizing motion vector f, there is, for example, the method of generating motion vectors parallel to motion vector f. The image data obtained by averaging the forward reference image data and the backward reference image data obtained with these motion vectors become the motion compensation image data.
Here, let MVF be the magnitude of motion vector g (the forward motion vector), MVB the magnitude of motion vector h (the backward motion vector), MV the magnitude of motion vector f, TRD the temporal distance between the backward reference frame (frame B7) of the current frame (frame B6) and the frame (frame P9) referenced by the block of that backward reference frame, and TRF the temporal distance between the current frame (frame B6) and the frame (frame P9) referenced by the block of the backward reference frame. Then MVF and MVB are obtained from (Formula 1) and (Formula 2), respectively.
In addition, illustrate that with Fig. 8 (a) model selection is the 5th example under the Direct Model situation.Here, establish the piece a that comes decoded frame B6 by Direct Model.At this moment, the size of establishing motion vector is 0, and frame P5 as the forward direction reference frame, is used as the back to reference frame with frame B7, by carrying out two-way reference, carries out motion compensation.
Below, illustrate that with Fig. 8 (b) model selection is the 6th example under the Direct Model situation.Here, establish piece a by Direct Model decoded frame B6.Here, the motion vector g that uses when utilizing the piece f identical with piece a of position among the decoded frame P9, frame P9 are the P frames that is positioned at after the frame B6.Motion vector g is stored in the motion vector storage part 1406.Piece a uses the motion vector that utilizes motion vector g to obtain, and frame P5 and the frame B7 of conduct back to reference frame according to as the forward direction reference frame carry out bi-directional predicted.For example, if the same method that generates the motion vector that is parallel to motion vector g of using with the situation of above-mentioned first example, the relative frame P5 of motion vector that then is used to obtain the motion-compensated image data of piece a becomes motion vector h, and frame B7 becomes motion vector i relatively.
At this moment, if the size of establishing as the motion vector h of forward motion vector is MVF, if the size as the motion vector I of backward motion vector is MVB, the size of motion vector g is MV, the time gap of frame (frame P5) that is positioned at present frame (frame B6) P frame (frame P9) afterwards and the piece f institute reference that is positioned at frame thereafter is TRD, present frame (frame B6) is TRF with the time gap of forward direction reference frame (frame P5), present frame (frame B6) is TRB with the back to the time gap of reference frame (frame B7), and then motion vector MVF, motion vector MVB are obtained by (formula 1), (formula 5) respectively.
Next, a seventh example in which the selected mode is direct mode is described with reference to Fig. 8(c). Here, block a of frame B6 is decoded in direct mode. In this example, the assignment of reference indices to the frame numbers described above is changed (remapped), so that the backward reference frame becomes frame P9. In this case, motion vector g, which was used in decoding block f, the block co-located with block a in the decoded frame P9, is used; frame P9 is the backward reference frame of frame B6. Motion vector g is stored in motion vector storage unit 1406. Block a is predicted bidirectionally from frame P5 as the forward reference frame and frame P9 as the backward reference frame, using motion vectors generated from motion vector g. For example, if the method of generating motion vectors parallel to motion vector g is used, as in the first example described above, the motion vectors used to obtain the motion-compensated image data of block a are motion vector h for frame P5 and motion vector i for frame P9.
Here, let MVF be the magnitude of motion vector h, the forward motion vector; MVB the magnitude of motion vector i, the backward motion vector; and MV the magnitude of motion vector g. Let TRD be the temporal distance between the backward reference frame (frame P9) of the current frame (frame B6) and the frame (frame P5) referenced by the block in that backward reference frame, and TRF the temporal distance between the current frame (frame B6) and the forward reference frame (frame P5). Then the magnitude MVF of motion vector h and the magnitude MVB of motion vector i are obtained by (formula 1) and (formula 2), respectively.
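In the examples above that use a stored motion vector, both vectors of the current block are derived by scaling that single vector with ratios of temporal distances. The following sketch is illustrative only: it assumes the parallel-vector relations MVF = MV * TRF / TRD and MVB = MV * (TRF - TRD) / TRD, while the concrete (formula 1), (formula 2) and (formula 5) are defined earlier in the specification and may differ, for example in sign convention or in using TRB for the backward vector:

    /* Illustrative sketch only: temporal direct-mode scaling of the co-located
     * block's motion vector, assuming the parallel-vector relations
     *   MVF = MV * TRF / TRD   and   MVB = MV * (TRF - TRD) / TRD.
     * The concrete (formula 1), (formula 2) and (formula 5) appear earlier in the
     * specification and may differ (e.g. in sign convention or in using TRB).    */
    typedef struct { int x, y; } MotionVector;

    /* Scale one vector component by the ratio of two temporal distances
     * (plain integer truncation; a real codec defines the rounding exactly). */
    static int scale_component(int mv, int tr_num, int tr_den)
    {
        return mv * tr_num / tr_den;
    }

    /* Derive the forward and backward vectors of the current block from the
     * stored vector mv_col of the co-located block, given the temporal
     * distances TRD and TRF described in the text. */
    void derive_direct_mode_vectors(MotionVector mv_col, int trd, int trf,
                                    MotionVector *mv_fwd, MotionVector *mv_bwd)
    {
        mv_fwd->x = scale_component(mv_col.x, trf, trd);
        mv_fwd->y = scale_component(mv_col.y, trf, trd);
        mv_bwd->x = scale_component(mv_col.x, trf - trd, trd);
        mv_bwd->y = scale_component(mv_col.y, trf - trd, trd);
    }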
The motion-compensated image data generated in this way is output to adder 1408. Adder 1408 adds the input prediction error image data and the motion-compensated image data to generate decoded image data. The generated decoded image data is output to frame memory 1407 through switch 1410.
This completes the processing of one macroblock of frame B6. The remaining macroblocks are decoded in order by the same processing. When all macroblocks of frame B6 have been decoded, frame B8 is decoded.
(decoding processing of frame B8)
The operations of code string analysis unit 1401, mode decoding unit 1403 and prediction error decoding unit 1402 up to the generation of the prediction error image data are the same as in the decoding of frame P9, and their description is therefore omitted.
Motion compensation decoding unit 1405 generates motion-compensated image data based on the input motion vector information and the like. Frame B8 was coded with reference to frame B7 as the forward reference frame and frame P9 as the backward reference frame; these frames have already been decoded and are held in frame memory 1407.
When the selected mode is inter picture prediction coding using bidirectional prediction, motion compensation decoding unit 1405 obtains forward reference image data from frame memory 1407 based on the forward motion vector information, and obtains backward reference image data from frame memory 1407 based on the backward motion vector information. Motion compensation decoding unit 1405 then generates motion-compensated image data by taking the average of the forward reference image data and the backward reference image data.
When the selected mode is direct mode, motion compensation decoding unit 1405 obtains the motion vector of frame P9 stored in motion vector storage unit 1406. Using this motion vector, it obtains forward reference image data and backward reference image data from frame memory 1407, and generates motion-compensated image data by taking the average of the two.
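The summation averaging mentioned above amounts to a per-sample mean of the two reference blocks. A minimal sketch, in which the sample type, block length and rounding are assumptions chosen only for illustration:

    /* Average the forward and backward reference image data sample by sample to
     * form the motion-compensated image data (rounded mean of two 8-bit samples). */
    void average_prediction(const unsigned char *fwd_ref, const unsigned char *bwd_ref,
                            unsigned char *pred, int num_samples)
    {
        for (int i = 0; i < num_samples; i++)
            pred[i] = (unsigned char)((fwd_ref[i] + bwd_ref[i] + 1) >> 1);
    }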
An example in which the selected mode is direct mode is described with reference to Fig. 8(d). Here, block a of frame B8 is to be decoded, and block b is the block co-located with block a in frame P9, the backward reference frame. The forward motion vector of block b is motion vector c, and it references frame P5. In this case, motion vector d, generated from motion vector c and referencing frame B7, is used as the forward motion vector of block a, and motion vector e, generated from motion vector c and referencing frame P9, is used as its backward motion vector. As a method of using motion vector c, there is, for example, the method of generating motion vectors parallel to motion vector c. The image data obtained by averaging the forward reference image data and the backward reference image data obtained with these motion vectors becomes the motion-compensated image data.
Here, let MVF be the magnitude of motion vector d, the forward motion vector; MVB the magnitude of motion vector e, the backward motion vector; and MV the magnitude of motion vector c. Let TRD be the temporal distance between the backward reference frame (frame P9) of the current frame (frame B8) and the frame (frame P5) referenced by block b of that backward reference frame, TRF the temporal distance between the current frame (frame B8) and the forward reference frame (frame B7), and TRB the temporal distance between the current frame (frame B8) and the backward reference frame (frame P9). Then the magnitude MVF of motion vector d and the magnitude MVB of motion vector e are obtained by (formula 1) and (formula 5), respectively.
The motion-compensated image data generated in this way is output to adder 1408. Adder 1408 adds the input prediction error image data and the motion-compensated image data to generate decoded image data. The generated decoded image data is output to frame memory 1407 through switch 1410.
This completes the processing of one macroblock of frame B8. The remaining macroblocks are decoded in order by the same processing. Thereafter, each frame is decoded by the processing corresponding to its frame type.
Frame memory control unit 1404 then reorders the image data of the frames held in frame memory 1407 into display order, as shown in Fig. 6(a), and outputs it as output images.
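As an illustration of this reordering, the hypothetical helper below releases decoded frames from the frame memory in display order rather than in decoding order; the structure fields and the function are assumptions, not taken from the specification:

    #include <stddef.h>

    typedef struct {
        int display_order;  /* position of the frame in display (chronological) order */
        int decoded;        /* nonzero once the frame has been fully decoded          */
        /* ... decoded image data ... */
    } Frame;

    /* Return the decoded frame whose display order equals the next position to be
     * output, or NULL if that frame has not been decoded yet and output must wait. */
    Frame *next_output_frame(Frame *frame_memory, size_t n, int next_display_order)
    {
        for (size_t i = 0; i < n; i++)
            if (frame_memory[i].decoded &&
                frame_memory[i].display_order == next_display_order)
                return &frame_memory[i];
        return NULL;
    }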
As described above, in the moving picture decoding method according to the present invention, when a B frame that has been inter-picture prediction coded using bidirectional prediction is decoded, decoded frames that are close in display order are used as the forward reference frame and the backward reference frame.
Furthermore, when direct mode is selected as the coding mode of a B frame, reference image data is obtained from already-decoded image data by referring to the motion vector of the decoded backward reference frame held in motion vector storage unit 1406, and the motion-compensated image data is obtained from it.
By this operation, when a B frame that has been inter-picture prediction coded using bidirectional prediction is decoded, a code string generated by coding with frames close in display order as the forward reference frame and the backward reference frame can be decoded correctly.
In the present embodiment, seven examples of direct mode have been described. However, another method may be used, for example one determined uniquely for each macroblock or block from, among other things, the decoding method of the co-located block in the backward reference frame; alternatively, a plurality of methods may be switched and used in units of blocks or macroblocks. When a plurality of methods is used, information indicating which direct mode was used for coding is recorded in the code string and referred to during decoding. In this case, the operation of motion compensation decoding unit 1405 changes according to this information. For example, when this information is added per block of motion compensation, mode decoding unit 1403 determines which direct mode was used for coding and notifies motion compensation decoding unit 1405 accordingly. Motion compensation decoding unit 1405 then performs the decoding processing described in the present embodiment according to the direct mode that was used, as in the sketch below.
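Where such per-block signalling is used, the decoder simply branches on the variant indicated in the code string. The sketch below is hypothetical: the variant identifiers are invented names for three of the examples in this embodiment, and the scaling helper is the one from the earlier sketch.

    typedef struct { int x, y; } MotionVector;

    /* Defined in the earlier sketch: parallel scaling of the co-located vector. */
    void derive_direct_mode_vectors(MotionVector mv_col, int trd, int trf,
                                    MotionVector *mv_fwd, MotionVector *mv_bwd);

    /* Hypothetical identifiers for the direct-mode variant signalled per block. */
    typedef enum {
        DIRECT_ZERO_VECTOR,      /* fifth example: zero vectors, bidirectional reference */
        DIRECT_SCALE_LATER_P,    /* sixth example: vector of the later P frame           */
        DIRECT_SCALE_COLOCATED   /* seventh example: vector of the backward reference    */
    } DirectVariant;

    void derive_vectors_for_variant(DirectVariant v, MotionVector mv_col,
                                    int trd, int trf,
                                    MotionVector *mv_fwd, MotionVector *mv_bwd)
    {
        if (v == DIRECT_ZERO_VECTOR) {
            mv_fwd->x = mv_fwd->y = 0;
            mv_bwd->x = mv_bwd->y = 0;
        } else {
            /* The sixth and seventh examples differ in which frame holds the
             * co-located block and in the temporal distances passed in, not in
             * the scaling itself. */
            derive_direct_mode_vectors(mv_col, trd, trf, mv_fwd, mv_bwd);
        }
    }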
In the present embodiment, the frame structure in which three B frames are interposed between I or P frames has been described, but the number of B frames may be another value, for example four or five.
In the present embodiment, the case has been described of decoding a code string in which a P frame is coded with reference to one already-coded I or P frame located before or after it in display order, and a B frame is coded with reference to two already-coded frames located near it, before or after, in display order. However, for a P frame, a plurality of already-coded I or P frames located before or after in display order may instead be candidates as reference frames, with each block coded with reference to at most one of them; and for a B frame, a plurality of already-coded nearby frames located before or after in display order may instead be candidates as reference frames, with each block coded with reference to at most two of them.
When motion compensation decoding unit 1405 stores the motion vectors of a block decoded by bidirectional prediction or direct mode in motion vector storage unit 1406, it may store both the forward and backward motion vectors, or only the forward motion vector. If only the forward motion vector is stored, the memory capacity of motion vector storage unit 1406 can be reduced, as sketched below.
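A sketch of that storage trade-off, with layout and sizes chosen purely as assumptions for illustration:

    typedef struct { short x, y; } MV;                /* one vector: 4 bytes               */

    typedef struct { MV forward, backward; } MvBoth;  /* both directions: 8 bytes per block */
    typedef struct { MV forward; } MvForwardOnly;     /* forward only: 4 bytes per block,
                                                         halving the vector memory while
                                                         still supporting the direct-mode
                                                         derivations described above       */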
(embodiment 3)
By recording a program for implementing the configuration of the moving picture coding method or the moving picture decoding method shown in each of the above embodiments on a storage medium such as a flexible disk, the processing shown in each of the above embodiments can easily be carried out in an independent computer system.
Figure 17 illustrates the case where the moving picture coding method and the moving picture decoding method of the above embodiments are carried out by a computer system, using a flexible disk that stores them.
Figure 17(b) shows the front appearance of the flexible disk, its cross-sectional structure and the flexible disk itself, while Figure 17(a) shows an example of the physical format of the flexible disk as the recording medium body. The flexible disk FD is contained in a case F, and on the surface of the disk a plurality of tracks Tr are formed concentrically from the outer circumference toward the inner circumference, each track being divided into 16 sectors Se in the angular direction. Therefore, in the flexible disk storing the above program, the moving picture coding method as the above program is recorded in an area allocated for it on the flexible disk FD.
Figure 17(c) shows the configuration for recording and reproducing the above program on the flexible disk FD. When the above program is recorded on the flexible disk FD, the moving picture coding method or the moving picture decoding method as the above program is written from the computer system Cs via a flexible disk drive. When the above moving picture coding method is built into the computer system from the program on the flexible disk, the program is read from the flexible disk by the flexible disk drive and transferred to the computer system.
The above description uses a flexible disk as the recording medium, but an optical disc can be used in the same way. The recording medium is not limited to these; anything on which a program can be recorded, such as an IC card or a ROM cassette, can be used in the same way.
Here, application examples of the moving picture coding method or the moving picture decoding method shown in the above embodiments, and a system using them, are described.
Figure 18 is a block diagram showing the overall configuration of a content providing system ex100 that realizes a content distribution service. The area in which communication service is provided is divided into cells of the desired size, and base stations ex107-ex110, which are fixed wireless stations, are placed in the respective cells.
In the content providing system ex100, devices such as a computer ex111, a PDA (personal digital assistant) ex112, a camera ex113, a mobile phone ex114 and a camera-equipped mobile phone ex115 are connected to the Internet ex101 via, for example, an Internet service provider ex102, a telephone network ex104 and the base stations ex107-ex110.
However, the content providing system ex100 is not limited to the combination shown in Figure 18, and any of the elements may be connected in combination. Each device may also be connected directly to the telephone network ex104 without going through the base stations ex107-ex110, which are fixed wireless stations.
The camera ex113 is a device such as a digital video camera capable of shooting moving pictures. The mobile phone may be a mobile phone of the PDC (Personal Digital Communications) system, the CDMA (Code Division Multiple Access) system, the W-CDMA (Wideband-Code Division Multiple Access) system or the GSM (Global System for Mobile Communications) system, or a PHS (Personal Handyphone System) phone; any of these may be used.
The streaming server ex103 is connected to the camera ex113 via the base station ex109 and the telephone network ex104, which enables live distribution and the like based on coded data sent by a user using the camera ex113. The coding of the shot data may be performed by the camera ex113 or by a server or the like that performs the data transmission processing. Moving picture data shot by a camera ex116 may also be sent to the streaming server ex103 via the computer ex111. The camera ex116 is a device such as a digital camera capable of shooting still pictures and moving pictures. In this case, either the camera ex116 or the computer ex111 may code the moving picture data. The coding processing is performed in an LSI ex117 included in the computer ex111 or the camera ex116. Software for moving picture coding and decoding may be incorporated in any storage medium (a CD-ROM, a flexible disk, a hard disk or the like) readable by the computer ex111 or the like. Furthermore, the camera-equipped mobile phone ex115 may send moving picture data; in that case, the moving picture data is data coded by the LSI included in the mobile phone ex115.
In this content providing system ex100, content shot by the user with the camera ex113, the camera ex116 or the like (for example, video of a live music performance) is coded in the same manner as in the above embodiments and sent to the streaming server ex103, and the streaming server ex103 streams that content data to clients that have made requests. The clients include the computer ex111, the PDA ex112, the camera ex113 and the mobile phone ex114, which are capable of decoding the coded data. In this way, the content providing system ex100 allows the clients to receive and reproduce coded data, and also realizes personal broadcasting by allowing the clients to receive, decode and reproduce the data in real time.
For the coding and decoding in the devices making up this system, the moving picture coding apparatus or the moving picture decoding apparatus shown in the above embodiments may be used.
A mobile phone is described as an example.
Figure 19 is a diagram showing the mobile phone ex115 that uses the moving picture coding method and the moving picture decoding method described in the above embodiments. The mobile phone ex115 has: an antenna ex201 for transmitting and receiving radio waves to and from the base station ex110; a camera unit ex203, such as a CCD camera, capable of shooting video and still pictures; a display unit ex202, such as a liquid crystal display, for displaying data obtained by decoding the video shot by the camera unit ex203, the video received by the antenna ex201 and the like; a main body including a set of operation keys ex204; a voice output unit ex208, such as a speaker, for outputting voice; a voice input unit ex205, such as a microphone, for inputting voice; a recording medium ex207 for storing coded or decoded data such as data of shot moving pictures or still pictures, received e-mail data, and moving picture data or still picture data; and a slot unit ex206 that allows the recording medium ex207 to be attached to the mobile phone ex115. The recording medium ex207 is a medium, such as an SD card, in which a flash memory element, a kind of EEPROM (Electrically Erasable and Programmable Read Only Memory) that is an electrically rewritable and erasable nonvolatile memory, is housed in a plastic case.
The mobile phone ex115 is described with reference to Figure 20. In the mobile phone ex115, a power supply circuit unit ex310, an operation input control unit ex304, an image coding unit ex312, a camera interface unit ex303, an LCD (Liquid Crystal Display) control unit ex302, an image decoding unit ex309, a multiplexing/demultiplexing unit ex308, a recording/reproducing unit ex307, a modem circuit unit ex306 and a voice processing unit ex305 are connected to one another via a synchronous bus ex313, and to a main control unit ex311 that performs overall control of the respective units of the main body, which includes the display unit ex202 and the operation keys ex204.
When a call-end key or a power key is turned on by the user's operation, the power supply circuit unit ex310 supplies power from a battery pack to the respective units, thereby activating the camera-equipped digital mobile phone ex115 into an operable state.
In the mobile phone ex115, under the control of the main control unit ex311, which comprises a CPU, ROM, RAM and the like, the voice signal collected by the voice input unit ex205 in voice call mode is converted into digital voice data by the voice processing unit ex305, the data is subjected to spread spectrum processing by the modem circuit unit ex306 and to digital-to-analog conversion and frequency conversion by the transmission/reception circuit unit ex301, and is then transmitted via the antenna ex201. Also, in the mobile phone ex115, the data received by the antenna ex201 in voice call mode is amplified, subjected to frequency conversion and inverse spread spectrum processing, converted into analog voice data by the voice processing unit ex305, and then output via the voice output unit ex208.
When an e-mail is sent in data communication mode, the text data of the e-mail entered by operating the operation keys ex204 of the main body is sent to the main control unit ex311 via the operation input control unit ex304. The main control unit ex311 subjects the text data to spread spectrum processing by the modem circuit unit ex306 and to digital-to-analog conversion and frequency conversion by the transmission/reception circuit unit ex301, and then transmits it to the base station ex110 via the antenna ex201.
When image data is sent in data communication mode, the image data shot by the camera unit ex203 is supplied to the image coding unit ex312 via the camera interface unit ex303. When image data is not sent, the image data shot by the camera unit ex203 can also be displayed directly on the display unit ex202 via the camera interface unit ex303 and the LCD control unit ex302.
The image coding unit ex312, which includes the moving picture coding apparatus described in the present invention, compresses and codes the image data supplied from the camera unit ex203 by the coding method used in the moving picture coding apparatus shown in the above embodiments so as to convert it into coded image data, and sends this out to the multiplexing/demultiplexing unit ex308. At the same time, the mobile phone ex115 sends the voice collected by the voice input unit ex205 during shooting by the camera unit ex203, as digital voice data, to the multiplexing/demultiplexing unit ex308 via the voice processing unit ex305.
The multiplexing/demultiplexing unit ex308 multiplexes the coded image data supplied from the image coding unit ex312 and the voice data supplied from the voice processing unit ex305 by a predetermined method; the resulting multiplexed data is subjected to spread spectrum processing by the modem circuit unit ex306 and to digital-to-analog conversion and frequency conversion by the transmission/reception circuit unit ex301, and is then transmitted via the antenna ex201.
When data of a moving picture file linked to a web page or the like is received in data communication mode, the modem circuit unit ex306 performs inverse spread spectrum processing on the data received from the base station ex110 via the antenna ex201, and sends the resulting multiplexed data to the multiplexing/demultiplexing unit ex308.
In order to decode the multiplexed data received via the antenna ex201, the multiplexing/demultiplexing unit ex308 separates the multiplexed data into a bit stream of image data and a bit stream of voice data, supplies the coded image data to the image decoding unit ex309 via the synchronous bus ex313, and supplies the voice data to the voice processing unit ex305.
The image decoding unit ex309, which includes the moving picture decoding apparatus described in the present invention, decodes the bit stream of image data by the decoding method corresponding to the coding method shown in the above embodiments so as to generate reproduced moving picture data, and supplies it to the display unit ex202 via the LCD control unit ex302; thus, for example, the moving picture data included in a moving picture file linked to a web page is displayed. At the same time, the voice processing unit ex305 converts the voice data into an analog voice signal and supplies it to the voice output unit ex208; thus, for example, the voice data included in a moving picture file linked to a web page is reproduced.
The present invention is not limited to the above system. Digital broadcasting by satellite or terrestrial waves has recently become a topic of discussion, and as shown in Figure 21, at least the moving picture coding apparatus or the moving picture decoding apparatus of the above embodiments can also be incorporated into a digital broadcasting system. More specifically, at a broadcasting station ex409, a bit stream of video information is transmitted by radio waves to a communication or broadcasting satellite ex410. The broadcasting satellite ex410 that has received the bit stream transmits radio waves for broadcasting, a home antenna ex406 with satellite broadcast receiving equipment receives these waves, and an apparatus such as a television (receiver) ex401 or a set-top box (STB) ex407 decodes the bit stream and reproduces it. The moving picture decoding apparatus shown in the above embodiments can also be implemented in a reproducing apparatus ex403 that reads and decodes a bit stream recorded on a storage medium ex402 such as a CD or DVD; in this case, the reproduced video signal is displayed on a monitor ex404. The moving picture decoding apparatus may also be incorporated into the set-top box ex407 connected to a cable ex405 for cable television or to the antenna ex406 for satellite/terrestrial broadcasting, and the video may be reproduced on a monitor ex408 of the television. In this case, the moving picture decoding apparatus may be incorporated into the television rather than into the set-top box. A car ex412 having an antenna ex411 can also receive a signal from the satellite ex410, from the base station ex107 or the like, and reproduce moving pictures on a display device such as a car navigation system ex413 mounted in the car ex412.
Furthermore, a video signal can be coded by the moving picture coding apparatus shown in the above embodiments and recorded on a recording medium. Concrete examples include a recorder ex420 such as a DVD recorder that records video signals on a DVD disc ex421 and a disc recorder that records them on a hard disk. The video signals can also be recorded on an SD card ex422. If the recorder ex420 includes the moving picture decoding apparatus shown in the above embodiments, the video signals recorded on the DVD disc ex421 or the SD card ex422 can be reproduced and displayed on the monitor ex408.
As the configuration of the car navigation system ex413, for example a configuration without the camera unit ex203, the camera interface unit ex303 and the image coding unit ex312, out of the configuration shown in Figure 20, is conceivable; the same applies to the computer ex111, the television (receiver) ex401 and the like.
For a terminal such as the above-mentioned mobile phone ex114, three types of implementation are conceivable: a transmission/reception terminal having both a coder and a decoder, a transmission terminal having only a coder, and a reception terminal having only a decoder.
As described above, the moving picture coding method or the moving picture decoding method shown in the above embodiments can be used in any of the devices and systems described above, whereby the effects described in the above embodiments can be obtained.
The present invention is not limited to the above embodiments, and various modifications and corrections are possible without departing from the scope of the present invention.
As described above, according to the moving picture coding method of the present invention, a frame closer in display order can be used as a reference frame when a B frame is coded; this improves the prediction efficiency of motion compensation and therefore improves the coding efficiency.
Furthermore, in direct mode, by scaling the first motion vector of the second reference frame, it is not necessary to transmit motion vector information, and the prediction efficiency can be improved.
Furthermore, in direct mode, by scaling the first motion vector that was effectively used when the second reference frame was coded in direct mode, it is not necessary to transmit motion vector information, and the prediction efficiency can be improved even when the co-located block in the second reference frame was coded in direct mode.
Furthermore, in direct mode, by scaling the second motion vector that was used when the co-located block in the second reference frame was coded, it is not necessary to transmit motion vector information, and the prediction efficiency can be improved even when the co-located block in the second reference frame has only a second motion vector.
Furthermore, by forcibly setting the motion vector in direct mode to 0, it is not necessary to transmit motion vector information when direct mode is selected, and no motion vector scaling is required, so the amount of processing can be reduced.
Furthermore, in direct mode, by scaling the motion vector of a later P frame when the second reference frame is a B frame, it is not necessary to store the motion vectors of that B frame, it is not necessary to transmit motion vector information, and the prediction efficiency can be improved.
Furthermore, in direct mode, the first motion vector of the second reference frame is scaled if it exists, and the second motion vector is scaled if the second reference frame has no first motion vector but only a second motion vector; therefore it is not necessary to add motion vector information to the code string, and the prediction efficiency can be improved.
Furthermore, according to the moving picture decoding method of the present invention, a code string generated by coding a frame by inter picture prediction using bidirectional prediction, with frames close in display order used as the first reference frame and the second reference frame, can be decoded correctly.
Industrial Applicability
As described above, the moving picture coding method and the moving picture decoding method according to the present invention are useful as methods for coding image data corresponding to each of the frames constituting a moving picture to generate a code string, and for decoding the generated code string, in devices such as mobile phones, DVD apparatuses and personal computers.

Claims (4)

1. A picture decoding method for decoding a coded picture, comprising:
a decoding step of, when a current block (a) to be decoded is decoded, determining motion vectors (E1, E2) of the current block (a) based on a motion vector (C1) of a co-located block (b), the co-located block (b) being a block in an already-decoded frame (B7) and being a block located at the same position as the current block,
and performing motion compensation and decoding on the current block (a) in direct mode, using the motion vectors (E1, E2) of the current block and the reference frames corresponding to the motion vectors (E1, E2) of the current block,
wherein, in the decoding step, when the co-located block (b) has been decoded using one motion vector (C1) and one backward reference frame corresponding to that motion vector (C1),
the one motion vector (C1) used in decoding the co-located block (b) is scaled using differences of information indicating the display order of frames, thereby generating, for the current block (a), two motion vectors (E1, E2) to be used for performing motion compensation and decoding on the current block in direct mode, and
the current block (a) is motion-compensated and decoded in direct mode using the two generated motion vectors (E1, E2) and the two reference frames respectively corresponding to the two generated motion vectors.
2. The picture decoding method according to claim 1, wherein:
among the two reference frames respectively corresponding to the two motion vectors of the current block,
the first reference frame is the frame that contains the co-located block, and
the second reference frame is the backward reference frame used in decoding the co-located block, and is the reference frame corresponding to the motion vector to be scaled when the two motion vectors of the current block are generated.
3. The picture decoding method according to claim 2, wherein:
the information indicating the display order of frames comprises:
first information indicating the display order of the frame containing the current block;
second information indicating the display order of the second reference frame of the current block; and
third information indicating the display order of the frame that contains the co-located block and is the first reference frame of the current block; and
the differences of the information are the difference between the first information and the second information, the difference between the first information and the third information, and the difference between the second information and the third information.
4. A picture decoding apparatus for decoding a coded picture, comprising:
a decoding unit that, when a current block to be decoded is decoded, determines motion vectors of the current block based on a motion vector of a co-located block, the co-located block being a block in an already-decoded frame and being a block located at the same position as the current block,
and performs motion compensation and decoding on the current block in direct mode, using the motion vectors of the current block and the reference frames corresponding to the motion vectors of the current block,
wherein, when the co-located block has been decoded using one motion vector and a backward reference frame corresponding to that motion vector,
the decoding unit scales the one motion vector used in decoding the co-located block using differences of information indicating the display order of frames, thereby generating, for the current block, two motion vectors to be used for performing motion compensation and decoding on the current block in direct mode,
and performs motion compensation and decoding on the current block in direct mode using the two generated motion vectors and the two reference frames respectively corresponding to the two generated motion vectors.
CN 200810184669 2002-03-04 2003-02-26 Method and device for encoding of picture Expired - Lifetime CN101431681B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2002056919 2002-03-04
JP056919/2002 2002-03-04
JP2002118598 2002-04-19
JP118598/2002 2002-04-19
JP193027/2002 2002-07-02
JP2002193027 2002-07-02

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNB038053462A Division CN100474933C (en) 2002-03-04 2003-02-26 Moving picture coding method and moving picture decoding device

Publications (2)

Publication Number Publication Date
CN101431681A CN101431681A (en) 2009-05-13
CN101431681B true CN101431681B (en) 2011-03-30

Family

ID=40646805

Family Applications (3)

Application Number Title Priority Date Filing Date
CN 200810184669 Expired - Lifetime CN101431681B (en) 2002-03-04 2003-02-26 Method and device for encoding of picture
CN 200810184664 Expired - Lifetime CN101431679B (en) 2002-03-04 2003-02-26 Method and device for encoding of picture
CN 200810184668 Expired - Lifetime CN101431680B (en) 2002-03-04 2003-02-26 Method and device for encoding of picture

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN 200810184664 Expired - Lifetime CN101431679B (en) 2002-03-04 2003-02-26 Method and device for encoding of picture
CN 200810184668 Expired - Lifetime CN101431680B (en) 2002-03-04 2003-02-26 Method and device for encoding of picture

Country Status (1)

Country Link
CN (3) CN101431681B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102378000B (en) * 2010-08-13 2013-07-17 炬力集成电路设计有限公司 Video frequency decoding device and method thereof
KR101532665B1 (en) 2011-03-14 2015-07-09 미디어텍 인크. Method and apparatus for deriving temporal motion vector prediction
EP3174298B1 (en) * 2011-11-02 2018-09-12 Tagivan Ii Llc Video codec
CN105898328A (en) * 2015-12-14 2016-08-24 乐视云计算有限公司 Self-reference coding included setting method and device for reference frame set
KR20240042245A (en) * 2019-10-10 2024-04-01 베이징 다지아 인터넷 인포메이션 테크놀로지 컴퍼니 리미티드 Methods and apparatuses for video coding using triangle partition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5386234A (en) * 1991-11-13 1995-01-31 Sony Corporation Interframe motion predicting method and picture signal coding/decoding apparatus
CN1136877A (en) * 1993-07-07 1996-11-27 Rca.汤姆森许可公司 Method and apparatus for providing compressed non-interlaced scanned video signal
EP0863674A2 (en) * 1997-03-07 1998-09-09 General Instrument Corporation Prediction and coding of bi-directionally predicted video object planes for interlaced digital video


Also Published As

Publication number Publication date
CN101431680B (en) 2012-01-25
CN101431680A (en) 2009-05-13
CN101431679B (en) 2011-08-10
CN101431679A (en) 2009-05-13
CN101431681A (en) 2009-05-13

Similar Documents

Publication Publication Date Title
CN100474933C (en) Moving picture coding method and moving picture decoding device
JP4130783B2 (en) Motion vector encoding method and motion vector decoding method
CN1312936C (en) Moving image coding method and moving image decoding method
CN102355586B (en) Mobile telephone, set top box, TV set and automobile navigator
CN1662067B (en) Motion estimation method and moving picture coding method
CN101115199B (en) Image decoding method and apparatus
CN101790095B (en) Decoding system, decoding device, encoding device and recording method
WO2003098939A1 (en) Moving image encoding method, moving image decoding method, and data recording medium
CN100428803C (en) Method for encoding and decoding motion picture
JP2004048711A (en) Method for coding and decoding moving picture and data recording medium
CN101431681B (en) Method and device for encoding of picture
CN100574437C (en) Motion vector decoding method and motion vector decoding device
JP2006187039A (en) Motion picture coding method and motion picture decoding method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: MATSUSHITA ELECTRIC (AMERICA) INTELLECTUAL PROPERT

Free format text: FORMER OWNER: MATSUSHITA ELECTRIC INDUSTRIAL CO, LTD.

Effective date: 20141009

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20141009

Address after: Seaman Avenue Torrance in the United States of California No. 2000 room 200

Patentee after: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA

Address before: Osaka Japan

Patentee before: Matsushita Electric Industrial Co.,Ltd.

CX01 Expiry of patent term

Granted publication date: 20110330

CX01 Expiry of patent term