CN1324533C - Decoding method for encoding image data - Google Patents


Info

Publication number
CN1324533C
CN1324533C (application CNB2003101161093A / CN200310116109A)
Authority
CN
China
Prior art keywords
unit
zone
coding
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CNB2003101161093A
Other languages
Chinese (zh)
Other versions
CN1523538A (en)
Inventor
关口俊一
井须芳美
浅井光太郎
Current Assignee
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Publication of CN1523538A
Application granted
Publication of CN1324533C
Anticipated expiration
Expired - Lifetime (current legal status)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 ... using adaptive coding
    • H04N19/102 ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/134 ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/147 ... according to rate distortion criteria
    • H04N19/169 ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 ... the unit being an image region, e.g. an object
    • H04N19/176 ... the region being a block, e.g. a macroblock
    • H04N19/50 ... using predictive coding

Abstract

In partitioning and encoding an image into multiple regions, the degree of freedom of the region shape has generally been low and setting regions based on image features was difficult. A moving image encoding apparatus includes a region partitioning section, an encoder, and a memory for motion-compensated prediction. The region partitioning section includes a partitioning processing section and an integration processing section. The partitioning processing section partitions the input image based on a criterion relating to the state of partition. The integration processing section integrates mutually close regions based on a criterion relating to the state of integration. Thereafter, each region is encoded. A large variety of region shapes can be produced by the integration processing section.

Description

Coding and Decoding Methods for Coded Image Data
The present application is a divisional of parent application No. 98105236.3, filed on February 26, 1998, in the name of Mitsubishi Electric Corporation and entitled "Moving Picture Coding Method, Moving Picture Coding Apparatus and Decoding Apparatus". The parent application claims priority from Japanese application JP97-107072, filed on April 24, 1997.
Technical field
The present invention relates to a method of coding an input moving image and to a method of decoding the coded image data.
Background art
Fig. 27 is a block diagram showing the structure of a moving picture coding apparatus based on ITU-T Recommendation H.263, which represents the first prior art. In the figure, 1 is the input digital image signal (hereinafter simply called the input image), 101 is a subtractor, 102 is the prediction signal, 103 is the prediction error signal, 104 is an encoder, 105 is the coded data, 106 is a decoder, 107 is the decoded prediction error signal, 108 is an adder, 109 is the locally decoded image signal, 110 is a memory, 111 is a predictor, and 112 is the motion vector.
The input image 1 to be coded is first fed to the subtractor 101, which takes the difference between the input image 1 and the prediction signal 102 and outputs it as the prediction error signal 103. The encoder 104 encodes either the original signal, i.e. the input image 1, or the prediction error signal 103, and outputs the coded data 105. As the coding method in the encoder 104, the above Recommendation adopts a scheme in which the prediction error signal 103 is transformed from the spatial domain into the frequency domain by an orthogonal transform, the discrete cosine transform (DCT), and the resulting transform coefficients are uniformly quantized.
The coded data 105 branches in two directions: one branch is sent to a picture decoding apparatus (not shown) on the receiving side, and the other is fed to the decoder 106 within this apparatus. The decoder 106 performs the inverse of the operation of the encoder 104, generating and outputting the decoded prediction error signal 107 from the coded data 105. The adder 108 adds the decoded prediction error signal 107 to the prediction signal 102 and outputs the sum as the decoded image signal 109. The predictor 111 performs motion-compensated prediction using the input image 1 and the decoded image signal 109 of the previous frame stored in the memory 110, and outputs the prediction signal 102 and the motion vector 112. Motion compensation is carried out in units of fixed-size blocks of 16 x 16 pixels, called macroblocks. As an optional mode, blocks in areas of intense motion can be predicted with motion compensation in units of sub-blocks of 8 x 8 pixels, obtained by dividing a macroblock into four. The obtained motion vector 112 is transmitted to the picture decoding apparatus, and the prediction signal 102 is sent to the subtractor 101 and the adder 108. With this apparatus, motion-compensated prediction makes it possible to compress the data volume of a moving image while maintaining picture quality.
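The local decoding loop just described (subtractor 101, encoder 104, decoder 106, adder 108) can be sketched in a few lines. This is a toy sketch only: real encoders use DCT and entropy coding, while here the "encoding" is a stand-in uniform quantizer so that the loop structure is visible; the function name and step size are assumptions.

```python
def encode_frame(frame, prediction, step=4):
    """Return (coded_residual, local_decoded_frame) for 1-D sample lists.

    Toy stand-in for the H.263-style loop: the coded residual would be sent
    to the receiver, and the locally decoded frame would be stored in the
    memory (110) as the reference for predicting the next frame.
    """
    residual = [f - p for f, p in zip(frame, prediction)]       # subtractor 101
    coded = [round(r / step) for r in residual]                 # encoder 104 (toy quantizer)
    decoded_residual = [c * step for c in coded]                # decoder 106
    local_decoded = [p + d for p, d in zip(prediction, decoded_residual)]  # adder 108
    return coded, local_decoded
```

Note that the reference frame is rebuilt from the *quantized* residual, not the original one, so encoder and decoder predictions stay in sync despite the quantization loss.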
Fig. 28 is a structural diagram of the picture coding apparatus of the second prior art. This apparatus is based on the coding method described in L. C. Real et al., "A Very Low Bit Rate Video Coder Based on Vector Quantization" (IEEE Transactions on Image Processing, Vol. 5, No. 2, 1996). In the figure, 113 is a region partitioning unit, 114 is a predictor, 115 is a region determining unit, 116 is coding mode information including interframe/intraframe coding information, 117 is the motion vector, 118 is an encoder, and 119 is the coded data.
In this apparatus, the input image 1 is first divided into a plurality of regions by the region partitioning unit 113, which determines the region size according to the motion compensation error. Using thresholds related to the variance of the interframe signal, the region partitioning unit 113 selects among ten prepared block sizes (4x4, 4x8, 8x4, 8x8, 8x16, 16x8, 16x16, 16x32, 32x16 and 32x32), assigning small blocks to areas of large motion and large blocks to areas of little motion such as the background. Concretely, for the prediction error signal obtained by the predictor 114, the region determining unit 115 computes its variance and determines the block size accordingly. At this point, attribute information such as the region shape information and the coding mode, and the motion vector 117, are determined. The encoder 118 then encodes the prediction error signal or the original signal in accordance with the coding mode information, yielding the coded data 119. The subsequent processing is the same as in the first prior art.
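The variance test used to pick a block size can be sketched as follows. This is an illustrative sketch only, not the cited paper's exact procedure: the helper name `pick_block_size`, the threshold values, and the size list in the example are all assumptions for demonstration.

```python
def pick_block_size(prediction_error, thresholds, sizes):
    """Return the block size whose variance band the error falls into.

    prediction_error: list of per-pixel prediction error values.
    thresholds: ascending variance thresholds separating the size bands.
    sizes: block sizes ordered from largest (low variance) to smallest.
    """
    n = len(prediction_error)
    mean = sum(prediction_error) / n
    variance = sum((e - mean) ** 2 for e in prediction_error) / n
    for t, size in zip(thresholds, sizes):
        if variance <= t:
            return size
    return sizes[-1]  # the most active regions get the smallest block
```

A flat background region has low variance and keeps a large block; a region with large motion falls through every band and is assigned the smallest block.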
In the first prior art, only two region shapes are defined as coding units, and both are square. This naturally restricts coding adapted to the structure or features of the image. For example, when one wants to allocate more code to a large moving subject, it would be desirable to define a region matching the shape of that subject as closely as possible, but this is difficult in the prior art.
The second prior art, which prepares blocks of multiple sizes, offers more processing flexibility than the first. Nevertheless, each region in this apparatus is still limited to a square, and even with squares of ten sizes there is room for improvement in adaptability to image regions of arbitrary shape.
Summary of the invention
The present invention has been made in view of these problems, and its object is to provide a moving picture coding technique that processes images more flexibly according to their condition. A more specific object is to provide a moving picture coding technique using a region partitioning technique that can reliably handle a variety of picture structures. Another object is to provide partitioning criteria, based on various viewpoints, for dividing an image into regions for coding. A further object is to provide a technique for correctly decoding the coded data of regions divided into various shapes.
The moving picture coding method of the present invention comprises: a step of dividing an input image into a plurality of regions according to a predetermined criterion for whether a region may be divided; and a step of merging, for each of the divided regions, that region with its adjacent regions according to a predetermined criterion for whether regions may be merged. It further comprises a step of coding the image signal of each region remaining after merging.
For a given region, the criterion for whether it may be divided is associated with a comparison of the coding quality obtained when the region is divided and when it is not.
For a given region, the criterion for whether it may be merged is associated with a comparison of the coding quality obtained when the region is merged with its adjacent region and when it is not.
The moving picture coding apparatus of the present invention, on the other hand, comprises a region partitioning unit and an encoder. The region partitioning unit comprises a partitioning processing unit and a merging processing unit. The partitioning processing unit divides the input image into a plurality of regions according to a predetermined criterion for whether a region may be divided; the merging processing unit merges each of the regions divided by the partitioning processing unit with its adjacent regions according to a predetermined criterion for whether regions may be merged. The encoder codes the image signal of each final region obtained after merging by the merging processing unit.
The merging processing unit comprises a provisional encoder, a decoder, a coding distortion calculator and an evaluation value calculator. The provisional encoder pre-codes the image of each region; the decoder decodes the image coded by the provisional encoder; the coding distortion calculator computes the coding distortion using the decoded image; and the evaluation value calculator computes an evaluation value for judging coding quality while taking both code amount and coding distortion into account. For each region, whether it can be merged is determined from a comparison of the evaluation value obtained when it is merged with an adjacent region and the evaluation value obtained when it is not.
The partitioning processing unit comprises an activity calculator and a division judging unit. The activity calculator computes, as the activity of each region, the prediction error power accompanying motion-compensated prediction of that region. The division judging unit compares the computed activity with a preset standard value and, when the activity exceeds the standard value, further divides the region into smaller regions.
Alternatively, the activity calculator computes the edge strength of the original signal of each region and takes it as the region's activity; the division judging unit compares the computed activity with a preset standard value and, when the activity exceeds the standard value, further divides the region into smaller regions.
Alternatively, the activity calculator computes, for each region, a linear sum of a plurality of numerical values representing the image characteristics of that region and takes it as the region's activity; the division judging unit compares the computed activity with a preset standard value and, when the activity exceeds the standard value, divides the region into smaller regions.
In this case, the plurality of numerical values may include the code amount of the motion parameters of each region accompanying motion-compensated prediction, and the prediction error power.
The plurality of numerical values may also include the code amount of the motion parameters of each region, the prediction error power accompanying motion compensation, the variance of the original signal, the edge strength, and the magnitude of the motion parameters of each region.
The partitioning processing unit may further comprise a class identifying unit that determines the importance of each region as a class, and whether each region may be divided is judged from both its activity and its class.
The class identifying unit determines the class of each region from the structure of a subject that extends across a plurality of regions.
In this case, the subject structure may be judged from the variance of the original signal of the region, its edge strength, and the degree of connection of its edges with those of adjacent regions.
The class identifying unit may also detect a subject of interest from feature quantities of the image and determine the class of each region according to the result.
In this case, for a subject expected to appear in the image, the class identifying unit may store in advance feature quantities of an image containing that subject, and determine the class of each region from the degree of agreement between the feature quantities of each region's image and the stored feature quantities of the subject.
The partitioning processing unit may comprise a provisional encoder, a decoder, a coding distortion calculator and an evaluation value calculator. The provisional encoder pre-codes the image of each region and computes its code amount; the decoder decodes the image coded by the provisional encoder; the coding distortion calculator computes the coding distortion using the decoded image; and the evaluation value calculator computes an evaluation value for judging coding quality while taking both code amount and coding distortion into account. For each region, whether to divide it is determined from a comparison of the evaluation value obtained when it is divided into smaller regions and the evaluation value obtained when it is not.
The quantization parameter of the prediction error signal accompanying motion-compensated prediction may be made variable in the provisional encoder, and the evaluation value calculator computes the evaluation value while varying the quantization parameter.
Further, an evaluation value calculator that computes, as its evaluation value, a linear sum of the code amount of each region's motion parameters accompanying motion-compensated prediction and the prediction error power may be placed in the stage preceding the provisional encoder, which then detects the motion parameters according to this evaluation value.
The moving picture decoding apparatus of the present invention, on the other hand, is an apparatus that receives and decodes coded data of an image coded after division into a plurality of regions. It comprises a region shape decoder and an image data decoder: the region shape decoder restores the shape of each region divided at coding time from the region shape information contained in the coded data, and the image data decoder determines the decoding order of the regions from the restored region shapes and decodes the image of each region from the coded data.
In this case, the region shape information includes information on the sequence of region division and merging operations performed at coding time, and the region shape decoder grasps the region division state by reproducing, according to this information, the same processing as in the coding apparatus.
Description of drawings
Fig. 1 is a structural diagram of the whole moving picture coding apparatus according to the embodiment.
Fig. 2 is a flowchart showing the operation of the coding apparatus of Fig. 1.
Fig. 3 is an internal structural diagram of the region partitioning unit of Fig. 1.
Fig. 4 is an internal structural diagram of the partitioning processing unit of Fig. 3.
Fig. 5 is a flowchart showing the operation of the partitioning processing unit of Fig. 4.
Fig. 6 is a diagram showing an example of uniform division by the partitioning processing unit of Fig. 4.
Fig. 7 is a diagram showing the result of the first stage of initial division by the partitioning processing unit of Fig. 4.
Fig. 8 is a diagram showing the final result of initial division by the partitioning processing unit of Fig. 4.
Fig. 9 is an internal structural diagram of the merging processing unit of Fig. 3.
Fig. 10 is a flowchart showing the operation of the merging processing unit of Fig. 9.
Fig. 11 is a diagram showing an example of region labelling by the merging processing unit of Fig. 9.
Fig. 12 is a diagram showing an example of adjacent-region setting by the merging processing unit of Fig. 9.
Fig. 13 is a flowchart showing the procedure of step S19 of Fig. 10.
Fig. 14 is an internal structural diagram of another embodiment of the partitioning processing unit of Fig. 3.
Fig. 15 is a diagram showing the final result of initial division by the partitioning processing unit of Fig. 14.
Fig. 16 is an internal structural diagram of another embodiment of the partitioning processing unit of Fig. 3.
Fig. 17 is a flowchart showing the operation of the partitioning processing unit of Fig. 16.
Fig. 18 is a diagram showing another embodiment of the class identifying unit of Fig. 16.
Fig. 19 is a diagram showing motion-compensated prediction by the block matching algorithm.
Fig. 20 is an internal structural diagram of another embodiment of the partitioning processing unit of Fig. 3.
Fig. 21 is a flowchart showing the operation of the partitioning processing unit of Fig. 20.
Fig. 22 is an internal structural diagram of another embodiment of the merging processing unit of Fig. 3.
Fig. 23 is a flowchart showing the operation of the merging processing unit of Fig. 22.
Fig. 24 is an internal structural diagram of another embodiment of the merging processing unit of Fig. 3.
Fig. 25 is an internal structural diagram of the moving picture decoding apparatus according to the embodiment.
Fig. 26 is a flowchart showing the operation of the decoding apparatus of Fig. 25.
Fig. 27 is a diagram showing the moving picture coding apparatus of the first prior art.
Fig. 28 is a diagram showing the picture coding apparatus of the second prior art.
Embodiment
Embodiment 1
Fig. 1 is a block diagram showing the structure of the moving picture coding apparatus according to this embodiment. This apparatus can be used in portable or stationary equipment for image communication, such as videophones and video conferencing. It can also be used as the moving picture coding apparatus in image storage and recording devices such as digital VTRs and video servers. Furthermore, the processing steps of this apparatus can also be used as a moving picture coding program installed in the form of software or DSP (digital signal processor) firmware.
In Fig. 1, 1 is the input image, 2 is the region partitioning unit, 3 is the region shape information, 4 is the region image signal, 5 is the region motion information, 6 is the region attribute information, 7 is the encoder, 8 is the locally decoded image, 9 is a memory, 10 is the reference image, and 11 is the coded bit stream. Fig. 2 is a flowchart showing the operation of this apparatus. The overall operation of the apparatus is first described with reference to Figs. 1 and 2.
The input image 1 is fed to the region partitioning unit 2 (S1), where it is divided into a plurality of regions. The region partitioning unit 2 performs two kinds of processing: initial division (S2), described later, and merging of adjacent regions (S3). For each region obtained as the result of partitioning, the region partitioning unit 2 hands the encoder 7 that region's shape information 3, image signal 4, motion information 5, and attribute information 6 such as the coding mode. The encoder 7 converts this information into a bit pattern according to a predetermined coding method, multiplexes it, and outputs it as the coded bit stream 11 (S4, S5). This continues until the final region has been coded (S6, S7). Further, to carry out region partitioning and coding based on motion-compensated prediction, the encoder 7 generates a locally decoded image 8 for each region and stores it in the memory 9. The region partitioning unit 2 and the encoder 7 take the locally decoded image stored in the memory 9 as the reference image 10 and perform motion-compensated prediction.
Fig. 3 is a detailed structural diagram of the region partitioning unit 2. In the figure, 12 is the partitioning processing unit, 13 is the initial division shape information, and 14 is the merging processing unit.
(1) Initial division
The initial division corresponding to S2 of Fig. 2 is performed in the partitioning processing unit 12. Initial division means the division carried out before merging; the total number of divisions depends on the state of the image, that is, on its features or characteristics.
Fig. 4 shows the internal structure of the partitioning processing unit 12. In the figure, 15 is a uniform division unit, 16 is the activity calculator, 17 is the activity, 18 is the division judging unit, and 19 is the division state indicating signal. The activity is quantified data, related to a prescribed property, that serves to judge the features or characteristics of the image. Here, the prediction error power accompanying motion-compensated prediction of a region is adopted as the activity.
Fig. 19 shows the method of motion-compensated prediction using the block matching algorithm. In the block matching algorithm, the vector v within the search range that minimizes the prediction error power of the predicted region S is obtained as the motion vector:
(Expression 1)
D_min = min_{v in R} Σ_{(x,y) in S} [fs(x + v_x, y + v_y, t - 1) - fs(x, y, t)]²
Here, fs(x, y, t) is the pixel value at position (x, y) of the predicted region S at time t, fs(x, y, t - 1) is the pixel value at position (x, y) at time t - 1, and fs(x + v_x, y + v_y, t - 1) is the pixel value at the position displaced from (x, y) by the vector v at time t - 1. R denotes the motion vector search range.
Using the vector v obtained as this result, the predicted image is given by fs(x + v_x, y + v_y, t - 1), and the prediction error power, i.e. the activity, is D_min. Defining the activity in this way allows regions to be divided according to the complexity of the local motion of the image, enabling fine control such as coding parts with intense motion densely and parts with little motion coarsely. Affine motion compensation, which computes affine motion parameters, or perspective motion compensation, which detects three-dimensional motion, may also be used.
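The activity computation of Expression 1 can be sketched as a full-search block match. This is a minimal sketch under stated assumptions: frames are plain 2-D lists, the error measure is the squared sum (matching "prediction error power"), and the function name and search-window handling are illustrative rather than taken from the patent.

```python
def block_match(cur, prev, x0, y0, bs, search):
    """Return (D_min, best_vector) for the bs x bs block at (x0, y0).

    cur, prev: 2-D lists of pixel values (frames at time t and t-1).
    search: half-width of the square motion vector search range R.
    """
    h, w = len(prev), len(prev[0])
    best = (float("inf"), (0, 0))
    for vy in range(-search, search + 1):
        for vx in range(-search, search + 1):
            if not (0 <= x0 + vx and x0 + vx + bs <= w and
                    0 <= y0 + vy and y0 + vy + bs <= h):
                continue  # the displaced block must stay inside the frame
            # squared prediction error power for candidate vector (vx, vy)
            d = sum((cur[y0 + y][x0 + x] - prev[y0 + vy + y][x0 + vx + x]) ** 2
                    for y in range(bs) for x in range(bs))
            if d < best[0]:
                best = (d, (vx, vy))
    return best
```

The returned D_min is the activity of the block; the returned vector would be the motion vector 112 sent to the decoder.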
Fig. 5 is a flowchart showing the operation of the partitioning processing unit 12. As shown, the uniform division unit 15 first performs unconditional uniform block division (S8); for example, one frame is divided into blocks of 32 x 32 pixels as shown in Fig. 6. This division step is called the 0th division level. The number of blocks generated at the 0th division level is denoted N_0, and each block is denoted B^0_n (1 <= n <= N_0).
Next, for each B^0_n it is judged individually whether, and how, block division should be carried out further (S9). To this end the activity calculator 16 computes the activity 17 of each B^0_n. The division judging unit 18 compares the activity of each block with a preset threshold TH0, and when the activity 17 is larger than TH0, B^0_n is further divided into four (S10). This is repeated up to the final block (S11, S12). This is the 1st division level.
Fig. 7 shows the division state of the image at the end of the 1st division level. The number of newly generated 16 x 16 pixel blocks is denoted N_1, and each such block B^1_n (1 <= n <= N_1). Thereafter the activity of each B^1_n is computed, and the 2nd division level is carried out using threshold TH1. In general, threshold TH_j is applied to the blocks B^j_n generated at the j-th division level to carry out the (j+1)-th division level (S13-S16). When j reaches a preset upper limit, the initial division ends; here, for ease of explanation, division ends with the 2nd division level. In this case the blocks shown in Fig. 8 are finally generated, with sizes from 8 x 8 pixels to 32 x 32 pixels. The number of blocks at the end of the initial division is denoted M_0, and each block is denoted as an initial region S^0_n. The shape information of the S^0_n is sent to the merging processing unit 14 as the initial division shape information 13.
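The initial division loop (S8 to S16) amounts to a quadtree refinement and can be sketched as follows, assuming a caller-supplied activity function and placeholder thresholds; in the apparatus the activity is the motion-compensated prediction error power, and the threshold values TH_j would be tuned per level.

```python
def initial_segmentation(frame_w, frame_h, activity, thresholds, block=32):
    """Return a list of (x, y, size) leaf blocks.

    activity(x, y, size) -> numeric activity of the block at (x, y).
    thresholds: TH_j for each division level; its length caps the depth.
    """
    leaves = []

    def split(x, y, size, level):
        # Divide into four while the level's threshold is exceeded (S9, S10).
        if level < len(thresholds) and activity(x, y, size) > thresholds[level]:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    split(x + dx, y + dy, half, level + 1)
        else:
            leaves.append((x, y, size))

    for y in range(0, frame_h, block):
        for x in range(0, frame_w, block):
            split(x, y, block, 0)  # the 0th division level: uniform 32x32 blocks
    return leaves
```

With two thresholds the refinement stops at the 2nd division level, so a 32 x 32 block can shrink to 8 x 8 at most, matching the final block sizes of Fig. 8.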
(2) Merging of adjacent regions
Next, in the merging processing unit 14, each S^0_n is merged with adjacent regions. Fig. 9 shows the internal structure of the merging processing unit 14. In the figure, 20 is a labelling unit, 21 is the adjacent region setting unit, 22 is a provisional encoder, 23 is a decoder, 24 is the coding distortion calculator, 25 is the evaluation value calculator, 26 is the constant used by the evaluation value calculator, 27 is the merging judging unit, and 28 is the merging iteration indicating signal.
Fig. 10 is a flowchart showing the operation of the merging processing unit 14. As shown, the labelling unit 20 first assigns a label, i.e. a number, to each initial region S^0_n according to a fixed rule (S17). For example, the picture frame is scanned pixel by pixel horizontally from the upper left corner to the lower right corner, and numbers are given to regions in the order encountered. Fig. 11 shows a simple labelling example: labels [1], [2], ... are given to regions in the order they appear on the scan line. The size of a region is not restricted here. Below, the label value of region S^k_n is written l(S^k_n). The index k corresponds to the k-th merging level described later; in the initial state k = 0.
Next, the adjacent region setting unit 21 defines the "adjacent regions" of each region using the labels (S18). Fig. 12 shows an example of adjacent regions, namely the adjacent regions of S^0_n under the labels of Fig. 11. That is, the regions B, C and D, which share an edge with the subject region A and have label values larger than that of A, are defined as its adjacent regions.
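The labelling and adjacency rules (S17, S18) can be sketched with a small helper. The sketch assumes the region layout is available as a 2-D label map (one region number per pixel); keeping only edge-connected neighbours with a larger label, as in Fig. 12, ensures each candidate pair appears exactly once.

```python
def adjacent_regions(label_map):
    """Return {label: set of edge-adjacent labels greater than it}."""
    h, w = len(label_map), len(label_map[0])
    adj = {}
    for y in range(h):
        for x in range(w):
            a = label_map[y][x]
            adj.setdefault(a, set())
            for nx, ny in ((x + 1, y), (x, y + 1)):  # 4-connectivity, forward only
                if nx < w and ny < h:
                    b = label_map[ny][nx]
                    if b > a:
                        adj[a].add(b)
                    elif b < a:
                        # record the pair under the smaller label
                        adj.setdefault(b, set()).add(a)
    return adj
```

Because every pair is stored under its smaller label, iterating over the returned sets visits each candidate merge once, mirroring the "larger label only" adjacency definition in the text.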
Then, could be comprehensive according to the region decision zone with its adjacent domain.Therefore, at temporary transient coding unit 22, decoding unit 23, coding distortion computing unit 24, evaluation of estimate computing unit 25, calculating is used for comprehensive evaluation of estimate (S19).Evaluation of estimate is the encoding amount shown in the expression, i.e. distortion cost L (S k n).
L(S_n^k) = D(S_n^k) + λR(S_n^k)    (Expression 1)
Here, D(S_n^k) is the coding distortion of S_n^k, i.e. the sum of squared errors, R(S_n^k) is the code amount of S_n^k, and λ is the constant 26. Merging proceeds in the direction that decreases L(S_n^k). For a given constant λ, a smaller L(S_n^k) corresponds to less coding distortion within a prescribed code amount, so by reducing the sum of L(S_n^k) over the frame, the coding distortion for the same total code amount can be reduced.
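The rate-distortion cost of Expression 1 can be sketched directly. A minimal sketch; the region signals and the rate value passed in are illustrative stand-ins for the provisional encoder's output:

```python
# Rate-distortion cost from Expression 1: L = D + lambda * R,
# where D is the sum of squared errors between original and decoded region.
import numpy as np

def rd_cost(original, decoded, rate_bits, lam):
    """Return L = D + lam * R for one region."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    d = float(np.sum(diff ** 2))          # coding distortion D
    return d + lam * rate_bits            # rate-distortion cost L
```

Merging (or splitting) decisions then reduce to comparing such cost values for the candidate region shapes.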
Figure 13 is a detailed flowchart of S19. First, the provisional encoding unit 22 pre-encodes S_n^k (S22). The purpose of this encoding is to obtain the code amount R(S_n^k) and to derive the coding distortion D(S_n^k). In the present embodiment, the provisional encoding unit 22 uses motion-compensated prediction with a reference picture. The coded data comprises the image data, i.e. the prediction-error signal or the original signal, together with attribute information such as the motion information used to determine the predicted image and the coding mode; the sum of these code amounts is R(S_n^k). The prediction-error signal is obtained as the difference between the original signal of region S_n^k and the predicted image.
Meanwhile, the decoding unit 23 generates a local decoded image of S_n^k from the coded data produced by the provisional encoding unit 22 (S23). The coding-distortion calculation unit 24 then computes the distortion D between the local decoded image and the original image (S24). The evaluation-value calculation unit 25 computes the rate-distortion cost L(S_n^k) from R(S_n^k) and D(S_n^k) (S25).
In step S19, the above evaluation value is computed over all regions for the following three cases:
1. each region S_n^k itself ... L(S_n^k)
2. each adjacent region Ni[S_n^k] of S_n^k ... L(Ni[S_n^k])
3. the provisional region merging S_n^k with Ni[S_n^k] ... L(S_n^k + Ni[S_n^k])
Here, Ni[S_n^k] denotes an adjacent region of S_n^k, and i is a number used to distinguish between multiple adjacent regions.
Then the merge decision unit 27 searches the picture frame for the place where D_L = L(S_n^k) + L(Ni[S_n^k]) − L(S_n^k + Ni[S_n^k]) is largest, and merges that S_n^k with Ni[S_n^k] (S20). This constitutes merge level k. The merge decision unit 27 then instructs the labeling unit 20, via the merge-iteration indication signal 28, to update the labels: the labeling unit 20 replaces label l(Ni[S_n^k]) with label l(S_n^k), and the adjacent-region setting unit 21 sets the adjacent regions anew. New regions S_n^{k+1} and adjacent regions Ni[S_n^{k+1}] are thus obtained, and L(S_n^{k+1}), L(Ni[S_n^{k+1}]), and L(S_n^{k+1} + Ni[S_n^{k+1}]) are computed. At the moment no merge yields a positive D_L, the merge decision unit 27 stops instructing the labeling unit 20, and the merging process ends (S21).
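The greedy merge loop above can be sketched as follows. A minimal sketch under assumptions: regions are pixel sets keyed by label, `neighbors` returns candidate adjacent pairs, and `cost` stands in for the provisional encode/decode/evaluate path of S19:

```python
# Greedy region merging driven by the cost drop
# D_L = L(a) + L(b) - L(a merged with b); stop when no drop is positive.
def merge_regions(regions, neighbors, cost):
    """regions: dict label -> frozenset of pixels; neighbors(regions) yields
    (a, b) label pairs; cost(pixel_set) returns the region's L value."""
    while True:
        best, best_gain = None, 0.0
        for a, b in neighbors(regions):
            gain = cost(regions[a]) + cost(regions[b]) - cost(regions[a] | regions[b])
            if gain > best_gain:
                best, best_gain = (a, b), gain
        if best is None:                       # no merge lowers total cost
            return regions
        a, b = best
        regions[a] = regions[a] | regions[b]   # relabel b's pixels as a
        del regions[b]
```

With a cost carrying a fixed per-region overhead, the loop keeps merging until the overhead savings no longer outweigh the merged region's distortion.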
This completes the processing related to region splitting and merging. Finally, the information 3 representing the region-partition state of the input image 1, together with the image data 4, motion information 5, and attribute information 6 of each region, is output to the encoding unit 7, where it is encoded by a prescribed coding method.
In the present embodiment, not only splitting but also merging is performed. As a result, each final region can be represented as a set of square blocks of various sizes; for example, a large moving subject in the image can be merged into a single region approximating its outline. Consequently, coding parameters such as code-amount control can be adapted flexibly to the actual structure of the picture. Moreover, under a given code-amount constraint, an optimal region partition that minimizes coding distortion is achieved. Compared with conventional moving picture encoding devices, higher picture quality is therefore readily obtained with a smaller code amount.
In the present embodiment the initial splitting ends at the 2nd split level, but it may of course end at another level. For example, when the overall image motion is small, splitting may end at the 1st level; in the opposite case, the number of levels may be increased. Also, while a picture frame is taken as the coding target here, the same applies equally to an arbitrarily shaped subject within a picture frame, such as rectangular image data enclosed by a bounding quadrilateral.
In the present embodiment, the encoding unit 7 and the provisional encoding unit 22 encode a region S_n^k by combining DCT with uniform quantization, but other coding methods such as vector quantization, sub-band coding, or wavelet coding may be used instead. A configuration is also possible in which several coding methods are prepared and the one with the highest coding efficiency is selected.
In the present embodiment, prediction-error power is used as the activity measure, but the following alternative examples may also be considered.
The first example is the variance within a region. The variance expresses the complexity of the pixel distribution in the region; it becomes large in regions containing sharp changes of pixel value, such as edges. With the pixel values in region S written f_S(x, y, t) and their mean written μ_S, the variance σ_S is given by the following expression.
σ_S = (1/N) Σ_S (f_S(x, y, t) − μ_S)²    (Expression 2)
If this activity measure is adopted, regions can be split according to the complexity of the local image structure, allowing control such as encoding parts with sharply varying pixel values finely and parts with little variation coarsely.
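The variance activity of Expression 2 is a one-liner over a region block. A minimal sketch; the block is assumed to be a NumPy array of pixel values:

```python
# Region activity from Expression 2: sigma_S = (1/N) * sum((f - mu)^2),
# i.e. the variance of the pixel values within the block.
import numpy as np

def activity_variance(block):
    """Return the variance of pixel values in the region block."""
    b = block.astype(np.float64)
    return float(np.mean((b - b.mean()) ** 2))
```

A split decision would compare this value against a preset threshold, splitting only the high-variance blocks.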
The second example is the edge strength within a region. Edge strength can be computed, for example, with the Sobel operator described in G. Robinson, "Edge detection by compass gradient masks" (Journal of Computer Graphics and Image Processing, Vol. 6, No. 5, Oct. 1977), and taken as the number of pixels lying on edges (the edge-distribution area). With this method, regions can be split according to the edge structure of the image, so that parts containing edges are encoded finely and parts without edges coarsely.
The third example is based on the magnitude of the region's motion-compensated prediction parameters. The motion parameters are obtained as the result of motion-compensated prediction; in the case of block matching, this is the motion vector V. With this method, regions can be split according to the degree of motion in the image, so that strongly moving parts such as subject regions are encoded finely, while parts producing little motion, such as the background, are encoded coarsely.
The fourth example is a linear sum of the code amount of the region's motion parameters for motion-compensated prediction and the prediction-error power. The evaluation value in this case is defined by the following expression.
L_mc = D_mc + λR_mc    (Expression 2)
Here, D_mc is the prediction-error power computed during motion-parameter detection, λ is a constant, and R_mc is the code amount of the motion parameters. The motion parameters minimizing L_mc are computed, and the evaluation value at that point is taken as the activity. With this method, regions can be split according to the complexity of the image motion so that the total coding cost, including both the information content of the image and that of the motion parameters, is kept small; region coding can thus be performed with a small amount of information.
The fifth example is a linear sum of the activity values described so far. By weighting each activity appropriately, a variety of images can be accommodated.
Embodiment 2
The present embodiment relates to a device in which the region splitting unit 2 of embodiment 1 is partly modified. Figure 14 shows the internal structure of the region splitting unit 2 of this embodiment. As shown, the region splitting unit 2 of embodiment 2 replaces the splitting unit 12 of Fig. 3 with a uniform splitting unit 15. In this configuration, as shown in Figure 15, the initial splitting performs no activity thresholding but unconditionally splits the frame uniformly into square blocks of the minimum region area. The minimum region area may also be made settable.
No threshold setting is needed in this embodiment; region splitting is carried out with the rate-distortion cost as the only evaluation value. Besides removing the procedures associated with threshold setting, this also removes the activity computation and comparison. This embodiment can therefore be used in place of embodiment 1 when the computational load of those processes is to be reduced.
Embodiment 3
In the splitting process of the present embodiment, the split decision uses not only the activity but also an index expressing the importance of a region (hereinafter called its grade). Regions of high importance are preferably encoded finely with a small region area, while regions of low importance are made as large as possible to reduce the code amount per pixel.
The activity is, for example, a statistic local to a region. The grade of the present embodiment, by contrast, is based on image features spanning regions. Here the grade is defined from the subject structure extending across regions, i.e. the degree to which a person's attention is drawn to the region. For example, when the edge distribution of a region extends over a wide area and connects strongly to adjacent regions, there is a high probability that the region lies on a subject boundary.
Figure 16 shows the internal structure of the splitting unit 12 of this embodiment. The rest of the structure is the same as embodiment 1, so mainly the differences from embodiment 1 are explained. In the figure, 29 is a grade recognition unit, 30 the grade, and 31 a split decision unit. Figure 17 is a flowchart showing the operation of the splitting unit 12 of Figure 16.
As shown in Figure 17, uniform splitting is performed first (S26). The grade recognition unit 29 then determines the grade 30 of each region (S27). It evaluates the variance α within the region, the edge distribution β within the region (including edge direction, distribution area, etc.), and the connectivity γ of the edges with adjacent regions, and determines the grade. For example, regions whose variance α is smaller than a preset value are treated as low grade (grade C), and the edge distribution β is computed only for regions with larger α. β can be quantified, for example, with the Sobel operator mentioned above. When β is smaller than a preset value, the region is regarded as an isolated small region rather than one containing a subject-boundary edge, and is given medium grade (grade B). When β exceeds a certain level, the connectivity γ is evaluated, and if γ is large the region is classified into the most important grade (grade A).
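The α/β/γ cascade above can be sketched as a small decision function. A minimal sketch; the threshold values and the 'B' fallback for weakly connected edges are illustrative assumptions:

```python
# Grade classification cascade: variance alpha, edge distribution beta,
# edge connectivity gamma with adjacent regions -> grade 'A', 'B', or 'C'.
def classify_grade(alpha, beta, gamma, t_alpha=10.0, t_beta=5.0, t_gamma=0.5):
    if alpha < t_alpha:
        return 'C'                # flat region: low importance
    if beta < t_beta:
        return 'B'                # textured, but no boundary-like edge
    return 'A' if gamma > t_gamma else 'B'   # connected edge: subject boundary
```

The split decision unit would then permit finer splitting for grade A than for grades B and C.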
After grade classification, the activity calculation unit 16 computes the activity 17, and the split decision unit 31 first performs the threshold decision on the activity (S28). For regions judged splittable, it then decides from the grade 30 whether splitting is permitted (S29). For this purpose, the split decision unit 31 holds a predetermined standard, e.g. that regions of each grade may only be split down to a certain size. If splitting is permitted for the grade, the region is split (S30). This splitting process is applied to all regions, and the same process is applied again to the newly generated regions (S33–S38).
According to the present embodiment, image features spanning several regions, in particular the outline of a subject, can be taken into account in the coding. Regions attracting little attention can be encoded coarsely to reduce their information content, and the information thus saved can be allocated to regions attracting strong attention.
Embodiment 4
In embodiment 3, the degree of human attention was used to determine the grade. In the present embodiment, image feature values are used instead: feature values of certain known images are held in advance, and the grade is determined from their degree of agreement with the feature values computed for each region.
For example, facial images have been studied extensively, and various methods have been proposed that quantify the structure of a face with feature values. If such feature values are held, a human face (broadly speaking, a region of high importance) can be detected in the image. Many other subjects can likewise be described with feature values based on luminance and texture information. If a human face is to be rendered sharply, regions whose feature values agree with the facial feature values are assigned the most important grade A, and the remaining regions a grade B of ordinary importance, and so on.
Figure 18 shows the structure of the grade recognition unit 29 of this embodiment; the other parts are the same as embodiment 3. In Figure 18, 32 is a feature-value memory, 33 a feature-agreement calculation unit, and 34 a grade determination unit.
Feature values of subjects are classified by grade and held in the feature-value memory 32. The feature-agreement calculation unit 33 computes the degree of agreement between the input image 1 and the feature values of the subjects of each grade. The degree of agreement is obtained, for example, as the error between the feature values of the input image 1 and those in the feature-value memory 32. The grade determination unit 34 then detects the subject with the highest degree of agreement and classifies the region into that subject's grade.
Thus, according to the present embodiment, the feature values of the image can be used to recognize or detect subjects, and the picture quality can be raised for the desired subjects. Subjects can also be classified by feature values related to the degree of human attention; in that case, coding can take human visual characteristics into account.
Embodiment 5
In embodiment 1, coding distortion was considered during merging. In the present embodiment, it is considered already in the splitting stage.
Figure 20 shows the internal structure of the splitting unit 12 of this embodiment. In the figure, 35 is a split decision unit and 36 a split-iteration indication signal. Figure 21 is a flowchart showing the operation of the splitting unit 12 of Figure 20.
The splitting unit 12 of this embodiment uses Expression 1 introduced in embodiment 1. By carrying out the initial splitting in the direction that reduces the sum of L(S_n^k) over the frame, the coding distortion for the same code amount can be reduced.
As shown in Figure 21, the uniform splitting unit 15 first performs uniform block splitting, for example into the state of Fig. 6 (S39). This corresponds to the 0th split level. Let the number of blocks obtained be N_0 and write each block B_n^0 (1 ≤ n ≤ N_0). For each B_n^0 it is judged whether to split the block further: L(B_n^0) for B_n^0 is compared with the sum of L(SB_n^0(i)) over the four sub-blocks SB_n^0(i) (1 ≤ i ≤ 4) obtained by splitting B_n^0 into quarters, and splitting is permitted only if the latter is smaller.
To compute the rate-distortion cost, the provisional encoding unit 22 first encodes B_n^0 and SB_n^0(i). The decoding unit 23 then generates local decoded images of B_n^0 and SB_n^0(i) from the coded data produced by the provisional encoding unit 22. Next, the coding-distortion calculation unit 24 computes the distortions D(B_n^0) and D(SB_n^0(i)) between the local decoded images and the original image, and the evaluation-value calculation unit 25 computes L(B_n^0) and L(SB_n^0(i)) from the code amounts R(B_n^0), R(SB_n^0(i)) and the distortions D(B_n^0), D(SB_n^0(i)) (S40, S41).
The split decision unit 35 compares L(B_n^0) with the sum of L(SB_n^0(i)) over the four sub-blocks (i = 1, 2, 3, 4) (S42), and if the latter is smaller, splits B_n^0 into the four SB_n^0(i) (S43). This corresponds to the 1st split level. The newly created sub-blocks are written B_n^1 (1 ≤ n ≤ N_1), and the same split decision is applied to them (S46–S51). The same splitting process is repeated a prescribed number of times, finally reaching, for example, the split state shown in Figure 8.
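The recursive split decision above amounts to a rate-distortion-driven quadtree. A minimal sketch; `cost(x, y, size)` stands in for the provisional encode/decode/evaluate path and is assumed given:

```python
# Quadtree split by rate-distortion cost: split a block when the summed cost
# of its four quarters is lower than its own cost; recurse down to min_size.
def split_block(x, y, size, cost, min_size):
    """Return the list of (x, y, size) leaf blocks of the quadtree."""
    if size <= min_size:
        return [(x, y, size)]
    half = size // 2
    quads = [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]
    if sum(cost(qx, qy, half) for qx, qy in quads) < cost(x, y, size):
        leaves = []
        for qx, qy in quads:
            leaves += split_block(qx, qy, half, cost, min_size)
        return leaves
    return [(x, y, size)]
```

Running this once per top-level block of the uniform partition reproduces the level-by-level split of S39–S51.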
Since the present embodiment performs no activity-related computation, it is particularly useful when reducing the amount of computation is a priority.
Embodiment 6
Another example of the merging unit 14 of Figure 9 of embodiment 1 is described. Figure 22 shows the internal structure of the merging unit 14 of this embodiment. In the figure, 37 is a quantization-parameter setting unit, 38 the quantization parameter, and 39 a provisional encoding unit. The operation of this merging unit 14 is basically the same as Figure 10, differing only in S19.
Figure 23 is a flowchart showing the evaluation-value computation corresponding to S19. The evaluation value is computed by the provisional encoding unit 39, decoding unit 23, coding-distortion calculation unit 24, and evaluation-value calculation unit 25.
First, the quantization-parameter setting unit 37 sets the initial parameter value and outputs it to the provisional encoding unit 39 (S52). The provisional encoding unit 39 then encodes region S_n^k (S53), quantizing with the quantization parameter that was set.
The decoding unit 23 generates the local decoded image of S_n^k from the coded data thus obtained (S54). The coding-distortion calculation unit 24 then computes the distortion D(S_n^k) between the local decoded image and the original image (S55), and the evaluation-value calculation unit 25 computes L(S_n^k) from the code amount R(S_n^k) and the distortion D(S_n^k) (S56). The cost value obtained in the first pass is kept as Lmin; thereafter the quantization parameter is changed and the same cost computation is repeated. Since changing the quantization parameter changes the balance between code amount and distortion, the parameter minimizing the rate-distortion cost is adopted, and the cost at that parameter is taken as the final rate-distortion cost L(S_n^k) of region S_n^k (S57–S60). The subsequent processing is the same as embodiment 1.
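The quantization-parameter sweep of S52–S60 can be sketched as below. A minimal sketch; `encode(region, qp)` stands in for the provisional encode/decode path and is assumed to return a (distortion, rate) pair:

```python
# QP sweep: encode the region at each candidate quantization parameter and
# keep the one minimizing L = D + lambda * R (tracked as Lmin in the text).
def best_qp(region, qps, encode, lam):
    """Return (qp, L) with the smallest rate-distortion cost over qps."""
    best_qp_val, l_min = None, float('inf')
    for qp in qps:
        d, r = encode(region, qp)
        l = d + lam * r
        if l < l_min:
            best_qp_val, l_min = qp, l
    return best_qp_val, l_min
```

The returned cost is then used as the region's final L(S_n^k) in the merge decision.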
According to the present embodiment, optimal merging is achieved while taking the quantization parameter into account. The method of including the quantization parameter can also be applied to the rate-distortion-cost-based splitting described in embodiment 5.
Embodiment 7
The present embodiment describes a further example of embodiment 6. Figure 24 shows the internal structure of the merging unit 14 of this embodiment. In the figure, 40 is a motion-compensated-prediction cost calculation unit, 41 the motion-compensated-prediction cost, and 42 a provisional encoding unit.
The provisional encoding unit 42 determines the motion parameters using coding based on motion-compensated prediction. At this point, the motion-compensated-prediction cost of embodiment 1 (Expression 2) is used: the motion parameters for provisional encoding are determined so as to minimize the cost balancing the matching distortion of motion compensation against the code amount of the motion parameters. Specifically, in the encoding performed by the provisional encoding unit 42, the motion parameters are determined from the cost values computed by the motion-compensated-prediction cost calculation unit 40. The subsequent processing is the same as embodiment 6.
According to the present embodiment, the region shape can be determined while minimizing, via the given constant λ, the rate-distortion cost of motion-compensated inter-frame coding. As a result, coding distortion can be reduced within a prescribed code amount.
Embodiment 8
The present embodiment describes a moving image decoding device that decodes the coded bit streams generated by the various moving picture encoding devices described so far. Figure 25 shows the structure of the decoding device. In the figure, 43 is a bit-stream analysis unit, 44 a region-shape decoding unit, 45 an attribute-information decoding unit, 46 an image-data decoding unit, 47 a motion-information decoding unit, 48 the motion parameters, 49 a motion compensation unit, 50 the predicted image, 51 an image restoration unit, 52 an external memory, and 53 the reproduced image.
This decoding device decodes a coded bit stream comprising region-shape information representing the region-partition state of a picture frame or a partial image within a picture frame (hereinafter "picture frame etc."), the image data of each region encoded by a prescribed method, the attribute information of each region, and the motion information of each region; it restores the region images and reproduces the picture frame etc.
Since non-square regions arise in this embodiment, the description method for the region-shape information differs from existing general methods. Methods that can be adopted here include (i) writing the vertex coordinates of each region, or (ii) writing the split and merge history followed during encoding. With method (ii), for arbitrary i and j, the numbers of the regions split at split level i and of the regions merged at merge level j are described. The decoding device, like the encoding device, first performs the 0th-level split shown in Fig. 6, after which the final partition state can be restored in the same order as in the encoding device. With method (ii), the data amount is generally smaller than with directly describing the coordinate data.
Figure 26 is a flowchart showing the operation of the decoding device. First, the coded bit stream 11 is input to the bit-stream analysis unit 43 and converted from a bit string into coded data (S61). The region-shape decoding unit 44 then decodes the region-shape information in the coded data and restores the region-partition state of the picture frame etc. by the method described above (S62). Restoring the partition determines the order in which the region information encoded in the bit stream follows. Each region is written S_n.
Then, following the coding order, the data of each region is decoded in turn from the bit stream. First, the attribute-information decoding unit 45 decodes the attribute information of region S_n, e.g. its coding-mode information (S63). Here, if the mode is the inter mode (inter-frame coding), i.e. the mode in which the prediction-error signal is encoded (S64), the motion-information decoding unit 47 decodes the motion parameters 48 (S65). The motion parameters 48 are passed to the motion compensation unit 49, which computes from them the memory address of the predicted image within the reference image stored in the external memory 52 and fetches the predicted image 50 from the external memory 52 (S66). The image data of region S_n is then decoded in the image-data decoding unit 46 (S67). In the inter mode, the final reproduced image of region S_n is obtained by adding the decoded image data to the predicted image 50.
In the intra mode (intra-frame coding), on the other hand, the decoded image data itself becomes the final reproduced image 53 of region S_n. Since the reproduced image of S_n is later used as a reference image for generating predicted images, it is written to the external memory 52. These decisions and the restoration of the reproduced image are carried out in the image restoration unit 51 (S68).
The series of processing ends at the moment all regions contained in the picture frame etc. have been handled. The same processing can then be applied to subsequent picture frames etc.
According to the moving picture encoding method of the present invention, not only region splitting but also region merging is performed, so coding that adapts flexibly to the picture structure is achieved.
When a split criterion tied to a comparison of the coding quality with and without splitting a region is used, the required splits are performed reliably in the direction of better coding.
When a merge criterion tied to a comparison of the coding quality with and without merging regions is used, the required merges are performed reliably in the direction of better coding.
The moving picture encoding device of the present invention, meanwhile, comprises a region splitting unit and an encoding unit, the region splitting unit comprising a splitting unit and a merging unit. As a result, since merging is performed in addition to splitting, coding that adapts flexibly to the picture structure is achieved.
When the merging unit includes a provisional encoding unit, a decoding unit, a coding-distortion calculation unit, and an evaluation-value calculation unit, the coding distortion can be minimized under a given code-amount constraint.
When the splitting unit uses prediction-error power as the activity, regions with large prediction error, generally regions with large motion, can be split further.
When the splitting unit uses the edge strength of each region's original signal as the activity, region shapes matching the edge structure of the image are obtained, and regions strongly affecting subjective picture quality, such as the outline of a subject, can be split further.
When the splitting unit uses as the activity a linear sum of several numerical values representing image characteristics, regions can be split further according to several viewpoints or standards.
Furthermore, if those numerical values include the code amount of each region's motion parameters for motion-compensated prediction and the prediction-error power, regions can be split so that the total coding cost, comprising the information content of the image-motion complexity and that of the motion parameters, is reduced; for equal distortion, coding with less information is then possible.
Also, if those numerical values include the code amount of each region's motion parameters, the prediction-error power of motion compensation, and the variance, edge strength, and motion-parameter magnitude of each region's original signal, a region shape that is optimal under all of these standards combined is obtained.
When the splitting unit includes a grade recognition unit, splitting that takes the importance of each region into account becomes easy.
When the grade recognition unit attends to subject structure spanning several regions, region splitting suited to the shape of the subject is easily obtained.
Also, if the subject structure is judged from the variance of a region's original signal, its edge strength, and the degree of edge connection with adjacent regions, region shapes adapted to the complexity of the original signal and the subject structure are obtained. In particular, regions related to the contour structure of a subject can be split finely.
When the grade recognition unit attends to image feature values, regions can be split finely according to the particular subject, e.g. a human face. Region shapes corresponding to specific patterns, such as the degree of human attention to a pattern, and to the importance of image regions are thereby obtained.
In this case, if the grade of each region is determined from the degree of agreement between feature values held for subject images and the feature values of the actual image, the subject recognition rate improves and more reliable region splitting is achieved.
When the splitting unit includes a provisional encoding unit, a decoding unit, a coding-distortion calculation unit, and an evaluation-value calculation unit, the coding distortion can be minimized under a given code-amount constraint.
When the evaluation-value calculation unit computes the evaluation value while varying the quantization parameter, the quantization parameter used in region coding and the region shape can be optimized simultaneously, improving coding efficiency.
When an evaluation-value calculation unit is provided before the provisional encoding unit that computes as its evaluation value a linear sum of the code amount of each region's motion parameters for motion-compensated prediction and the prediction-error power, the region partition can be optimized while motion parameters minimizing the coding cost are selected; a region partition reducing the total coding cost, including the quantization parameter, can thus be made optimal.
The moving image decoding device of the present invention, on the other hand, comprises a region-shape decoding unit and an image-data decoding unit, so it can handle the variously shaped regions generated by the encoding device. Combination with the moving picture encoding device of the present invention is therefore easy.
When the region-shape information includes information on the split and merge history followed during encoding, the region shapes can be restored from a small amount of information.

Claims (1)

1. A moving image decoding device which, on input of coded data, decodes motion-compensated predictive coding, characterized by comprising a region-shape decoding unit, a motion-information decoding unit, and a motion compensation unit, wherein the region-shape decoding unit decodes from the coded data region-shape information representing regions to which a common motion vector has been assigned, and obtains from this region-shape information the vertex coordinates of said regions, which are the units of motion-compensated prediction, thereby determining the shape and position of each unit region; the motion-information decoding unit decodes the motion parameters corresponding to the unit regions whose shape and position were determined by the region-shape decoding unit; and the motion compensation unit performs motion-compensated prediction on the unit region corresponding to each motion parameter decoded by the motion-information decoding unit, and generates a predicted image from a reference image.
CNB2003101161093A 1997-04-24 1998-02-26 Decoding method for encoding image data Expired - Lifetime CN1324533C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP10707297 1997-04-24
JP107072/1997 1997-04-24
JP107072/97 1997-04-24

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNB981052363A Division CN1153175C (en) 1997-04-24 1998-02-26 Method and apparatus for region-based moving image encoding

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN2006101002103A Division CN1968414B (en) 1997-04-24 1998-02-26 Method for decoding coded image data

Publications (2)

Publication Number Publication Date
CN1523538A CN1523538A (en) 2004-08-25
CN1324533C true CN1324533C (en) 2007-07-04

Family

ID=14449781

Family Applications (3)

Application Number Title Priority Date Filing Date
CNB981052363A Expired - Lifetime CN1153175C (en) 1997-04-24 1998-02-26 Method and apparatus for region-based moving image encoding
CN2006101002103A Expired - Lifetime CN1968414B (en) 1997-04-24 1998-02-26 Method for decoding coded image data
CNB2003101161093A Expired - Lifetime CN1324533C (en) 1997-04-24 1998-02-26 Decoding method for encoding image data

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CNB981052363A Expired - Lifetime CN1153175C (en) 1997-04-24 1998-02-26 Method and apparatus for region-based moving image encoding
CN2006101002103A Expired - Lifetime CN1968414B (en) 1997-04-24 1998-02-26 Method for decoding coded image data

Country Status (4)

Country Link
KR (1) KR100257175B1 (en)
CN (3) CN1153175C (en)
HK (1) HK1105262A1 (en)
TW (1) TW388843B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7006097B2 (en) 2000-11-23 2006-02-28 Samsung Electronic Co., Ltd. Method and apparatus for compression and reconstruction of animation path using linear approximation
KR101455578B1 (en) * 2005-09-26 2014-10-29 미쓰비시덴키 가부시키가이샤 Dynamic image encoding device and dynamic image decoding device
EP2395755A4 (en) 2009-02-09 2015-01-07 Samsung Electronics Co Ltd Video encoding method and apparatus using low-complexity frequency transformation, and video decoding method and apparatus
US8494290B2 (en) * 2011-05-05 2013-07-23 Mitsubishi Electric Research Laboratories, Inc. Method for coding pictures using hierarchical transform units
CN102843556B (en) * 2011-06-20 2015-04-15 富士通株式会社 Video coding method and video coding system
WO2014155471A1 (en) * 2013-03-25 2014-10-02 日立マクセル株式会社 Coding method and coding device
CN113314063B (en) * 2021-05-31 2023-08-08 北京京东方光电科技有限公司 Display panel driving method and device and display equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5453787A (en) * 1993-12-10 1995-09-26 International Business Machines Corporation Variable spatial frequency chrominance encoding in software motion video compression
JPH0879753A (en) * 1994-09-05 1996-03-22 Sharp Corp Moving image encoding device and moving image decoding device
JPH08223581A (en) * 1995-02-15 1996-08-30 Nec Corp Moving image coding and decoding system
US5594504A (en) * 1994-07-06 1997-01-14 Lucent Technologies Inc. Predictive video coding using a motion vector updating routine
JPH09102953A (en) * 1995-10-04 1997-04-15 Matsushita Electric Ind Co Ltd Method, device for encoding digital image and decoding device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR0159559B1 (en) * 1994-10-31 1999-01-15 배순훈 Adaptive postprocessing method of a digital image data
KR100249028B1 (en) * 1995-03-20 2000-03-15 전주범 Apparatus for effectively encoding/decoding video signals having stationary object

Also Published As

Publication number Publication date
CN1968414A (en) 2007-05-23
CN1153175C (en) 2004-06-09
CN1197250A (en) 1998-10-28
CN1523538A (en) 2004-08-25
HK1105262A1 (en) 2008-02-06
KR19980079547A (en) 1998-11-25
KR100257175B1 (en) 2000-05-15
CN1968414B (en) 2010-12-22
TW388843B (en) 2000-05-01

Similar Documents

Publication Publication Date Title
CN1220391C (en) Image encoder, image decoder, image encoding method, and image decoding method
CN1257614C (en) Signal encoding method and apparatus and decording method and apparatus
CN1193620C (en) Motion estimation method and system for video coder
CN1280709C (en) Parameterization for fading compensation
CN103024383B (en) A kind of based on lossless compression-encoding method in the frame of HEVC framework
CN1183488C (en) Image coded data re-encoding apparatus
CN1285216C (en) Image encoding method, image decoding method, image encoder, image decode, program, computer data signal, and image transmission system
CN1274158C (en) Method for encoding and decoding video information, motion compensated video encoder and corresponding decoder
CN1155259C (en) Bit rate variable coding device and method, coding program recording medium
CN1581982A (en) Pattern analysis-based motion vector compensation apparatus and method
CN1926875A (en) Motion compensation method
CN1658673A (en) Video compression coding-decoding method
US20070009046A1 (en) Method and apparatus for region-based moving image encoding and decoding
CN1535024A (en) Video encoding device, method and program and video decoding device, method and program
CN1835594A (en) Motion vector detection method, motion vector detection apparatus, computer program for executing motion vector detection process on computer
CN1625902A (en) Moving picture coding method and decoding method, and apparatus and program using the same
CN1495674A (en) Interpolation device for motion vector compensation and method
CN1288914C (en) Image coding and decoding method, corresponding devices and application
CN1148069C (en) Motion vector detecting method and device
CN1843039A (en) System and method for encoding and decoding enhancement layer data using descriptive model parameters
CN1358028A (en) Image codec method, image coder and image decoder
CN1495603A (en) Computer reading medium using operation instruction to code
CN1324533C (en) Decoding method for encoding image data
CN1245028C (en) Non-uniform multilayer hexaploid lattice full pixel kinematic search method
CN1149850C (en) Method and device for coding moving image and meidum for recording corresponding program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20070704