US20020021234A1 - Encoding, decoding, and probability estimation method - Google Patents

Encoding, decoding, and probability estimation method

Info

Publication number
US20020021234A1
Authority
US
United States
Prior art keywords
probability
values
threshold
symbols
symbol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/275,006
Other versions
US6411231B1 (en)
Inventor
Taichi Yanagiya
Tomohiro Kimura
Ikuro Ueno
Masayuki Yoshida
Fumitaka Ono
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Assigned to MITSUBISHI DENKI KABUSHIKI KAISHA reassignment MITSUBISHI DENKI KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ONO, FUMITAKA, UENO, IKURO, YOSHIDA, MASAYUKI, YANAGIYA, TAICHI, KIMURA, TOMOHIRO
Publication of US20020021234A1
Application granted
Publication of US6411231B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4006Conversion to or from arithmetic code

Definitions

  • in a further embodiment, a method is described for determining one of the coding parameters, a probability representation value, from the two neighboring thresholds for probability values located at the edges of a probability interval.
  • the probability representation values are determined from the viewpoint of coding efficiency.
  • coding efficiency for information sources having occurrence probabilities around the center of a probability interval is higher than when the probabilities are close to the thresholds for probability values. That is, the larger the difference from the center of a probability interval, the lower the coding efficiency.
  • the coding efficiency of arithmetic coding that does not use multiplication is lower than that of arithmetic coding using multiplication, because a fixed-size area is assigned to a symbol to be coded regardless of the current interval width, which ranges from 0.5 to 1.0 (only the initial width is 1.0). However, it allows a practical and simple implementation. In this case, at least the probability representation values should be corrected. When the occurrence probability of an information source is given, the coding efficiency can be measured by the code length alone, since the other factor, the entropy, depends only on the occurrence probability.
  • FIG. 10 illustrates an example of a procedure for calculating the correcting values for the probability representation value.
  • thresholds for probability values are set (Step 301 ).
  • a tentative probability representation value is set (Step 302 ). For instance, the initial value can be set to the center of two neighboring thresholds for probability values.
  • code lengths for two information sources with occurrence probabilities at the thresholds for probability values are calculated (Step 303 ), and the magnitude of the error between the two code lengths is compared with a predefined error tolerance e (Step 304 ). If the error magnitude is larger than e (NO), the tentative probability representation value is set again. As the tentative probability representation value changes, the code lengths for the two information sources also change.
  • Step 302, in which the tentative probability representation value is set, and Steps 303 and 304 are iterated; if the error magnitude is smaller than e (YES), the probability representation value is set to the tentative value, which is expected to provide the best coding efficiency (Step 305).
  • the peak of the coding efficiency curve is another candidate for the value.
  • in this way, the probability representation values for all probability intervals can be determined, as sketched below.
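  • As a rough illustration of this correction procedure, the sketch below bisects on the tentative value until the relative code lengths at the two thresholds agree. It is a hypothetical sketch under the assumption that coding a source of LPS probability p with representation value a costs the cross entropy −p·log2(a) − (1−p)·log2(1−a) per symbol; the patent's fixed-area coder would use its own code-length measure in Step 303.

```python
import math

def relative_code_length(p, a):
    """Code length divided by entropy for a source with LPS probability
    p coded as if the LPS probability were a (a cross-entropy model,
    which is an assumption of this sketch)."""
    length = -(p * math.log2(a) + (1 - p) * math.log2(1 - a))
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return length / entropy

def correct_representation_value(t_low, t_high, e=1e-9):
    """Find the value in (t_low, t_high) at which both edges of the
    probability interval suffer the same efficiency loss."""
    lo, hi = t_low, t_high
    a = 0.5 * (t_low + t_high)                     # Step 302: tentative value
    while True:
        err = (relative_code_length(t_high, a)
               - relative_code_length(t_low, a))   # Step 303: two code lengths
        if abs(err) <= e:                          # Step 304: within tolerance?
            return a                               # Step 305: adopt the value
        if err > 0:
            lo = a   # the upper edge is worse: raise the tentative value
        else:
            hi = a   # the lower edge is worse: lower the tentative value
        a = 0.5 * (lo + hi)

# For the interval between T1 = 1/8 and T0 = 1/4 of Table 1, the
# corrected value comes out slightly below the arithmetic center.
print(correct_representation_value(0.125, 0.25))
```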
  • the embodiments described include coding methods in which plural probability representation values are prepared in advance, and one of the probability representation values is selected according to the accumulated occurrence counts as an estimated occurrence probability of the symbol to be coded.
  • the thresholds for probability values, i.e., the boundaries for the selection of a probability representation value, are set to values that can be examined with a small operational load.
  • the probability representation value corresponding to a probability interval determined by the thresholds is selected by shifting, subtraction, and comparison with the accumulated occurrence counts of symbols, and occurrence probabilities of symbols are estimated according to the accumulated occurrence counts.
  • a fast search of the probability representation value, with no division and a smaller operational load, is thus made possible.
  • each probability interval can be regarded as a state.
  • a probability representation value is determined by selecting the corresponding probability interval.
  • an index referring to a probability interval, regarded as a state, may be selected by a fast search using the occurrence probability, with no division and a smaller operational load, and the probability representation value is then obtained from the index.
  • the index obtained can be easily used not only for selection of a probability representation value, but also for selecting other parameters.
  • the invention reduces the operational load for coding: multiplication is reduced where state transitions are used, and division is reduced where accumulated occurrence counts are used. The invention also increases probability estimation fidelity and coding efficiency compared with state-transition methods, because of improved adaptability to a change of the occurrence probability.
  • the probability estimation performance is thereby more independent and improved. In the conventional method, learning in the probability estimation means is synchronized with renormalization in the entropy coding means, which is fundamentally independent of the probability estimation means; the adaptability to a change in the information source therefore depends on chance.
  • the probability representation values can be brought closer to the optimum values by setting them so that the coding efficiencies at the two edges of a probability interval, which are the worst in the interval, are equal.
  • An encoding apparatus and a decoding apparatus according to the invention can be implemented either separately or together. Any medium, such as wireless transmission, wire transmission, or storage devices using discs, tape, semiconductor memories, and the like, can be used as a medium over which code data output from the encoding apparatus are transmitted to the decoding apparatus, or in which the code data are stored.
  • the transmission or storage can be realized electrically, magnetically, or optically.
  • the probability estimation is used in an encoder structure and a decoder structure.
  • the probability estimation method is used in a calculation means instead of an encoder structure or a decoder structure.
  • the calculation means is connected to the probability estimation structure that determines the probability representation value.
  • the calculation means receives the probability representation value and a binary symbol, makes further calculations based on the probability representation value, and outputs a result sent to another structure.
  • the calculation means can send the calculated result to another structure very quickly because the estimation method is very fast.

Abstract

In an adaptive probability estimation method, an index referring to coding parameters is determined according to occurrence probabilities of symbols estimated from occurrence counts of the symbols. Thresholds for probability values, which determine the probability intervals corresponding to the indexes, are set to values that can be examined with a small operational load, and an index referring to the corresponding occurrence probability is selected without division, using the probability intervals determined by the thresholds for probability values.

Description

    FIELD OF THE INVENTION
  • This invention relates to an encoder and a decoder for data signals and, more particularly, to entropy encoding and decoding. [0001]
  • BACKGROUND
  • Adaptive encoding is a known method of efficiently encoding data signals. An adaptive encoding device encodes a data signal while learning the occurrence probability of the object of encoding or decoding. Because the probability estimate tracks the source, adaptive coding avoids the decrease in coding efficiency that a fixed, mismatched probability would cause. [0002]
  • An adaptive encoding and decoding device is described in five articles concerning the “Q-coder adaptive binary arithmetic coder” appearing in the IBM Journal of Research and Development, Vol. 32, No. 6, November 1988, pp. 717-774. In addition, the principle of an arithmetic coder and decoder having an entropy encoding and decoding means is described in U.S. Pat. No. 5,059,976. FIG. 1 of that patent, reproduced here as FIG. 12, illustrates an example in which the binary symbol sequence 001 (sequence length 3) is encoded by an arithmetic coder. That encoding is described in the following paragraphs. [0003]
  • In coding a Markov information source, a number line representation coding system is used. In that system a sequence of symbols is mapped onto the number line from 0.0 to 1.0, and its coordinates are coded as code words which are, for example, represented in a binary expression. FIG. 12 is a conceptual diagram of the number line representation system. For simplicity, a bi-level memoryless information source is shown. The occurrence probability for “1” is set at r and the occurrence probability for “0” is set at 1−r. When the output sequence length is set at 3, the coordinate of each of the rightmost C(000) to C(111), represented as a binary expression, is truncated at the digit that allows distinction from the others, and is defined as the respective code word. Decoding is possible at the receiving side by performing the same procedure as at the transmission side. [0004]
  • In such a sequence, the mapping interval A_i and the lower-end coordinate C_i of the symbol sequence at time i are given as follows: [0005]
  • When the output symbol a_i is 0 (More Probable Symbol: hereinafter called MPS), [0006]
  • A_i = (1 − r) A_{i−1}
  • C_i = C_{i−1}
  • When the output symbol a_i is 1 (Less Probable Symbol: hereinafter called LPS), [0007]
  • A_i = r A_{i−1}
  • C_i = C_{i−1} + (1 − r) A_{i−1}
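  • As a concrete illustration, the recursion above can be traced in a few lines of code. The sketch below is a hypothetical illustration (not from the patent): it encodes the example sequence 001 of FIG. 12 with exact multiplications, returning the lower-end coordinate C and width A whose interval identifies the sequence.

```python
# Minimal sketch of number-line (arithmetic) coding for a bi-level
# memoryless source, following the recursion for A_i and C_i above.
# The LPS ("1") probability r is an assumed example value.

def encode(symbols, r):
    """Return (c, a): lower-end coordinate and interval width."""
    a, c = 1.0, 0.0
    for s in symbols:
        if s == 0:                 # MPS: keep the lower subinterval
            a = (1.0 - r) * a      # A_i = (1 - r) * A_{i-1}
        else:                      # LPS: move to the upper subinterval
            c = c + (1.0 - r) * a  # C_i = C_{i-1} + (1 - r) * A_{i-1}
            a = r * a              # A_i = r * A_{i-1}
    return c, a

c, a = encode([0, 0, 1], r=0.25)
# Any binary fraction in [c, c + a), truncated at the first digit that
# distinguishes it from the neighboring intervals, can serve as the
# code word for the sequence 001.
print(c, a)  # 0.421875 0.140625
```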
  • As described in “An overview of the basic principles of the Q-Coder adaptive binary arithmetic coder”, IBM Journal of Research and Development, Vol. 32, No. 6, November 1988, pp. 717-736, in order to reduce the number of calculations, such as multiplications, a set of fixed values is prepared and a certain value is selected from among them, so that r A_{i−1} need not actually be calculated. [0008]
  • That is, if r A_{i−1} in the foregoing expressions is set at S, [0009]
  • when a_i = 0, [0010]
  • A_i = A_{i−1} − S
  • C_i = C_{i−1}
  • when a_i = 1, [0011]
  • A_i = S
  • C_i = C_{i−1} + (A_{i−1} − S)
  • However, as A_{i−1} becomes successively smaller, S also needs to become smaller in this instance. To maintain calculation accuracy, it is necessary to multiply A_{i−1} by a power of two (hereinafter called normalization). In an actual code word, the fixed value is assumed to be the same at all times and is multiplied by powers of ½ at the time of calculation (namely, shifted by a bit). [0012]
  • If a constant value is used for S, as described above, a problem arises when, in particular, S is large and a normalized A_{i−1} is relatively small. An example follows. [0013]
  • If A_{i−1} is slightly over 0.5, A_i is very small when a_i is an MPS, and is even smaller than the area given when a_i is an LPS. That is, in spite of the fact that the occurrence probability of the MPS is high, the area allocated to the MPS is smaller than that allocated to the LPS, leading to a decrease in coding efficiency. If the area allocated to the MPS is always to be larger than that allocated to the LPS, then since A_{i−1} > 0.5, S must be 0.25 or smaller. Therefore, when A_{i−1} is 1.0, r = 0.25, and when A_{i−1} is close to 0.5, r = 0.5, with the result that the occurrence probability of the LPS is effectively considered to vary between ¼ and ½ during coding. If this variation can be made smaller, an area proportional to an occurrence probability can be allocated and an improvement in coding efficiency can be expected. [0014]
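  • A minimal sketch of this multiplication-free variant follows (a hypothetical illustration under the assumptions stated in the comments, not the patent's implementation). The LPS is assigned the fixed width S in place of r·A_{i−1}, and A is doubled (a one-bit left shift in fixed-point hardware) whenever it falls below 0.5, which is the normalization described above.

```python
# Sketch of the multiplication-free interval update with a fixed LPS
# width S. Floats are used for clarity; a real coder keeps A and C in
# fixed-point registers and emits code bits on every renormalization
# shift (carry propagation is omitted in this sketch).

S = 0.25  # fixed LPS width; S <= 0.25 keeps the MPS area (A - S)
          # larger than the LPS area S whenever A > 0.5

def step(a, c, is_lps):
    if is_lps:
        c = c + (a - S)   # C_i = C_{i-1} + (A_{i-1} - S)
        a = S             # A_i = S
    else:
        a = a - S         # A_i = A_{i-1} - S; C is unchanged
    while a < 0.5:        # normalization: double A until A >= 0.5
        a *= 2.0          # (a left shift in a fixed-point register)
        c *= 2.0          # (code bits would be emitted from C here)
    return a, c
```

  • Because A stays in the range 0.5 to 1.0, the effective LPS probability S/A varies between ¼ and ½, which is exactly the variation the passage above describes.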
  • U.S. Pat. No. 5,025,258 describes a method of calculating an occurrence probability (Qe) based on the number of occurrences. In order to estimate the Qe of symbol 1, U.S. Pat. No. 5,059,976 uses learning in the probability estimation means, synchronized with renormalization in the entropy coding means, which is fundamentally independent of the probability estimation means. That is, the adaptability to a change of the information source depends on chance, as indicated in FIG. 13. [0015]
  • Arithmetic coding and decoding are described in the following references: [0016]
  • (1) Langdon et al., “Compression of Black-White Images with Arithmetic Coding”, IEEE Transactions on Communications, Vol. COM-29, No. 6, June 1981, pp. 858-867, [0017]
  • (2) U.S. Pat. No. 4,633,490, [0018]
  • (3) Witten et al., “Arithmetic coding for Data Compression”, Communications of the ACM, Vol. 30, No. 6, June 1987, pp. 520-540. [0019]
  • FIG. 11 is a block diagram of an adaptive encoding device and an adaptive decoding device. In FIG. 11, a probability estimation means 30 estimates an occurrence probability of a data value for encoding, and produces a predicted value as the data value with a high occurrence probability. When a multi-value data (not binary data) signal is input, a modeling means 33 analyzes the input signal and classifies it as to context. In the coding device, the modeling means 33 converts the input multi-value data signal into a binary data signal. [0020]
  • In the decoding device, a modeling means 34 analyzes an output signal and classifies it as to context. When a multi-value data (not binary data) signal is output, the modeling means 34 converts the input binary data signal into a multi-value data signal. [0021]
  • In the coding device, a symbol judgment means 38 converts the input data signal into a binary symbol showing agreement or disagreement with the data value for encoding, based on the binary data and a predicted value received from a part 36 of a memory, as described below. An entropy encoding means 31 encodes the binary symbol output by the symbol judgment means, based on the probability established separately and supplied from the Qe memory 37 described below. [0022]
  • In the decoding device, a data judgment means 39 converts a binary symbol received from an entropy decoding means 32 into binary data, based on the binary symbol and a predicted value received from a part 36 of a memory in the decoding device. The entropy decoding is based on the probability separately established and stored in the Qe memory in the decoding device. [0023]
  • The structure of FIG. 11 has separate modeling means 33 and 34 in the encoding and decoding devices. These modeling means may include the generally known probability estimation means 30, including data and symbol conversion and inversion functions. In the described structure, no conversion and inversion functions in the modeling means 33 and 34 are needed if the modeling means receive binary signals. [0024]
  • A state number is stored into a part 35 of a memory as an index for selecting an estimation probability value (MPS or LPS) for the Qe memory 37. An arithmetic coder and an arithmetic decoder are included in the entropy encoding means 31 and the entropy decoding means 32, respectively. In the encoding device of FIG. 11, the state number memory part 35 receives the context from the modeling means 33. The memory part 36 stores a predicted value, based on the context and state number. The Qe memory 37 supplies a probability representation value (MPS or LPS). The symbol judgment means 38 produces a binary symbol to establish agreement or disagreement of the data value for encoding, based on the binary data and the predicted value. The probability representation value (LPS or MPS) and the binary symbol are sent to the entropy encoding means 31, and the entropy encoding means 31 produces a code in response. [0025]
  • For decoding, the entropy encoding means 31 sends the code to an entropy decoding means 32. The entropy decoding means 32 receives a probability representation value (LPS or MPS) from the Qe memory 37 and the input code. The entropy decoding means 32 produces a binary symbol. A data judgment means 39 receives a predicted value from a part 36 of a memory and the binary symbol from the entropy decoding means 32, and detects binary data based on the binary symbol and the predicted value. [0026]
  • The modeling means 34 receives binary data from the data judgment means 39 and detects a data signal based on the binary data. Moreover, the modeling means 34 converts a multi-value data signal into a binary data signal. [0027]
  • When a multi-value data signal is output, the output data signal is analyzed and classified as to context, and a multi-value data signal is output. The modeling means 34 converts the binary data signal to decode it. The memory including the state number part 35 and the predicted value memory part 36, and the Qe memory 37, of the encoding device are the same as on the decoding side. Moreover, the memory including parts 35 and 36, the Qe memory 37, the symbol judgment means 38, and the data judgment means 39 are described in a figure and a flow chart in the articles first mentioned and in ITU-T Recommendation T.82, “Information Technology - Coded Representation of Picture and Audio Information - Progressive Bi-level Image Compression”, pp. 23-45, 1993. [0028]
  • Conventional adaptive probability estimation methods using state transitions have a problem in that the probability estimation performance is not sufficient because learning in the probability estimation means is synchronized with renormalization in the entropy coding means. The entropy coding means is fundamentally independent of the probability estimation means, so the adaptability to a change in the information source depends on chance. [0029]
  • Conventional adaptive probability estimation methods that estimate occurrence probabilities from occurrence counts of data or symbols have a problem in that division, to calculate a probability, and multiplication, to subdivide the numeric line in arithmetic coding, are necessary and cause a heavy operational load. [0030]
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to provide an encoding method and a decoding method that determine an index to select an appropriate coding parameter according to an occurrence probability, while producing a smaller operational load. [0031]
  • According to one aspect of the invention, a coding method comprises determining a symbol from an input data signal, setting a threshold for probability values that determine a probability interval corresponding to an index based on an occurrence probability of symbols estimated from occurrence counts of the symbols, determining a probability representation value as a coding parameter using the probability interval of the threshold, and coding the symbol determined from the input data signal, based on the probability representation value. [0032]
  • According to another aspect of the invention, a decoding method comprises setting a threshold for probability values that determine a probability interval corresponding to an index based on occurrence probabilities of symbols estimated from occurrence counts of the symbols, determining a probability representation value as a coding parameter using the probability interval of the threshold, decoding an input code based on the probability representation value and outputting a symbol, and determining output data based on the symbol output in decoding of the input code. [0033]
  • According to yet another aspect of the invention, a probability estimation method includes determining a symbol based on an input data signal, setting a threshold for probability values that determine a probability interval corresponding to an index based on occurrence probabilities of symbols estimated from occurrence counts of the symbols, determining the probability representation value for a calculation parameter using the probability interval of the threshold, and outputting the probability representation value.[0034]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The objects and novel features of the invention will more fully appear from the following detailed description when the same is read in connection with the accompanying drawing figures. It is to be expressly understood, however, that the drawing is for purpose of illustration only and is not intended as a definition of the limits of the invention. [0035]
  • FIG. 1 is a block diagram of an encoding method according to the present invention; [0036]
  • FIG. 2 is a block diagram of a symbol counting means according to the present invention; [0037]
  • FIG. 3 is a flow chart of an encoding process according to the present invention; [0038]
  • FIG. 4 shows a threshold for probability values and probability representation values for selecting a probability representation value according to an embodiment of the present invention; [0039]
  • FIG. 5 is a flow chart of selecting a probability representation value according to an embodiment of the present invention; [0040]
  • FIG. 6 shows a threshold for probability values and probability representation values for selecting a probability representation value according to an embodiment of the present invention; [0041]
  • FIG. 7 is a flow chart of selecting a probability representation value according to an embodiment of the present invention; [0042]
  • FIG. 8 is a block diagram of a decoding method according to the present invention; [0043]
  • FIG. 9 is a flow chart of a decoding method according to the present invention; [0044]
  • FIG. 10 is a flow chart of a process for correcting the probability representation value of a probability interval according to the invention; [0045]
  • FIG. 11 is a block diagram of coding and decoding methods; [0046]
  • FIG. 12 is a diagram illustrating the concept of arithmetic entropy encoding and decoding; and [0047]
  • FIG. 13 is a diagram illustrating probability estimation in encoding and decoding.[0048]
  • DETAILED DESCRIPTION
  • Embodiment 1. [0049]
  • In the invention, adaptive probability estimation is applied to arithmetic coding. FIG. 1 is a block diagram showing the structure of a coding apparatus using arithmetic coding according to the invention. Coding of binary information sources is described for ease of understanding. As in the conventional examples, in arithmetic coding, binary sources are encoded by comparing each symbol to be coded with a predicted symbol and determining whether it is the symbol more likely to occur than the other (MPS: More Probable Symbol) or the symbol less likely to occur than the other (LPS: Less Probable Symbol). The symbol judgment means 10 determines whether an input symbol is an MPS or an LPS. The symbol counting means 11 counts the occurrences of the LPS and the occurrences of both binary symbols, in addition to storing a predicted value. The probability estimation means 12 estimates an LPS occurrence probability according to the accumulated counts of LPS occurrences and of both symbols. The coding means 13 arithmetically codes input sequences of symbols and outputs coded data. In this coding process, an operation of subdividing the numeric line recursively, according to the LPS occurrence probability, and selecting the divided interval that corresponds to the symbol to be coded, is iterated. The symbol counting means 11 can be decomposed into the elements shown in FIG. 2. [0050]
  • In FIG. 2, the total symbol counting means 14 counts occurrences of both binary symbols, i.e., the total occurrence count. The LPS judgment means 15 determines whether the input symbol is an LPS or an MPS. If it is an LPS, the occurrence is counted by the LPS counting means 16. The predicted value memory 17 stores the predicted value and, when the LPS occurrence count exceeds half of the total occurrence count, the predicted value is reversed and the LPS occurrence count and the MPS occurrence count (the total occurrence count minus the LPS occurrence count) are exchanged. [0051]
  • Based on the structure described above, an arithmetic coding method using adaptive probability estimation, according to the invention, is illustrated in FIG. 3. In this method, whether the input symbol to be coded is an MPS or an LPS is determined at the symbol judgment means 10 by referring to the predicted value stored in the predicted value memory (Step 102). Then, one of the plural probability representation values prepared in advance is selected by the probability estimation method (explained in detail later) by referring to the LPS occurrence count and the total occurrence count (Steps 103 and 106). The MPS or LPS determined by the symbol judgment means 10 is encoded by the coding means 13, using the probability representation value selected at the probability estimation means (Steps 104 and 107). After the encoding, the LPS occurrence count and the total occurrence count are updated by the symbol counting means 11. When the symbol is determined to be an MPS, the total occurrence count is incremented (Step 105). When the symbol is determined to be an LPS, both the LPS occurrence count and the total occurrence count are incremented (Step 108) and, if the LPS occurrence count exceeds half of the total occurrence count, the predicted value is reversed. At the same time, the LPS occurrence count and the MPS occurrence count are exchanged (Step 111), since the LPS occurrence count is then larger than the MPS occurrence count (the total occurrence count minus the LPS occurrence count). This exchange keeps the LPS occurrence count smaller than half of the total occurrence count. [0052]
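  • The per-symbol flow of Steps 102 through 111 can be summarized in a short sketch (hypothetical; the class and method names are illustrative, the arithmetic coder is abstracted behind coder.encode_symbol, and select_representation_value is the threshold search of FIG. 5, sketched after Table 1 below):

```python
# Sketch of the encoder-side update of FIG. 3 (Steps 102-111).

class SymbolCounter:
    def __init__(self):
        self.lps_count = 1     # assumed nonzero initial counts
        self.total_count = 2
        self.predicted = 0     # predicted (MPS) data value

    def code_one(self, data, coder):
        is_lps = (data != self.predicted)            # Step 102
        a = select_representation_value(             # Steps 103/106
                self.lps_count, self.total_count)
        coder.encode_symbol(is_lps, a)               # Steps 104/107
        self.total_count += 1                        # Steps 105/108
        if is_lps:
            self.lps_count += 1                      # Step 108
            if 2 * self.lps_count > self.total_count:
                # The LPS has become the more probable symbol:
                # reverse the prediction and exchange the counts
                # (Step 111), so the LPS count stays below half.
                self.predicted ^= 1
                self.lps_count = self.total_count - self.lps_count
```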
  • The procedure for selecting a probability representation value is now described. FIG. 4 shows thresholds for probability values (T0, T1, T2) and probability representation values (A0, A1, A2, A3) which belong to the corresponding probability intervals determined by the thresholds. Here, the thresholds are set to powers of two as an example. Each probability representation value can be set to an arbitrary value between two neighboring thresholds. In this example, each probability representation value is set at the center of the two neighboring thresholds located at the edges of its probability interval. Table 1 shows the thresholds for probability values with the three least significant bits in their binary representation. [0053]
    TABLE 1
    Threshold for         Binary           Bit Arrangement
    Probability Values    Representation   b0   b1   b2
    T0 = 1/4              0.0100           1    0    0
    T1 = 1/8              0.0010           0    1    0
    T2 = 1/16             0.0001           0    0    1
  • Since the thresholds for probability values are set to powers of two, whether an LPS probability is larger than a threshold probability value can be determined by comparing the LPS occurrence count, shifted by some bits, with the total occurrence count. [0054]
  • FIG. 5 shows a procedure for estimating the probability. First, the LPS occurrence count and the total occurrence count are stored in the register L and the register N, respectively (Step 113). Then, a comparison with the threshold for probability values T0 is made; that is, the register L shifted to the left by two bits (Step 114) is compared with the register N (Step 115). [0055]
  • If L is greater than or equal to N (the LPS occurrence probability ≥ T0), the probability interval is determined, which means A0 will be selected as the probability representation value (Step 116). If L is less than N, a comparison with the threshold T1 is made; that is, the register L, shifted to the left by one more bit (Step 117), is compared with the register N (Step 118). If L is greater than or equal to N (the LPS occurrence probability ≥ T1), the probability interval is determined, which means A1 will be selected (Step 119). If L is less than N, the comparison with the threshold T2 proceeds in the same way as the comparison with T1 (Steps 120 and 121). If L is greater than or equal to N (the LPS occurrence probability ≥ T2), the probability interval is determined, which means A2 will be selected (Step 122). If L is less than N (the LPS occurrence probability < T2), the probability interval is determined, which means A3 will be selected (Step 123). Thus, by setting the thresholds for probability values to powers of two, a fast search of the probability representation value corresponding to a probability interval determined by the thresholds, with no division and a smaller operational load, is made possible. [0056]
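  • In code, the procedure of FIG. 5 reduces to three shift-and-compare steps. The sketch below is hypothetical; the values A0 through A3 are taken at the interval centers of FIG. 4, and the lower edge of A3's interval is assumed to be 0.

```python
# Representation values at the interval centers (an assumption; any
# value inside each probability interval would do).
A0, A1, A2, A3 = 3/8, 3/16, 3/32, 1/32

def select_representation_value(lps_count, total_count):
    """FIG. 5: place the LPS probability lps_count/total_count among
    the power-of-two thresholds using only shifts and comparisons."""
    l, n = lps_count, total_count   # Step 113
    l <<= 2                         # Step 114: test against T0 = 1/4
    if l >= n:                      # Step 115: probability >= 1/4
        return A0                   # Step 116
    l <<= 1                         # Step 117: test against T1 = 1/8
    if l >= n:                      # Step 118
        return A1                   # Step 119
    l <<= 1                         # Step 120: test against T2 = 1/16
    if l >= n:                      # Step 121
        return A2                   # Step 122
    return A3                       # Step 123
```

  • For example, under the coarse thresholds of FIG. 4, lps_count = 21 and total_count = 60 (probability 0.35 ≥ 1/4) selects A0, since 21 shifted left by two bits is 84, which is not less than 60.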
  • Embodiment 2. [0057]
  • An explanation of the structure and procedure of the encoding apparatus using this invention with arithmetic coding is omitted, since it is the same as embodiment 1. The procedure for selecting a probability representation value is described below. [0058]
  • FIG. 6 shows thresholds for probability values (T0, T1, . . . , T7) and probability representation values (A0, A1, . . . , A8) which belong to the corresponding probability intervals determined by the thresholds. Here, as an example, the thresholds are set to powers of two or to values obtained by dividing an interval determined by powers of two into two equal parts, recursively. For instance, the interval between ½ and ¼ is divided into four parts (halving the interval twice). The intervals between ¼ and ⅛ and between ⅛ and 1/16 are divided into two parts, due to the restriction of binary digit length in this example. Each probability representation value can be set to an arbitrary value between two neighboring thresholds. [0059]
  • In this example, each probability representation value is set at the center of the two neighboring thresholds located at the edges of its probability interval. Table 2 shows the thresholds for probability values with the four least significant bits in their binary representation. [0060]
    TABLE 2
    Threshold for         Binary           Bit Arrangement
    Probability Values    Representation   b0   b1   b2   b3
    T0 = 7/16             0.01110          1    1    1    0
    T1 = 3/8              0.01100          1    1    0    0
    T2 = 5/16             0.01010          1    0    1    0
    T3 = 1/4              0.01000          1    0    0    0
    T4 = 3/16             0.00110          0    1    1    0
    T5 = 1/8              0.00100          0    1    0    0
    T6 = 3/32             0.00011          0    0    1    1
    T7 = 1/16             0.00010          0    0    1    0
  • Whether an LPS probability is larger than a threshold for probability values can be determined by finding whether each bit of its binary representation, from the most significant to the least significant (in the order b0 to b3), is one or zero, repeatedly. Whether each bit is one or zero can be determined by comparing the total occurrence count with a value obtained by appropriately shifting the LPS occurrence count, or by subtraction (subtracting the total occurrence count from the LPS occurrence count). [0061]
  • FIG. 7 shows the procedure for estimating a probability. First, the LPS occurrence count and the total occurrence count are stored in the register L and the register N, respectively (Step 124). Then, the bit b0 of the LPS occurrence probability is determined by comparing the register L shifted to the left by two bits (Step 125) with the register N (Step 126). If L is greater than or equal to N (the bit b0 of the LPS probability is one: YES at Step 126) and the probability interval is not yet determined (NO at Step 127), the value of the register N is subtracted from the value of the register L (the determined bit b0 of the LPS occurrence probability is cleared to zero: Step 128), and the next bit b1 is determined. If L is less than N (the bit b0 of the LPS probability is zero: NO at Step 126) and the probability interval is not yet determined (NO at Step 129), the next bit b1 is determined. [0062]
  • The procedure for determining the bit b1 of the LPS occurrence probability is as follows. The register L shifted to the left by one bit (Step 131) is compared with the register N (Step 126). Determining whether the bit is one or zero is the same as for the bit b0. The bits b2 and b3 are determined in the same manner as b1. Thus, each bit of the LPS occurrence probability is determined in sequence, and when a probability interval is decided (YES at Step 127 or 129), the iterative judgment is discontinued and the probability representation value corresponding to the probability interval is selected (Step 130). [0063]
  • The following is a practical example in which the LPS occurrence count L = 21 and the total occurrence count N = 60. First, the bit b0 of the LPS occurrence probability turns out to be one, since the comparison between the register L shifted to the left by two bits (L = 84) and the register N shows that L ≥ N. The next bit b1 turns out to be zero, since the comparison between the register L, from which the value of the register N was subtracted (L = 24) and which was then shifted to the left by one bit (L = 48), and the register N shows that L < N. Then, the bit b2 turns out to be one, since the comparison between the register L shifted to the left by one bit (L = 96) and the register N shows that L ≥ N. [0064]
  • After the operations above, the probability interval has been decided; that is, A2 is selected as the probability representation value. Thus, by setting the thresholds for probability values to powers of two, or to values obtained by recursively dividing an interval delimited by powers of two into two equal parts, a fast search, with no division step and a smaller operation load, for the probability representation value corresponding to the probability interval determined by those thresholds is achieved. [0065]
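  • As a concrete illustration, the following C sketch realizes the FIG. 7 search using only shifts, subtractions, and comparisons. The bookkeeping in units of 1/32 and the function names are choices of this sketch rather than the patent's; the thresholds are those of Table 2.

    #include <stdio.h>
    #include <stdint.h>

    /* Table 2 thresholds in units of 1/32, largest (T0) first. */
    static const uint32_t T[8] = { 14, 12, 10, 8, 6, 4, 3, 2 };

    /* Does any threshold lie strictly inside the open interval (lo, hi)? */
    static int undecided(uint32_t lo, uint32_t hi)
    {
        for (int i = 0; i < 8; i++)
            if (T[i] > lo && T[i] < hi)
                return 1;
        return 0;
    }

    /* Locate the probability interval containing p = lps/total using only
       shifts, subtractions, and comparisons.  The return value k means
       T_k <= p < T_(k-1); k = 0 means p >= T0, k = 8 means p < T7. */
    int select_interval(uint32_t lps, uint32_t total)
    {
        uint32_t L = lps << 2;   /* register L aligned so b0 is the 1/4-place bit */
        uint32_t lo = 0, w = 8;  /* decided prefix of p and bit weight, in 1/32 units */

        for (int bit = 0; bit < 4; bit++) {
            if (L >= total) {    /* the current bit of p is one */
                L -= total;      /* remove its contribution, as in Step 128 */
                lo += w;
            }
            if (!undecided(lo, lo + w))
                break;           /* interval pinned down: stop early (Step 130) */
            L <<= 1;             /* move on to the next, lighter bit */
            w >>= 1;
        }
        int k = 0;
        while (k < 8 && T[k] > lo)   /* count thresholds above the decided prefix */
            k++;
        return k;
    }

    int main(void)
    {
        /* Worked example from the text: L = 21, N = 60, p = 0.35. */
        printf("interval index: %d\n", select_interval(21, 60));  /* prints 2 */
        return 0;
    }

For L=21 and N=60 this stops after three bit decisions, exactly as in the example above, and returns index 2, the interval [5/16, 3/8) whose representation value is A2.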
  • Embodiment 3. [0066]
  • Embodiment 3 concerns a decoding apparatus corresponding to the encoding apparatus introduced in embodiment 1. FIG. 8 is a block diagram showing the structure of a decoding apparatus using arithmetic decoding in accordance with the invention. An explanation of the symbol counting means 11 and the probability estimation means 12 is omitted, since they were described in connection with the encoding apparatus of embodiment 1. The decoding means 20 outputs symbols by dividing the number line according to the LPS occurrence probabilities and determining which symbol the interval indicated by the input code data corresponds to. The data judgment means 21 converts a decoded MPS or LPS into binary data by referring to the predicted value, and outputs the binary data. [0067]
  • Based on the structure described, the procedure of arithmetic decoding using the adaptive probability estimation according to this invention is illustrated in FIG. 9. One of the plural probability representation values prepared in advance is selected by the probability estimation means 12 by referring to the LPS occurrence count and the total occurrence count (Step 201). Then, an MPS or LPS is decoded by the decoding means 20, using the probability representation value selected by the probability estimation means (Step 202). If the decoded symbol is an MPS, the predicted value stored in the predicted value memory 17 is output by the data judgment means 21 (Step 204). If the decoded symbol is an LPS, the inverse of the predicted value stored in the predicted value memory 17 is output by the data judgment means 21 (Step 206). After the data is output, the LPS occurrence count and the total occurrence count are updated by the symbol counting means 11. [0068]
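  • A minimal sketch of one iteration of the FIG. 9 procedure, in C, might look as follows. The decoder core decode_is_lps() is hypothetical (its number-line arithmetic is not shown), select_interval() refers to the sketch given earlier, and the context layout and the nine-entry table A0..A8 are likewise assumptions of this sketch.

    #include <stdint.h>

    /* Hypothetical decoder core: returns 1 if the next symbol decoded from
       the code data is an LPS, 0 if it is an MPS, given the LPS probability
       representation value in use. */
    extern int decode_is_lps(double lps_probability);

    /* From the earlier sketch: selects the probability interval index. */
    extern int select_interval(uint32_t lps, uint32_t total);

    struct context {
        uint32_t lps_count;    /* register L: LPS occurrence count   */
        uint32_t total_count;  /* register N: total occurrence count */
        int      predicted;    /* predicted value (the MPS), 0 or 1  */
    };

    /* One iteration of the FIG. 9 procedure: estimate (Step 201), decode
       (Step 202), judge (Steps 204/206), then update the counts. */
    int decode_one_bit(struct context *c, const double rep_value[9])
    {
        int k = select_interval(c->lps_count, c->total_count);
        int is_lps = decode_is_lps(rep_value[k]);
        int out = is_lps ? !c->predicted : c->predicted; /* data judgment means  */
        c->total_count += 1;                             /* symbol counting means */
        if (is_lps)
            c->lps_count += 1;
        return out;
    }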
  • The procedure for selecting probability representation values in this embodiment is the same as in embodiment 1. As in embodiment 1, by setting the thresholds for probability values to powers of two, a fast search, with no division and a smaller operational load, of the probability representation values corresponding to the probability intervals determined by those thresholds is made possible. [0069]
  • Embodiment 4. [0070]
  • Embodiment 4 concerns a decoding apparatus corresponding to the encoding apparatus of embodiment 2. The structure and operating procedure of the decoding apparatus using arithmetic decoding according to this invention are the same as those of embodiment 3, and the procedure for selecting probability representation values is the same as in embodiment 2, so duplicate explanation is omitted. As in embodiment 2, by setting the thresholds for probability values to powers of two, or to values obtained by recursively dividing an interval delimited by powers of two into two equal parts, a fast search, with no division and a smaller operation load, of the probability representation value corresponding to a probability interval determined by those thresholds is made possible. [0071]
  • Embodiment 5. [0072]
  • In this embodiment, a method is described for determining one of the coding parameters, the probability representation value, from the two neighboring thresholds for probability values located at the edges of a probability interval. Here, the probability representation values are determined from the viewpoint of coding efficiency. Generally, coding efficiency for information sources whose occurrence probabilities lie near the center of a probability interval is higher than for probabilities close to the thresholds; that is, the larger the distance from the center of a probability interval, the lower the coding efficiency. The coding efficiency of arithmetic coding that does not use multiplication is lower than that of arithmetic coding using multiplication, because a fixed-length area is assigned to a symbol to be coded regardless of the current length of the area, which ranges from 0.5 to 1.0 (only the initial length is 1.0); however, it allows a practical and simple implementation. In this case, the probability representation values, at least, should be corrected. When an occurrence probability of an information source is assumed, the coding efficiency can be measured by one factor alone, the code length (the numerator), since the other factor, the entropy (the denominator), depends only on the occurrence probability. [0073]
  • FIG. 10 illustrates an example of a procedure for calculating the corrected probability representation values. First, the thresholds for probability values are set (Step 301). Next, a tentative probability representation value is set (Step 302); for instance, the initial value can be the center of the two neighboring thresholds. Then, the code lengths for two information sources with occurrence probabilities at the two thresholds are calculated (Step 303), and the magnitude of the error between the two code lengths is compared with a predefined error tolerance e (Step 304). If the error magnitude is larger than e (NO at Step 304), the tentative probability representation value is set again (Step 305); since the code lengths for the two information sources change as the tentative value changes, Steps 303 through 305 are iterated. When the error magnitude becomes smaller than e (YES at Step 304), the probability representation value is set to the tentative value, which is expected to provide the best coding efficiency; for instance, the top of the coding efficiency curve is a candidate for this value. Using the same technique, the probability representation values for all probability intervals can be determined. [0074]
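  • As a sketch of this iteration, the following C code bisects on the tentative representation value until the relative code lengths at the two edges of the interval agree within the tolerance e. It assumes the idealized per-symbol code length -q*log2(r) - (1-q)*log2(1-r) rather than the length model of the multiplication-free coder, so the resulting values are illustrative only.

    #include <stdio.h>
    #include <math.h>

    /* Expected code length per symbol (bits) when the source's LPS
       probability is q but the coder assumes r.  This ideal model is an
       assumption of the sketch, not the multiplication-free coder's own. */
    static double code_length(double q, double r)
    {
        return -q * log2(r) - (1.0 - q) * log2(1.0 - r);
    }

    /* Relative code length (code length over entropy; 1.0 = perfect). */
    static double redundancy(double q, double r)
    {
        return code_length(q, r) / code_length(q, q);
    }

    /* FIG. 10-style search: bisect the tentative representation value r
       inside (t_lo, t_hi) until the two edges of the interval see relative
       code lengths that differ by less than the tolerance e. */
    static double representation_value(double t_lo, double t_hi, double e)
    {
        double a = t_lo, b = t_hi;
        double r = 0.5 * (a + b);                  /* Step 302: start at the center */
        while (fabs(redundancy(t_lo, r) - redundancy(t_hi, r)) >= e) {
            if (redundancy(t_lo, r) > redundancy(t_hi, r))
                b = r;  /* the low edge suffers more: move r toward t_lo */
            else
                a = r;
            r = 0.5 * (a + b);                     /* Step 305: set r again */
        }
        return r;
    }

    int main(void)
    {
        /* Interval [T2, T1) = [5/16, 3/8) from Table 2. */
        printf("corrected A2 = %.6f\n", representation_value(5.0/16, 3.0/8, 1e-9));
        return 0;
    }

Moving r toward one edge always lowers that edge's relative code length and raises the other's, so their difference is monotonic in r and the bisection converges.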
  • Embodiment 6. [0075]
  • As described above, division was conventionally necessary to estimate the occurrence probabilities of symbols to be coded from accumulated occurrence counts of symbols; the result was increased complexity in hardware implementation and an increased operation load for coding. The embodiments described include coding methods in which plural probability representation values are prepared in advance and one of them is selected, according to the accumulated occurrence counts, as the estimated occurrence probability of the symbol to be coded. In these methods, the thresholds for probability values, i.e., the boundaries for the selection of a probability representation value, are chosen carefully (for example, powers of two, or values obtained by recursively dividing an interval delimited by powers of two into two equal parts); the probability representation value corresponding to the probability interval determined by the thresholds is then selected by shifting, subtraction, and comparison with the accumulated occurrence counts of symbols, and the occurrence probabilities of symbols are estimated from those counts. A fast search of the probability representation value, with no division and a smaller operation load, is thus made possible. Although a state transition is not used in this probability estimation, each probability interval can be regarded as a state. In the embodiments described, a probability representation value is determined by selecting the corresponding probability interval. This means that an index referring to a probability interval, regarded as a state, may be selected by a fast, division-free search using the occurrence counts, and the probability representation value is then obtained from the index. The index can easily be used not only for selecting a probability representation value but also for selecting other parameters. [0076]
  • The invention reduces the operational load for coding: multiplication is reduced in the case of a state transition, and division is reduced in the case using accumulated occurrence counts. This invention also increases probability estimation fidelity and coding efficiency compared with a state transition, because adaptability to a change of the occurrence probability is improved. [0077]
  • If a state transition is not used, probability estimation becomes more independent of the entropy coder, and its performance improves. With a state transition, learning in the probability estimation means is synchronized with renormalization in the entropy coding means, which is fundamentally independent of the probability estimation means; that is, adaptability to a change in the information source depends on chance. The probability representation values can also be brought closer to the optimum values by setting them so that the coding efficiencies at the two edges of a probability interval, which are the worst in the range, become equal. [0078]
  • An encoding apparatus and a decoding apparatus according to the invention can be implemented either separately or together. Any medium, such as wireless transmission, wire transmission, or storage devices using discs, tapes, semiconductor memories, and the like, can be used as a medium over which code data output from the encoding apparatus are transmitted to the decoding apparatus, or as a medium in which such code data are stored. The transmission or storage can be realized electrically, magnetically, or optically. [0079]
  • Embodiment 7. [0080]
  • In the foregoing embodiments, the probability estimation is used within an encoder structure or a decoder structure. In this embodiment, the probability estimation method is instead used with a calculation means. The calculation means is connected to the probability estimation structure that determines the probability representation value; it receives the probability representation value and a binary symbol, makes further calculations based on the probability representation value, and outputs a result that is sent to another structure. Because the estimation method is very fast, the calculation means can deliver the calculated result to the other structure very quickly. [0081]

Claims (17)

What is claimed is:
1. A coding method comprising:
determining a symbol from an input data signal,
setting a threshold for probability values that determine a probability interval corresponding to an index based on occurrence probabilities of symbols estimated from occurrence counts of the symbols,
determining a probability representation value as a coding parameter using the probability interval of the threshold, and
coding the symbol determined from the input data signal, based on the probability representation value.
2. The method of claim 1 in which the threshold values are integer powers of one half (2^(-N)).
3. The method of claim 2 including using a midpoint of two adjacent threshold values as another threshold value.
4. The method of claim 2 including using thresholds obtained by dividing the interval between two adjacent thresholds (2^(-N) and 2^(-(N+1))) into 2^i equal parts, where i is a positive integer.
5. The method of claim 1 including setting the threshold values so that computational cost for determining the threshold is reduced.
6. The method of claim 1 wherein the symbols are binary symbols.
7. A decoding method comprising:
setting a threshold for probability values that determine a probability interval corresponding to an index based on occurrence probabilities of symbols estimated from occurrence counts of the symbols,
determining a probability representation value as a coding parameter using the probability interval of the threshold,
decoding an input code based on the probability representation value and outputting a symbol, and
determining output data based on the symbol output in decoding of the input code.
8. The method of claim 7 in which the threshold values are integer powers of one half (2^(-N)).
9. The method of claim 8 including using a midpoint of two adjacent threshold values as another threshold value.
10. The method of claim 8 including using thresholds obtained by dividing the interval between two adjacent thresholds (2^(-N) and 2^(-(N+1))) into 2^i equal parts, where i is a positive integer.
11. The method of claim 7 including setting the threshold values so that computational cost for determining the threshold is reduced.
12. The method of claim 7 wherein the symbols are binary symbols.
13. A probability estimation method comprising:
determining a symbol based on an input data signal,
setting a threshold for probability values that determine a probability interval corresponding to an index based on occurrence probabilities of symbols estimated from occurrence counts of the symbols,
determining a probability representation value as a calculation parameter using the probability interval of the threshold, and
outputting the probability representation value.
14. The method of claim 13 in which the threshold values are integer powers of one half (2^(-N)).
15. The method of claim 14 including using thresholds obtained by dividing the interval between two adjacent thresholds (2^(-N) and 2^(-(N+1))) into 2^i equal parts, where i is a positive integer.
16. The method of claim 13 including setting the threshold values so that computational cost for determining the threshold is reduced.
17. The method of claim 13 wherein the symbols are binary symbols.
US09/275,006 1998-03-25 1999-03-24 Encoding, decoding, and probability estimation method Expired - Lifetime US6411231B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP10-077248 1998-03-25
JP07724898A JP3391251B2 (en) 1998-03-25 1998-03-25 Adaptive probability estimation method, adaptive encoding method, and adaptive decoding method

Publications (2)

Publication Number Publication Date
US20020021234A1 true US20020021234A1 (en) 2002-02-21
US6411231B1 US6411231B1 (en) 2002-06-25

Family

ID=13628567

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/275,006 Expired - Lifetime US6411231B1 (en) 1998-03-25 1999-03-24 Encoding, decoding, and probability estimation method

Country Status (6)

Country Link
US (1) US6411231B1 (en)
EP (1) EP0945988B1 (en)
JP (1) JP3391251B2 (en)
KR (1) KR100340828B1 (en)
CN (1) CN100459437C (en)
DE (1) DE69925774T2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040013311A1 (en) * 2002-07-15 2004-01-22 Koichiro Hirao Image encoding apparatus, image encoding method and program
US20040151252A1 (en) * 2002-04-25 2004-08-05 Shunichi Sekiguchi Digital signal encoding device, digital signal decoding device, digital signal arithmetic encoding method and digital signal arithmetic decoding method
US20050146451A1 (en) * 2002-04-26 2005-07-07 Ntt Docomo, Inc. Signal encoding method, signal decoding method, signal encoding apparatus, signal decoding apparatus, signal encoding program, and signal decoding program
US20080240253A1 (en) * 2007-03-29 2008-10-02 James Au Intra-macroblock video processing
US20080240233A1 (en) * 2007-03-29 2008-10-02 James Au Entropy coding for video processing applications
US20080240228A1 (en) * 2007-03-29 2008-10-02 Kenn Heinrich Video processing architecture
US20080240254A1 (en) * 2007-03-29 2008-10-02 James Au Parallel or pipelined macroblock processing
US20090100251A1 (en) * 2007-10-16 2009-04-16 Pei-Wei Hsu Parallel context adaptive binary arithmetic coding
US20160004769A1 (en) * 2014-07-03 2016-01-07 Ca, Inc. Estimating full text search results of log records
RU2598817C2 (en) * 2011-03-07 2016-09-27 Долби Интернэшнл Аб Image encoding and decoding method, device for encoding and decoding and corresponding software
US9654783B2 (en) 2011-06-24 2017-05-16 Dolby International Ab Method for encoding and decoding images, encoding and decoding device, and corresponding computer programs

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100405819B1 (en) * 2001-01-15 2003-11-14 한국과학기술원 The image compression and restoring method for binary images
EP1322117A1 (en) * 2001-12-06 2003-06-25 Koninklijke Philips Electronics N.V. Arithmetic coder and decoder
ES2552696T3 (en) * 2002-04-23 2015-12-01 Ntt Docomo, Inc. System and method for arithmetic coding and decoding
US9577667B2 (en) 2002-04-23 2017-02-21 Ntt Docomo, Inc. System and method for arithmetic encoding and decoding
US6850175B1 (en) * 2003-09-18 2005-02-01 Ntt Docomo, Inc. Method and apparatus for arithmetic coding
EP1540962B1 (en) * 2002-09-20 2016-05-11 NTT DoCoMo, Inc. Method and apparatus for arithmetic coding and decoding
US6906647B2 (en) 2002-09-20 2005-06-14 Ntt Docomo, Inc. Method and apparatus for arithmetic coding, including probability estimation state table creation
US6825782B2 (en) * 2002-09-20 2004-11-30 Ntt Docomo, Inc. Method and apparatus for arithmetic coding and termination
US6900748B2 (en) * 2003-07-17 2005-05-31 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Method and apparatus for binarization and arithmetic coding of a data value
US7659839B2 (en) * 2007-08-15 2010-02-09 Ternarylogic Llc Methods and systems for modifying the statistical distribution of symbols in a coded message
US8577026B2 (en) 2010-12-29 2013-11-05 Ternarylogic Llc Methods and apparatus in alternate finite field based coders and decoders
US20110064214A1 (en) * 2003-09-09 2011-03-17 Ternarylogic Llc Methods and Apparatus in Alternate Finite Field Based Coders and Decoders
US7447235B2 (en) * 2003-10-08 2008-11-04 Digital Fountain, Inc. FEC-based reliability control protocols
US7379608B2 (en) * 2003-12-04 2008-05-27 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung, E.V. Arithmetic coding for transforming video and picture data units
US7599435B2 (en) * 2004-01-30 2009-10-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Video frame encoding and decoding
US7586924B2 (en) * 2004-02-27 2009-09-08 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for coding an information signal into a data stream, converting the data stream and decoding the data stream
US7064684B2 (en) * 2004-06-01 2006-06-20 Peter Lablans Sequence detection by multi-valued coding and creation of multi-code sequences
KR100609697B1 (en) * 2004-06-02 2006-08-08 한국전자통신연구원 Optical time-domain reflectometer system with gain-clamped optical amplifiers
CN101502123B (en) * 2006-11-30 2011-08-17 松下电器产业株式会社 Coder
JP5746811B2 (en) * 2006-12-21 2015-07-08 味の素株式会社 Colorectal cancer evaluation method, colorectal cancer evaluation device, colorectal cancer evaluation method, colorectal cancer evaluation system, colorectal cancer evaluation program, and recording medium
US7895171B2 (en) * 2008-03-27 2011-02-22 International Business Machines Corporation Compressibility estimation of non-unique indexes in a database management system
CN102264099B (en) * 2010-05-28 2014-09-10 中兴通讯股份有限公司 Adaptive Modulation and Coding (AMC) apparatus and method thereof
CN102438140B (en) * 2011-06-02 2013-07-31 东南大学 Arithmetic encoder sequence renormalization method used for image compression
CN103299307B (en) 2011-08-23 2016-08-03 华为技术有限公司 For estimating method of estimation and the estimator of the probability distribution of quantization index
US9558109B2 (en) 2014-04-04 2017-01-31 Samsung Israel Research Corporation Method and apparatus for flash memory arithmetic encoding and decoding
CN109474281B (en) * 2018-09-30 2022-07-08 湖南瑞利德信息科技有限公司 Data encoding and decoding method and device
CN111488927B (en) * 2020-04-08 2023-07-21 中国医学科学院肿瘤医院 Classification threshold determining method, device, electronic equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4168513A (en) * 1977-09-12 1979-09-18 Xerox Corporation Regenerative decoding of binary data using minimum redundancy codes
US4633490A (en) 1984-03-15 1986-12-30 International Business Machines Corporation Symmetrical optimized adaptive data compression/transfer/decompression system
US4933883A (en) * 1985-12-04 1990-06-12 International Business Machines Corporation Probability adaptation for arithmetic coders
US4652856A (en) * 1986-02-04 1987-03-24 International Business Machines Corporation Multiplication-free multi-alphabet arithmetic code
JPH0834432B2 (en) * 1989-01-31 1996-03-29 三菱電機株式会社 Encoding device and encoding method
US5025258A (en) 1989-06-01 1991-06-18 At&T Bell Laboratories Adaptive probability estimator for entropy encoding/decoding
US5272478A (en) * 1992-08-17 1993-12-21 Ricoh Corporation Method and apparatus for entropy coding
US5546080A (en) * 1994-01-03 1996-08-13 International Business Machines Corporation Order-preserving, fast-decoding arithmetic coding arithmetic coding and compression method and apparatus
US6081213A (en) * 1997-03-26 2000-06-27 Sony Corporation Method and apparatus for arithmetic coding, method and apparatus for arithmetic decoding, and storage medium

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110102213A1 (en) * 2002-04-25 2011-05-05 Shunichi Sekiguchi Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20040151252A1 (en) * 2002-04-25 2004-08-05 Shunichi Sekiguchi Digital signal encoding device, digital signal decoding device, digital signal arithmetic encoding method and digital signal arithmetic decoding method
US20110148674A1 (en) * 2002-04-25 2011-06-23 Shunichi Sekiguchi Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20060109149A1 (en) * 2002-04-25 2006-05-25 Shunichi Sekiguchi Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7095344B2 (en) * 2002-04-25 2006-08-22 Mitsubishi Denki Kabushiki Kaisha Digital signal encoding device, digital signal decoding device, digital signal arithmetic encoding method and digital signal arithmetic decoding method
US7859438B2 (en) 2010-12-28 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal decoding apparatus; digital signal arithmetic coding method, and digital signal arithmetic decoding method
US20110102210A1 (en) * 2002-04-25 2011-05-05 Shunichi Sekiguchi Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20070205927A1 (en) * 2002-04-25 2007-09-06 Shunichi Sekiguchi Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20070263723A1 (en) * 2002-04-25 2007-11-15 Shunichi Sekiguchi Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7928869B2 (en) 2011-04-19 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20110095922A1 (en) * 2002-04-25 2011-04-28 Shunichi Sekiguchi Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7321323B2 (en) 2002-04-25 2008-01-22 Mitsubishi Denki Kabushiki Kaisha Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7388526B2 (en) 2002-04-25 2008-06-17 Mitsubishi Denki Kabushiki Kaisha Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20080158027A1 (en) * 2002-04-25 2008-07-03 Shunichi Sekiguchi Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7408488B2 (en) 2002-04-25 2008-08-05 Mitsubishi Denki Kabushiki Kaisha Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7994951B2 (en) 2002-04-25 2011-08-09 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US8604950B2 (en) 2002-04-25 2013-12-10 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal arithmetic coding method and apparatus
US8354946B2 (en) 2002-04-25 2013-01-15 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US8203470B2 (en) 2002-04-25 2012-06-19 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7518537B2 (en) 2002-04-25 2009-04-14 Mitsubishi Electric Corporation Decoding apparatus and decoding method
US8188895B2 (en) 2002-04-25 2012-05-29 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US8094049B2 (en) 2002-04-25 2012-01-10 Mitsubishi Denki Kabushiki Kaisha Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20090153378A1 (en) * 2002-04-25 2009-06-18 Shunichi Sekiguchi Digital signal coding apparatus, digital signal decoding apparatus; digital signal arithmetic coding method, and digital signal arithmetic decoding method
USRE41729E1 (en) 2002-04-25 2010-09-21 Mitsubishi Denki Kabushiki Kaisha Digital signal coding apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20100315270A1 (en) * 2002-04-25 2010-12-16 Shunichi Sekiguchi Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US20110115656A1 (en) * 2002-04-25 2011-05-19 Shunichi Sekiguchi Digital signal coding method and apparatus, digital signal decoding apparatus, digital signal arithmetic coding method and digital signal arithmetic decoding method
US7298303B2 (en) 2002-04-26 2007-11-20 Ntt Docomo, Inc. Signal encoding method, signal decoding method, signal encoding apparatus, signal decoding apparatus, signal encoding program, and signal decoding program
US7190289B2 (en) * 2002-04-26 2007-03-13 Ntt Docomo, Inc. Signal encoding method, signal decoding method, signal encoding apparatus, signal decoding apparatus, signal encoding program, and signal decoding program
US20060202872A1 (en) * 2002-04-26 2006-09-14 Ntt Docomo, Inc. Signal encoding method, signal decoding method, signal encoding apparatus, signal decoding apparatus, signal encoding program, and signal decoding program
US20050146451A1 (en) * 2002-04-26 2005-07-07 Ntt Docomo, Inc. Signal encoding method, signal decoding method, signal encoding apparatus, signal decoding apparatus, signal encoding program, and signal decoding program
US20040013311A1 (en) * 2002-07-15 2004-01-22 Koichiro Hirao Image encoding apparatus, image encoding method and program
US7305138B2 (en) * 2002-07-15 2007-12-04 Nec Corporation Image encoding apparatus, image encoding method and program
US20080240253A1 (en) * 2007-03-29 2008-10-02 James Au Intra-macroblock video processing
US20080240254A1 (en) * 2007-03-29 2008-10-02 James Au Parallel or pipelined macroblock processing
US20080240228A1 (en) * 2007-03-29 2008-10-02 Kenn Heinrich Video processing architecture
US8369411B2 (en) 2007-03-29 2013-02-05 James Au Intra-macroblock video processing
US8416857B2 (en) 2007-03-29 2013-04-09 James Au Parallel or pipelined macroblock processing
US8422552B2 (en) * 2007-03-29 2013-04-16 James Au Entropy coding for video processing applications
US20080240233A1 (en) * 2007-03-29 2008-10-02 James Au Entropy coding for video processing applications
US8837575B2 (en) 2007-03-29 2014-09-16 Cisco Technology, Inc. Video processing architecture
US7522076B1 (en) * 2007-10-16 2009-04-21 Mediatek Inc. Parallel context adaptive binary arithmetic coding
US20090100251A1 (en) * 2007-10-16 2009-04-16 Pei-Wei Hsu Parallel context adaptive binary arithmetic coding
US9628818B2 (en) 2011-03-07 2017-04-18 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US10382784B2 (en) 2011-03-07 2019-08-13 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US9560380B2 (en) 2011-03-07 2017-01-31 Dolby International Ab Coding and decoding images using probability data
US11736723B2 (en) 2011-03-07 2023-08-22 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US11343535B2 (en) 2011-03-07 2022-05-24 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
RU2598817C2 (en) * 2011-03-07 2016-09-27 Долби Интернэшнл Аб Image encoding and decoding method, device for encoding and decoding and corresponding software
US10681376B2 (en) 2011-03-07 2020-06-09 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
RU2651426C1 (en) * 2011-03-07 2018-04-19 Долби Интернэшнл Аб Image encoding and decoding method, device for encoding and decoding and corresponding software
RU2715522C2 (en) * 2011-03-07 2020-02-28 Долби Интернэшнл Аб Method of encoding and decoding images, encoding and decoding device and corresponding computer programs
US9661335B2 (en) 2011-06-24 2017-05-23 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US10362311B2 (en) 2011-06-24 2019-07-23 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US10033999B2 (en) 2011-06-24 2018-07-24 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US9848196B2 (en) 2011-06-24 2017-12-19 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US10694186B2 (en) 2011-06-24 2020-06-23 Dolby International Ab Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US9654783B2 (en) 2011-06-24 2017-05-16 Dolby International Ab Method for encoding and decoding images, encoding and decoding device, and corresponding computer programs
US9965550B2 (en) * 2014-07-03 2018-05-08 Ca, Inc. Estimating full text search results of log records
US20160004769A1 (en) * 2014-07-03 2016-01-07 Ca, Inc. Estimating full text search results of log records

Also Published As

Publication number Publication date
KR19990078037A (en) 1999-10-25
CN1230054A (en) 1999-09-29
EP0945988B1 (en) 2005-06-15
CN100459437C (en) 2009-02-04
KR100340828B1 (en) 2002-06-15
DE69925774D1 (en) 2005-07-21
EP0945988A2 (en) 1999-09-29
JPH11274938A (en) 1999-10-08
DE69925774T2 (en) 2006-05-11
US6411231B1 (en) 2002-06-25
EP0945988A3 (en) 2002-06-26
JP3391251B2 (en) 2003-03-31

Similar Documents

Publication Publication Date Title
US6411231B1 (en) Encoding, decoding, and probability estimation method
US5045852A (en) Dynamic model selection during data compression
US6654503B1 (en) Block-based, adaptive, lossless image coder
US6825782B2 (en) Method and apparatus for arithmetic coding and termination
JP2021044809A (en) Entropy coding of motion vector differences
US7079057B2 (en) Context-based adaptive binary arithmetic coding method and apparatus
US6967601B2 (en) Method and apparatus for arithmetic coding, including probability estimation state table creation
US7304590B2 (en) Arithmetic decoding apparatus and method
EP0826275B1 (en) Method of and device for coding a digital information signal
US8711019B1 (en) Context-based adaptive binary arithmetic coding engine
US8588540B2 (en) Arithmetic encoding apparatus executing normalization and control method
US20060171533A1 (en) Method and apparatus for encoding and decoding key data
EP0369682B1 (en) Efficient coding method and its decoding method
JPH06224777A (en) Coding method, coder, decoding method, decoder, data compressor, bit stream generating method and transition machine generating method
WO1997034375A1 (en) Method for reducing storage requirements for digital data
EP1187338A2 (en) Method and apparatus for performing variable-size vector entropy coding
JP3990464B2 (en) Data efficient quantization table for digital video signal processor
US20070143118A1 (en) Apparatus and method for lossless audio signal compression/decompression through entropy coding
US6850175B1 (en) Method and apparatus for arithmetic coding
US7421138B2 (en) Data compression and expansion of a digital information signal
Ortega et al. Adaptive quantization without side information
Wang High order joint source channel decoding and conditional entropy encoding: Novel bridging techniques between performance and complexity
JP2001086513A (en) Coder, decoder, code conversion table generating method, coding method and decoding method
Bradley Coding the binary symmetric source with a fidelity criterion (Corresp.)
Leonardi et al. New Techniques for Signal Representation and Coding

Legal Events

Date Code Title Description
AS Assignment

Owner name: MITSUBISHI DENKI KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YANAGIYA, TAICHI;KIMURA, TOMOHIRO;UENO, IKURO;AND OTHERS;REEL/FRAME:009952/0962;SIGNING DATES FROM 19990315 TO 19990325

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12