WO1995035538A1 - Method and system for encoding and decoding signals using a fast algebraic error correcting code - Google Patents


Info

Publication number
WO1995035538A1
Authority
WO
WIPO (PCT)
Prior art keywords
error
code
decoder
level
signal
Prior art date
Application number
PCT/US1995/007846
Other languages
French (fr)
Inventor
Michael J. Seo
Original Assignee
Seo Michael J
Priority date
Filing date
Publication date
Application filed by Seo Michael J filed Critical Seo Michael J
Priority to AU29056/95A priority Critical patent/AU2905695A/en
Publication of WO1995035538A1 publication Critical patent/WO1995035538A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0057 Block codes
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/01 Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13 Linear codes
    • H03M13/15 Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H03M13/151 Cyclic codes using error location or error correction polynomials
    • H03M13/1515 Reed-Solomon codes
    • H03M13/152 Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H03M13/154 Error and erasure correction, e.g. by using the error and erasure locator or Forney polynomial
    • H03M13/159 Remainder calculation, e.g. for encoding and syndrome calculation
    • H03M13/25 Error detection or forward error correction by signal space coding, i.e. adding redundancy in the signal constellation, e.g. Trellis Coded Modulation [TCM]
    • H03M13/29 Combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2903 Methods and arrangements specifically for encoding, e.g. parallel encoding of a plurality of constituent codes
    • H03M13/2906 Combining two or more codes or code structures using block codes
    • H03M13/2927 Decoding strategies
    • H03M13/293 Decoding strategies with erasure setting
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/373 Decoding with erasure correction and erasure determination, e.g. for packet loss recovery or setting of erasures for the decoding of Reed-Solomon codes
    • H03M13/63 Joint error correction and other techniques
    • H03M13/635 Error control coding in combination with rate matching
    • H03M13/6356 Error control coding in combination with rate matching by repetition or insertion of dummy data, i.e. rate reduction
    • H03M13/65 Purpose and implementation aspects
    • H03M13/6502 Reduction of hardware complexity or efficient processing

Definitions

  • This invention is drawn to a method and a system for a high-speed error-correcting code encoder/decoder for encoding and decoding high-speed communication signals.
  • RS: Reed-Solomon
  • VLSI: Very Large Scale Integration
  • RS encoders and decoders operate by converting a message signal having m bits per message character into a transmission signal.
  • the transmission signal, in general, has one block of k message characters per codeword.
  • the message signal is encoded into the transmission signal by converting k message characters into an information polynomial I, where I(x) = i(k-1)x^(k-1) + ... + i(1)x + i(0), each coefficient i(j) being one m-bit message character.
  • the coefficients of P are determined to ensure that C is a multiple of the generator polynomial g. Therefore, the check polynomial P is found by dividing the information polynomial I by the generator polynomial g. The negative value of the remainder polynomial r from this division is therefore the check polynomial P.
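The division step can be illustrated with a toy example over GF(2), where addition and subtraction are both XOR; a real RS encoder performs the same division over GF(2^m). The function names below are illustrative, not taken from the patent.

```python
def poly_mod_gf2(dividend, divisor):
    """Remainder of polynomial division over GF(2); a polynomial is an int
    whose bit i is the coefficient of x^i."""
    dlen = divisor.bit_length()
    rem = dividend
    while rem.bit_length() >= dlen:
        rem ^= divisor << (rem.bit_length() - dlen)
    return rem

def encode_systematic(info, g):
    """C = I * x^deg(g) - r, where r = (I * x^deg(g)) mod g, so that C is a
    multiple of the generator polynomial g (negation is a no-op over GF(2))."""
    shifted = info << (g.bit_length() - 1)
    return shifted ^ poly_mod_gf2(shifted, g)

# g(x) = x^3 + x + 1 (0b1011); the check bits occupy the low 3 positions.
c = encode_systematic(0b1101, 0b1011)
assert poly_mod_gf2(c, 0b1011) == 0   # the codeword divides evenly by g
```

Because the remainder is subtracted in the low-order positions, the k message bits pass through unchanged, which is what makes the code systematic.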
  • the received codeword R must first be corrected by subtracting the error polynomial E, to find the transmitted codeword C. Since, at the receiving end of the communications channel both C and E are unknown, the decoding process is not trivial.
  • Figure 1 shows the conventional RS encoding and decoding system 100.
  • a signal I to be transmitted is received from a binary data stream source 110 and input to a conventional encoder 120.
  • the encoder 120 encodes the signal from the binary data stream source 110 and outputs the encoded codewords C to the line encoder 130 which outputs line encoded vector codewords C' to the data channel 140.
  • the line encoder converts the binary signals of the codewords C into the analog signals C'.
  • the receiver line decoder 150 receives the received line encoded vector codewords R' from the data channel 140 and converts them to received codewords R. In other words, the line decoder translates the received analog signals back to binary signals which are modeled as the sum of an error word and a codeword.
  • the received codewords R are output to a conventional decoder 160 where they are decoded by deleting the errors E and then reconverting the derived information polynomial I' to the information signal I.
  • the decoder 160 then outputs the original binary message I to the binary data stream sink 170.
  • Figures 2 and 3 show generalized components of the conventional encoder 120 and the conventional decoder 160.
  • the information signal I received from the binary data stream source 110 is input in parallel to Z data buffers 121, 123, 125 and 127, where Z is the number of levels of encoding to be applied to the information signal I to generate the codeword C.
  • Z is the number of levels of encoding to be applied to the information signal I to generate the codeword C.
  • Each of the Z data buffers outputs the information signal to a corresponding one of the Z encoders 122, 124, 126 and 128.
  • Each of the Z encoders generates a different level of encoding and outputs one of the 1st-Zth codewords C 1 -C Z to the transmitter line encoder 130 which combines them and outputs the line encoded codeword C'.
  • the receiver line decoder 150 inputs the received line encoded signal R' and outputs the received codewords R1-RZ, respectively, to the 1st-Zth level decoders 161, 163, 165 and 167.
  • After decoding the respective codewords, the Z decoders output the decoded information signals to the Z data buffers 162, 164, 166 and 168, respectively.
  • the 1st-Zth data buffers then output the respective stored information signals I1-IZ to the binary data stream sink, where they are combined to recreate the original information signal I.
  • the conventional encoder 120 and the conventional decoder 160 generally only use a single level of encoding and a single level of decoding. Accordingly, as shown in Figure 4, in the conventional single level encoder 120, the binary data stream source outputs the information signal I to a single level RS encoder 122.
  • the single level RS encoder 122 outputs the codeword C to the transmitter line encoder 130 which outputs the line encoded transmitted codeword C'. Because only a single level of encoding/decoding is used, the single-level RS code must be used for both error location and error correction.
  • the received line encoded codeword R' is input by the receiver line decoder 150, which outputs the received codeword R to the single level RS decoder 190.
  • the single level RS decoder 190 decodes the received codeword R and outputs it directly to the binary data stream sink 170.
  • RS VLSI decoders are set forth in the following references: "Architecture for VLSI Design of Reed-Solomon Decoders," K.Y. Liu, IEEE Transactions on Computers, Vol. C-33, No. 2, pp. 178-189, February 1984; "A Single Chip VLSI Reed-Solomon Decoder," H.M. Shao et al., ICASSP 86, Tokyo, pp. 2151-2153; "A Construction Method of High-Speed Decoders Using ROM's for Bose-Chaudhuri-Hocquenghem and Reed-Solomon Codes," H.
  • the single level RS decoder 190 shown in Figure 5 is an errors-only decoder.
  • the received codeword R is input in parallel to a syndrome computation circuit 191 and a delay circuit 192.
  • the syndrome computation circuit 191 generates 2t syndromes s from the received codeword R, which are combined to form the syndrome polynomial S.
  • the syndrome polynomial S is input in parallel to a modified Euclid's algorithm computation circuit 193 and a second delay circuit 194.
  • the modified Euclid's algorithm computation circuit 193 generates the error locator polynomial ⁇ .
  • the error locator polynomial ⁇ and the syndrome polynomial S are input in parallel to the errata transform computation circuit 195.
  • the errata transform computation circuit outputs M errata transforms ⁇ .
  • the M errata transforms ⁇ are input to the inverse transform circuit 196 which generates the calculated or estimated error polynomial E'.
  • the adder 197 subtracts the estimated error polynomial E' from the received codeword R to generate the transmitted codeword C. It should be appreciated that in the errors-only decoder 190 shown in Figure 6A, the estimated error polynomial E' is only an estimate of the actual error polynomial. Accordingly, the transmitted codeword C output by the adder 197 is only an estimate of the actual transmitted codeword C.
  • an erasure correction circuit 198 is inserted between the syndrome computation circuit 191 and the modified Euclid's algorithm computation circuit 193.
  • the erasure correction circuit 198 comprises an alpha generation circuit 1981.
  • the erasure location information input to the alpha generation circuit 1981 is assumed to be derived from outside the decoder 190 and is an estimate of the most likely error locations.
  • One possible way for deriving the erasure location information is from a convolution decoder. By estimating erasures, i.e. the most likely error locations, the error correction capability of the system can be increased without significantly decreasing the processing speed.
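The capability gain from erasures can be quantified with the standard bound that a code of minimum distance d corrects v errors and e erasures whenever 2v + e < d: a known location costs half as much redundancy as an unknown one. The RS(255,223) figures below (d = 33) are a common illustrative choice, not parameters taken from the patent.

```python
def correctable(v_errors, e_erasures, d_min):
    """A minimum-distance-d code decodes correctly iff 2v + e < d."""
    return 2 * v_errors + e_erasures < d_min

D = 33  # minimum distance of the common RS(255, 223) code
assert correctable(16, 0, D)        # errors-only: up to 16 unknown-position errors
assert correctable(0, 32, D)        # erasures-only: twice as many, 32
assert not correctable(16, 1, D)    # mixing past the bound fails
```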
  • the output from the alpha generation circuit 1981 is then input in parallel with the syndrome polynomial S output from the syndrome computation circuit 191 to the first polynomial expansion circuit 1982.
  • the output from the alpha generation 1981 is also input to the second polynomial expansion circuit 1983.
  • the first polynomial expansion circuit 1982 outputs the error evaluator polynomial H, while the second polynomial expansion circuit 1983 outputs the error locator polynomial L.
  • the error evaluator polynomial H output by the first polynomial expansion circuit 1982 and the error locator polynomial L output by the second polynomial expansion circuit 1983 are input in parallel to the modified Euclid's algorithm computation circuit 193.
  • the modified Euclid's algorithm computation circuit 193 outputs a different errata locator polynomial ⁇ , which is input to the errata transform computation circuit 195 along with the syndrome polynomial S output from the delay circuit 194. From this point the single level RS erasures-errors decoder 190 operates as in the single level RS errors-only decoder 190 described above with reference to Figure 6A.
  • Figure 7 shows the conventional process for generating the corrected transmitted codeword C from the received codeword R using an errors-only decoder.
  • the decoding process starts in step S30.
  • In step S31, the received incoming code signals are input from the data channel 140 and are converted from vector form to standard form by the receiver line decoder 150.
  • In step S32, the 2t syndromes s are derived from the received code symbols and combined to form the syndrome polynomial S.
  • In step S33, the syndrome polynomial S is used to generate the error locator polynomial L.
  • If any erasure location data is available, it is also used with the syndrome polynomial S to generate the error locator polynomial L in step S33.
  • In step S34, the syndrome polynomial S and the error locator polynomial L are used to compute the error evaluator polynomial H.
  • If the error locator polynomial L is not available, such as in an errors-only decoder, the error evaluator polynomial H is computed from the syndrome polynomial S alone.
  • In step S35, the error locator polynomial L is factored to locate its roots P.
  • In step S36, the error evaluator polynomial H is evaluated for each root P of the error locator polynomial L.
  • In step S37, the derivative L' of the error locator polynomial L is generated for each root P.
  • In step S38, the error magnitudes are generated from the roots P, the derivatives L' generated in step S37 and the evaluated error evaluator polynomial H generated in step S36.
  • In step S39, the received code symbols from step S31 are corrected using the error magnitudes generated in step S38. Once the corrected received code symbols are available, the code symbols can be decoded to generate the original information signal. It should be appreciated that, due to the limited error correction capability of the conventional code, as described above, the corrected information signal output in step S40 is only an estimate of the original information signal I input to the encoder 120.
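The step sequence S32-S39 collapses neatly in the simplest case of a single symbol error. The sketch below works in a toy GF(2^4) field (primitive polynomial x^4 + x + 1); the field size, polynomial, and function names are illustrative assumptions, and a practical decoder over GF(2^8) with the full Euclid/Forney machinery is substantially more involved.

```python
# Build log/antilog tables for GF(2^4) with primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13          # reduce modulo x^4 + x + 1
for i in range(15, 30):
    EXP[i] = EXP[i - 15]   # wrap so products never index out of range

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def correct_single_error(r):
    """For one error of magnitude e at position p, the syndromes satisfy
    s1 = e * alpha^p and s2 = e * alpha^(2p), so p = log(s2 / s1) (cf. step
    S35) and e = s1 / alpha^p (cf. step S38)."""
    s1 = s2 = 0
    for i, c in enumerate(r):                 # step S32: s_j = R(alpha^j)
        s1 ^= mul(c, EXP[i % 15])
        s2 ^= mul(c, EXP[(2 * i) % 15])
    if s1 == 0 and s2 == 0:
        return r                              # zero syndrome: accept as error-free
    p = (LOG[s2] - LOG[s1]) % 15              # error position
    e = EXP[(LOG[s1] - p) % 15]               # error magnitude
    out = list(r)
    out[p] ^= e                               # step S39: subtract E' (XOR in GF(2^m))
    return out

# Corrupt one symbol of the all-zero codeword and recover it.
r = [0] * 15
r[6] = 9
assert correct_single_error(r) == [0] * 15
```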
  • This invention satisfies this need by providing a novel method and system for high speed encoding and decoding of signals which eliminates (or reduces) the most complicated parts of the decoding algorithm: producing the error-locator polynomial and factoring the error-locator polynomial.
  • an information signal to be transmitted is encoded using a known encoder, such as a Reed-Solomon encoder. Then, an error locating code is appended to each codeword output by the encoder.
  • a first transform circuit remaps the codeword from the error locating code and the signal encoder to symbols in a second finite field. The output from the first transform circuit is then multiplied by an element of the second finite field in a second transform circuit.
  • the second finite field used in the second transform circuit is different from the first finite field used by the signal encoder.
  • the second finite field used in the second transform circuit operates in a finite field which is one dimension higher than the first finite field used for the signal encoder.
  • the output of the second transform circuit is put through a third transform circuit using a lookup table embedded in a ROM.
  • the output from the transform circuit is then sent along a binary channel as the communication signal.
  • the binary channel is defined as the combination of the line encoder, the analog data channel, and the line decoder.
  • the communication signal is a binary signal stream.
  • the communication signal is received from the binary channel and decoded by a first inverse transform circuit using a second ROM.
  • the second ROM embodies the inverse of the map embodied in the first ROM.
  • the output from the second ROM is again remapped in a second inverse transform circuit, by multiplying with the corresponding inverse element of the second finite field and converted back to a vector.
  • a third remapping, which is the inverse of the first remapping of the encoding system, separates the received signal into two portions: the error locating code and the encoded signal.
  • the error locating code is decoded and used to correct errors in the received encoded signal, while the received encoded signal is used to find the magnitude of the error at each indicated error location.
  • the located error positions and magnitudes are used to convert the received encoded signal into a corrected encoded signal, which should be the same as the original transmitted encoded signal.
  • the corrected encoded signal is then decoded to form the original information signal.
  • the error locating code of this invention is a single-bit null code.
  • the error locating code is a multiple-bit null code.
  • a counter is added to count the number of errors detected by the error-locating code. This third preferred system is able to detect when too many errors have occurred in the block of received data, such that it is not possible to correctly decode the error magnitudes in the second level code. If this event is detected in a communication system where the receiver can communicate with the transmitter (e.g., automatic repeat request (ARQ) systems), then the receiver can request the transmitter to resend the last codeword block.
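A minimal sketch of this counting logic, assuming an erasures-only second level that can fill in at most d - 1 flagged symbols (the function name and the d value are illustrative, not from the patent):

```python
def handle_block(erasure_positions, d_min):
    """Tally the positions flagged by the error-locating code; when there are
    more than the d - 1 erasures an erasures-only decoder can fill in, an
    ARQ receiver should ask the transmitter to resend the codeword block."""
    if len(erasure_positions) > d_min - 1:
        return "resend"
    return "decode"

assert handle_block([1, 4, 9], d_min=5) == "decode"
assert handle_block(list(range(6)), d_min=5) == "resend"
```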
  • the information signal is preferably encoded/decoded using conventional errors-erasures codes such as the RS code.
  • Figure 1 shows a conventional encoding-decoding system
  • Figure 2 shows a generalized conventional encoding system
  • Figure 3 shows a generalized conventional decoding system
  • Figure 4 shows a conventional implementation of the encoding system of Figure 2;
  • Figure 5 shows a conventional implementation of the decoding system shown in Figure 3;
  • Figure 6A shows a conventional errors-only decoder;
  • Figure 6B shows a conventional erasures-errors decoder
  • FIG. 7 shows the conventional decoding process
  • Figure 8 shows the encoding-decoding system of the present invention
  • Figure 9 shows a generalized form of the null code encoder of the present invention.
  • Figure 10 shows a generalized form of the null code decoder of the present invention
  • Figure 11 shows a first preferred embodiment of the encoding system of this invention
  • Figure 12 shows a first preferred embodiment of the decoding system of this invention
  • Figure 13 shows a second preferred embodiment of the encoding system of this invention
  • Figure 14 shows a second preferred embodiment of the decoding system of this invention
  • Figure 15 shows a symbolic representation of the generalized two-level error locating-error correcting encoding-decoding system
  • Figure 16 shows a symbolic representation of a first preferred embodiment of the null-code used in the two-level null-code encoding-decoding system
  • Figure 17 shows a symbolic representation of a second preferred embodiment of the null-code used in the two-level null-code encoding-decoding system
  • Figure 18 shows a block diagram of the second preferred embodiment of the decoding system as an erasures-errors decoder
  • Figure 19 shows a block diagram of the second preferred embodiment of the decoding system as an erasures-only decoder
  • Figure 20 shows the encoding process for the generalized two-level error-locating, error-correcting encoding of this invention
  • Figure 21 shows the process for erasures-only decoding of this invention
  • Figure 22 shows the process for erasures-errors decoding of this invention
  • Figure 23 shows a VLSI implementation of the multibit null code encoder and an erasures-errors decoder
  • Figure 24 shows a VLSI implementation of the multibit null code erasures-only decoder
  • Figure 25 is a Multilevel RS code encoder/decoder
  • Figure 26 shows a Multilevel RS encoder of Fig.
  • each codeword contains M symbols.
  • Each of the M symbols contains m bits.
  • there are M = 2^m - 1 symbols in each codeword.
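For concreteness, the block length follows directly from the symbol width m; the field sizes below are just the usual illustrative examples.

```python
# A (non-shortened) RS codeword over GF(2^m) has M = 2**m - 1 symbols,
# i.e. m * (2**m - 1) bits per codeword.
for m in (4, 8):
    M = 2 ** m - 1
    print(f"m={m}: {M} symbols, {m * M} bits")
# prints m=4: 15 symbols, 60 bits
#        m=8: 255 symbols, 2040 bits

assert 2 ** 8 - 1 == 255   # the familiar RS(255, k) family over GF(2^8)
```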
  • Figure 8 shows a first preferred embodiment of the error correction encoding-decoding system of this invention.
  • a binary data stream source 210 outputs an information signal I to a two-level error locating, erasure-errors correction code encoder 220. That is, the encoder 220 encodes the information signal I using two independent codes.
  • the second-level, erasures-errors correcting code is used primarily to generate the error magnitudes once the error locations have been identified, although using the erasures-errors correcting code to correct an additional number of errors is also within the scope of this invention.
  • the first-level, error-locating code is used solely to obtain the error positions of the received signal R.
  • the encoder 220 converts the information signal I to a stream of encoded codewords C, which are output to the transmitter line encoder 230, which in turn outputs the line encoded codeword C' to the data channel 240.
  • as the codewords C' are transmitted through the data channel 240 to the receiver line decoder 250, errors from the error source 280 are additively imposed onto the transmitted codewords, such that the received line encoded codewords R' differ from the transmitted codewords C'.
  • the received line encoded codewords R' are then converted from line encoded form to standard form codewords R by the receiver line decoder 250, which outputs the standard received codewords R to the two-level error-locating, error-correction code decoder 260.
  • the decoder 260 uses the two independent codes to generate, at high speed, the original information signal I from the stream of received codewords R.
  • the information signal I is then output to the binary data stream sink 270.
  • the decoder 260 first reorganizes the received codeword R to separate the error-locating portion N from the received codeword portion R*. Since the error-locating code is of a predefined form, any deviation from this predefined form can be readily detected and used to positively identify error locations. Thus, the error locating code acts as the source of the erasure locator data, with the added benefit that every location indicated by the error-locating code positively identifies an erroneous codeword symbol.
  • the error locating code N is repeatedly appended by the line encoder 230 to each codeword symbol CM of the transmitted codeword C as it is output from the second level encoder.
  • each receive codeword portion R* corresponds to a single symbol of the transmitted codeword C.
  • the error-locating portion N* indicates whether any bits are in error in the corresponding received codeword portion.
  • the received codeword portions R* are stored into a buffer until all of the M symbols of the transmitted codeword C are received.
  • the erroneous codeword positions P indicated by the error locating code are also stored.
  • the received codeword symbols R* and the corresponding error positions P determined from the error locating code portion N* are input to a conventional decoder for determination of the error magnitudes, correction of the received codeword symbols R* into the estimated transmitted codeword C*, and decoding of the codeword C* into the estimated information signal I*, which, ideally, is the same as the original information signal I.
  • the estimated information signal I* is equivalent to the original information signal I in the two-level, erasures-errors decoders.
  • the error-locating code is a true null code and comprises an all-zeros binary signal.
  • the error-locating code is not necessarily a null code, but can be any desired more- sophisticated code.
  • any code having additional sophistication will require increased complexity in the decoder 260 to deal with that added sophistication. While a more sophisticated code will increase the amount of information being transmitted, it is also likely to significantly decrease the transmission speed of the transmission system 200.
  • the received codeword R* is now processed using conventional decoding systems to determine the magnitude of the errors for each of the error positions located by the null code.
  • the received codeword R* can also be used to locate an additional number of errors. If only a few additional (less than four) errors are to be located and corrected, fast algorithms exist which can quickly locate these additional errors.
  • the binary data stream source 210 inputs the information signal I to a second level data buffer 223 of the encoder 220.
  • the information signal I is output by the second level data buffer 223 to a second level encoder 224, which uses any known conventional encoding scheme, such as RS encoding, to generate the encoded codeword C.
  • the null code generator 222 of the encoder 220 outputs the null code N.
  • Both the null code N and the encoded codeword C are input to the transmitter line encoder 230 which combines the null code N and the codeword C into the line encoded vector codeword C'.
  • the form and content of the null code N can be preselected based upon the types of errors most likely to occur based on the data channel 240 and the form of the codeword C.
  • the preferred error-locating code is a null code.
  • the received vector codeword R' is converted back to standard form received codeword R.
  • the received codeword R is output by the receiver line decoder 250 to the null code decoder 261 and the second level data buffer 264 of the decoder 260.
  • the null code decoder 261 decomposes the received signal R into a received null code portion N* and a received codeword portion R*.
  • the received null code N* indicates if the received codeword portion R* contains an error.
  • the error positions P indicating the erroneous received codewords R* and the received codeword portions R* are output from the null code decoder 261 and the second level buffer 264, respectively, to the second level decoder 263.
  • the second level decoder 263 then uses the error positions P and the received codeword portion R* to determine the error magnitudes for each of the error locations P.
  • the second level decoder uses the error locations P and the determined error magnitudes to generate the estimated codeword C*.
  • the estimated codeword C* is then decoded to generate the estimated information signal I*.
  • the decoder 260 then outputs the estimated information signal I* to the binary data stream sink 270.
  • the second level decoder can also be used to find a smaller number, preferably less than 4, of additional errors not indicated by the received null code portions N*. This generally ensures the estimated information signal I* is identical to the original information signal I.
  • A first preferred embodiment of an error-locating encoder 320, corresponding to the encoder 220 of Figure 8, is shown in Figure 11.
  • the binary data stream source 210 outputs the information signal I to an RS encoder 322.
  • the RS encoder 322 converts the information signal I to the encoded codeword C and outputs it to the S transform generator 323.
  • the error-locating code generator 321 outputs the error locating code N to the S transform generator 323, along with the information signal I'.
  • the S transform generator 323 takes these inputs and combines and transforms them to the S transform codeword S.
  • the S transformed codeword S is then input to the T transform generator 324.
  • the α input 326 is also input to the T transform generator 324.
  • the T transform generator 324 combines the α input 326 and the S codeword to form the T transform codeword T.
  • the T transform codeword T is input to the U transform generator 325, which converts the T transform codeword T to the U transform codeword U and outputs it.
  • the U transformed codeword U is then input to the transmitter line encoder 230 and line encoded to vector codeword U'.
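The S/T/U chain of Figures 11 and 12 composes three invertible per-symbol maps on the transmit side and undoes them in reverse order on the receive side. A hedged sketch, with XOR masks and a 6-bit rotation standing in for the patent's actual transforms (the masks, the bit width, and the role of the α input are illustrative assumptions, not the real maps):

```python
# three invertible per-symbol maps composed as in Figures 11 and 12; the
# XOR masks and the 6-bit rotation below are stand-ins for the real S/T/U maps
MASK_S, MASK_T = 0b101010, 0b010101

def S(x):            return x ^ MASK_S
def T(x, alpha):     return x ^ (alpha & MASK_T)   # T mixes in the alpha input 326
def U(x):            return ((x << 1) | (x >> 5)) & 0b111111  # rotate left

def U_inv(x):        return ((x >> 1) | (x << 5)) & 0b111111  # rotate right
def T_inv(x, alpha): return x ^ (alpha & MASK_T)   # XOR is self-inverse
def S_inv(x):        return x ^ MASK_S

a = 0b000111                     # stand-in for the alpha input
for sym in range(64):            # decoder undoes the maps in reverse order
    assert S_inv(T_inv(U_inv(U(T(S(sym), a))), a)) == sym
```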
  • As shown in Figure 12, the receiver line decoder 250 receives the line encoded vector codeword R' and decodes it to the received codeword R.
  • the received codeword R is then input to the inverse U transform generator 361 of the error-locating decoder 360.
  • the inverse U transform generator 361 applies the inverse of the U transform applied by the U transform generator 325.
  • the inverse U transform generator converts the received codeword R to the intermediate form T^-1, which is input to the inverse T transform generator 362.
  • the inverse T transform generator applies the inverse of the T transform applied by the T transform generator 324.
  • a second input to the inverse T transform generator comes from the inverse-alpha input 367.
  • the inverse T transform generator applies α^-1 to the T^-1 signal output from the inverse U transform generator, and outputs an S^-1 signal to the inverse S transform generator 363.
  • the inverse S transform generator applies the inverse transform of the transform applied by the S transform generator 323.
  • the inverse S transform generator outputs the modified received error-locating code portion N* to the error-locating decoder 364, while the delay circuit 365 outputs the received codeword portion R*.
  • the error- locating decoder 364 uses its code portion N* to locate the error positions P.
  • the error-locating decoder 364 outputs the error positions P to the erasures-errors decoder 366.
  • the delay circuit 365 outputs the received codeword portions R* to the erasures- errors decoder 366.
  • the erasures-errors decoder 366 uses the received codeword portions R* to generate the error magnitudes for the positions P indicated by the error-locating decoder 364. Since the erasures-errors decoder 366 does not need to generate the error-location polynomial L, nor need to factor the error-location polynomial L to determine the error positions P, the most time consuming, computationally heavy, and hardware intensive portions of the erasures-errors decoder 366 can be deleted. Also, the error-locating decoder produces its estimate of the error positions P which are output to the binary data sink 270.
  • the erasures-errors decoder 366 uses the error positions P and the error magnitude information to recreate, at high speed and with high accuracy, an estimate I* of the original information signal I, which is output to the binary data stream sink 270. Since the error-locating code portion N* indicates essentially all of the errors occurring in the modified received codeword R*, it is not necessary to generate or factor the error location polynomial L. Thus, the erasures-errors decoder 366 needs merely to determine the error magnitudes at the error positions P.
  • the erasures-errors decoder 366 also uses the second level received codeword R* to locate a few additional errors.
  • the error-locating decoder 364 can be simply modified so that the erasures-errors decoder 366 will only try to locate at most a few errors. For example, in one embodiment, suppose that up to one additional error is to be located in the erasures-errors decoder which can correct up to E erasures.
  • This embodiment is thus an erasures bounded-single error decoder, which can be optimized to operate extremely fast. The error-locating code will produce either E-2 or E erasures. So, if the error-locating code finds P ≤ E-2 definite error locations, it will produce E-2-P arbitrary error positions, so that the erasures-errors decoder will have E-2 erasure positions and will only find up to one additional error.
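The padding rule described above, in which the error-locating code supplies E-2-P arbitrary positions so the erasures bounded-single-error decoder always sees E-2 erasures, can be sketched as follows (the function name and the choice of padding positions are hypothetical):

```python
def pad_erasure_list(found_positions, E, all_positions):
    # the error-locating code found P definite error locations; pad with
    # arbitrary extra positions up to E-2 total, so the erasures
    # bounded-single-error decoder locates at most one additional error
    target = E - 2
    padding = [p for p in all_positions if p not in found_positions]
    return found_positions + padding[:max(0, target - len(found_positions))]

erasures = pad_erasure_list([3, 7], E=6, all_positions=list(range(10)))
assert len(erasures) == 4 and erasures[:2] == [3, 7]
```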
  • the 2t syndromes sM can be used to directly generate and factor the error locator polynomial L. The need to correct only a few errors results in simple algorithms, in contrast to the generally complex algorithms required to locate many errors. In addition, the additional locations for which error magnitude calculations are required do not add significant overhead. Accordingly, the error locating decoder 360 has the error locating speed of a small Hamming distance code with the error correction capabilities of slower, large Hamming distance codes.
  • Figures 13 and 14 show a second preferred embodiment of the error locating encoder 420 and the error locating decoder 460.
  • the S, T and U transform generators 323, 324 and 325 can be replaced by encoding the S, T and U transforms into a read only memory (ROM) 423.
  • the inverse transform generators 361, 362 and 363 can be replaced by encoding the S^-1, T^-1 and U^-1 transforms into a ROM 461.
  • the first and second preferred embodiments of the error locating encoders 320 and 420 and the error locating decoders 360 and 460 described above provide for increased transmission speeds and decreased computational loads while maintaining essentially equivalent bit error rates on the following basis.
  • MBM: multilevel block modulation.
  • For set partitioning to work, bit error likelihoods must be organizable such that the error likelihoods decrease in order from the most significant bit to the least significant bit. Then each level of the MBM code corresponds to a level of bit error. If a bit is more likely to be in error, the code corresponding to that bit has more error correction capability. Hence, the MBM code increases bit error rate performance by taking advantage of bit error likelihood information.
  • MBM codes divide the decoding effort into several identical smaller processes of producing the error locations and computing the error magnitudes. This invention essentially divides the decoding effort into two separate processes, one process specialized for locating the errors; and the second process specialized in determining the error magnitudes.
  • the preferred embodiments of this invention are set forth next. They use a null code, which has not heretofore been used, because a practical use for a code which conveys no data information was not previously known.
  • a null code is used as the first level code.
  • a null code has only one codeword, which comprises all zeros.
  • the advantage of a null code is that it can trivialize locating errors.
  • the null code does not need to produce and factor the error locator polynomial to identify the error locations. That is, the null code itself has all the information necessary to produce the factors and the error locator polynomial.
  • the two most expensive operations in terms of computational load and also in terms of hardware circuit requirements, generating the error locator polynomial and factoring the error locator polynomial, are not needed when using the null code.
  • the null code by itself is not sufficient to provide the needed bit error rate performance. Accordingly, in the preferred embodiments of this invention an extra process, the "set remapping" process, must be added, which conditions the error bit likelihoods to provide sufficient bit error rate performance.
  • the error locations identified by the null code will have to be used by other codes.
  • to exploit the error locations identified by the null code, it is necessary to use the other codes as erasures-only codes.
  • RS and BCH errors/erasures codes are used because, when encoding, these are the same codes as the errors-only codes.
  • the decoding of these types of codes for erasures-only is different from decoding for errors-only or for erasures-errors decoding.
  • Erasures-only decoding is the simplest type of decoding, as all errors are anticipated, such that no processing is needed for locating any further errors. Thus, to complete the decoding procedure, only the computation of the error magnitudes is necessary. Importantly, most of the information for computing the error magnitudes can be obtained in a relatively short time. Thus, compared to the conventional RS decoding, as shown in Figures 1-7, error location is obtained for free in terms of processing computation loads and transmission speeds.
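The claim above, that erasures-only decoding needs no error-locating work, can be illustrated with an RS code viewed as polynomial evaluation over a small prime field (GF(31) here purely for readable arithmetic; the patent works over GF(2^k)). With the erased positions known, the codeword is recovered by interpolating through any k intact positions, with no error locator polynomial generated or factored:

```python
P = 31   # small prime field GF(31), chosen only for readable arithmetic

def interp_eval(points, x):
    # Lagrange interpolation through the given points, evaluated at x (mod P)
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# RS as an evaluation code: a degree < k message polynomial evaluated at n points
k, n = 3, 7
msg = [5, 17, 2]                                  # m(x) = 5 + 17x + 2x^2
def m(x): return (msg[0] + msg[1] * x + msg[2] * x * x) % P
code = [m(x) for x in range(1, n + 1)]

# two erasures at known positions: no locator polynomial is generated or
# factored -- any k intact evaluations determine the codeword completely
erased = {2, 5}
good = [(x, y) for i, (x, y) in enumerate(zip(range(1, n + 1), code)) if i not in erased]
recovered = [interp_eval(good[:k], x) for x in range(1, n + 1)]
assert recovered == code
```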
  • the first level null code must be a sufficiently accurate indicator that a symbol has one of its bits in error. That is, the first level null code is an error locator code. If this first level code cannot reliably locate the errors, the other codes cannot be used to correct the errors. The reliability of the first level code's error location capability relies on the ability of the channel to yield errors. Thus, the error locating encoders and decoders are also dependent on the different encoding levels representing different bit reliabilities.
  • the error locating encoder/decoder of this invention requires only that the first level be highly likely to be in error, and is indifferent to the likelihood that any of the other levels are in error.
  • the only information needed about the bit error distributions for the error locating decoder is the conditional probability that the first level indicates an error given that the error symbol is non-zero, since if the error symbol is zero, it is not an error.
  • since the Euclidean distance information for the channel symbols is not needed for the error locating encoder/decoder, a more sophisticated code beyond the null code can be used if the Euclidean distance information is available.
  • the "set remapping" technique is used to recondition the symbols such that the bit error likelihood for the first level is high and the bit error likelihood of the rest of the levels is irrelevant.
  • Set remapping is an active process, done during encoding/decoding. The extra computational and hardware overhead for implementing set remapping is quite small.
  • set remapping techniques can be applied to other types of channels.
  • MBM codes cannot be used on binary symmetric channels (BSC) in which the bit error rates of each level are statistically independent of each other.
  • the extra error location processing is accomplished by the second level RS code. If the null code is able to provide ideal error position information, the second level code can be used only for error magnitude calculations. In the first and second embodiments, the second level code can also be used to locate extra errors. However, the error locating requirements of the second level code have been found to be modest. Thus, only a few extra (preferably, less than four) errors must be located by the second level code.
  • any one of the many variations of erasures bounded-single error correcting RS or BCH codes can be used. As indicated above, these codes are very fast because only a few errors are to be located.
  • Set remapping is an algebraic process which utilizes linear maps (vector homomorphisms).
  • the set remapping technique treats the data channel as a binary channel such that the bit error patterns produced by the data channel are not necessarily independent.
  • the set remapping technique is appropriate for any binary data channel whose bit errors may or may not be independent.
  • the set remapping technique depends only on the fact that not all error patterns are equally likely.
  • For example, consider a concatenated coding scheme, i.e., a code which comprises two codes used in tandem.
  • the advantage of concatenated codes is that they have good bit error rates for low decoding complexities.
  • One common known concatenated code uses an RS outer code with a trellis code utilizing the Viterbi algorithm as the inner code.
  • the null code with set remapping would replace the RS outer code.
  • the binary data channel can be treated as including the inner trellis code as part of the data channel. Trellis codes which are used in this manner are designed to produce patterned error symbols.
  • the set remapping technique identifies the most common error patterns and ignores the least common error patterns.
  • Set remapping may thus be viewed as reshaping the error patterns.
  • the original codeword C is unaltered after having been mapped and unmapped by the set remapping transforms.
  • the set remapping technique consists of applying linear maps to generate the transmitted codewords and to process the received codewords.
  • the maps S and S^-1 map the MBM code symbols to or from, respectively, the finite field symbols.
  • the maps T and T^-1 remap the finite field symbols to or from, respectively, themselves.
  • the maps U and U^-1, which are not necessarily linear, map the finite field symbols to or from, respectively, another set of binary vector symbols.
  • the maps S, T and U, or S^-1, T^-1 and U^-1, can be implemented together on a single ROM.
  • the following is one preferred form of the set remapping technique. It should be appreciated that there are an unlimited number of forms which the set remapping technique can take. Assume there are k bits in a symbol instead of m. The binary vector space is partitioned into two subspaces V and O, where V is the set of even Hamming weight vectors and O is the set of odd Hamming weight vectors. Assuming that good estimates of the frequency of error for each type of binary vector symbol error are available, the number of times each error symbol occurs in X observations of errors is counted. The frequency for an error symbol is the number of times it occurs divided by X.
  • the sets V and O are each divided into two equally sized sets, where Vi contains the least frequent even Hamming weight error vectors and V0 contains the most frequent error vectors.
  • Oi contains the most frequent odd Hamming weight error vectors and O0 contains the least frequent vectors.
  • any symbol in V0 has a higher occurrence of error than any symbol in Vi.
  • any symbol in Oi has a higher occurrence of error than any symbol in O0.
  • let the finite field for the RS code be GF(2^k) for some k>0, so that V has dimension k.
  • Kc is a kernel of C and is the complement.
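The Hamming-weight partition and frequency-based split described above can be sketched as follows. The 3-bit symbol size and the error-frequency counts are invented for illustration; in practice the frequencies would come from the X channel observations mentioned above:

```python
from collections import Counter

k = 3                                    # bits per symbol (illustrative)
def wt(v): return bin(v).count("1")      # Hamming weight

V = [v for v in range(2 ** k) if wt(v) % 2 == 0]   # even-weight vectors
O = [v for v in range(2 ** k) if wt(v) % 2 == 1]   # odd-weight vectors

# hypothetical error-symbol counts from X channel observations
freq = Counter({0b000: 40, 0b011: 25, 0b101: 10, 0b110: 5,
                0b001: 30, 0b010: 20, 0b100: 8, 0b111: 2})

def split_by_frequency(symbols):
    # divide a set into two equally sized halves by observed error frequency
    ordered = sorted(symbols, key=lambda v: freq[v], reverse=True)
    half = len(ordered) // 2
    return ordered[:half], ordered[half:]

V0, Vi = split_by_frequency(V)   # V0: most frequent even-weight error patterns
Oi, O0 = split_by_frequency(O)   # Oi: most frequent odd-weight error patterns
assert min(freq[v] for v in V0) >= max(freq[v] for v in Vi)
assert min(freq[v] for v in Oi) >= max(freq[v] for v in O0)
```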
  • Figures 15, 16 and 17 show symbolic representations of the transmitted code symbols and the received code symbols, the error locating encoders and decoders, and the second level encoders and erasures- errors decoders.
  • the received code symbols have already been remapped using the set remapping technique described above.
  • Figure 15 shows a generalized form of the transmitted code symbol coefficients and the received code symbol coefficients.
  • the coefficients a 1 -a m are generated by the error locating code encoder 121 while the coefficients b 1 -b n are generated by the RS encoder 122.
  • the coefficients are set remapped by the ROM 423 and transmitted to the data channel 350.
  • the data channel 350 includes the line encoder and the line decoder, and thus acts as a binary data channel where the error source 280 adds errors.
  • the coefficients are inverse set remapped by the ROM 461 to generate the error correcting code coefficients a' 1 -a' m and the modified received codeword coefficients b' 1 -b' n .
  • the error correcting code coefficients are input to the error locating decoder 162 to generate the error positions P. Simultaneously, the received modified codewords are input to the delay 161 and then into the RS erasures-errors decoder 163. The error positions P are also input from the error locating decoder 162 to the RS erasures-errors decoder 163 to finally decode the received modified codeword and generate the original information signal I. Additionally, the error-locating decoder 162 generates the information signal I'.
  • Figure 16 shows a one-bit null-code embodiment 560 of the multi-bit error-locating encoding/decoding system 120/160 of Figure 15.
  • since the null code is used, no error locating code encoder or decoder is necessary. Rather, the 1-bit null code is immediately appended to the bits of each codeword symbol, and used directly to indicate if the corresponding codeword symbol has an error.
  • Figure 17 shows a multi-bit null-code embodiment 660 of the multi-bit error-locating encoding/decoding system 120/160 of Figure 15.
  • a logic circuit 662 is used to decode the received null-code.
  • the null code consists of a single bit.
  • the single bit will indicate either that an error has or has not occurred somewhere within the received modified codeword.
  • the received modified codeword can be directly decoded by the RS erasures-errors decoder 562 to generate the information signal I.
  • if the 1-bit null code is equal to 1, indicating that an error has occurred in the received modified codeword, the standard RS erasures-errors decoding process must be performed by the decoder 562.
  • the null code contains m bits.
  • the received null code is then input to a multiple input OR gate 662 to generate the error positions for the RS erasures-errors decoders 663.
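The multiple-input OR gate 662 reduces each m-bit received null code to a single erasure flag per symbol; a one-line model:

```python
def erasure_flags(null_codes):
    # model of the multiple-input OR gate 662: any nonzero bit in a received
    # m-bit null code marks the corresponding symbol position as an erasure
    return [int(any(bits)) for bits in null_codes]

# four received null codes; symbols 1 and 3 arrive with nonzero null bits
assert erasure_flags([[0, 0, 0], [0, 1, 0], [0, 0, 0], [1, 1, 0]]) == [0, 1, 0, 1]
```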
  • Figures 18 and 19 show block diagrams for an erasures-errors decoder 460 and an erasures-only decoder 460' which incorporate the ROM storing the inverse map transform in place of the independent inverse transform generators shown in Figure 12.
  • Figure 20 shows the generalized process for generating, transmitting and decoding the transmission signal using the multilevel encoding system of this invention.
  • the information signal I received from the data source is first converted, using known codes such as RS or BCH codes, to a second level encoded signal in step S32.
  • in step S32, when using, for example, an RS code as the second level code, the information signal I is broken up into a number of blocks of k message characters. Each block of k message characters is then converted to one information polynomial I(x). The check polynomial P(x) is then added to the information polynomial I(x) to generate the codeword C.
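The encoding of step S32, adding a check polynomial to the information polynomial I(x), can be sketched as systematic RS encoding over a small prime field. GF(101) with primitive element 2 and 2t = 4 check symbols are illustrative choices for readable arithmetic; the patent's implementation works over GF(2^k):

```python
P, ALPHA, T2 = 101, 2, 4   # GF(101); 2 is a primitive root; 2t = 4 check symbols

def poly_mul(a, b):        # polynomial product, coefficients low-to-high, mod P
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_eval(poly, x):    # Horner evaluation mod P
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % P
    return acc

# generator polynomial g(x) with roots alpha^1 .. alpha^2t
g = [1]
for j in range(1, T2 + 1):
    g = poly_mul(g, [(-pow(ALPHA, j, P)) % P, 1])

def encode(msg):
    # C(x) = I(x)*x^2t - (I(x)*x^2t mod g(x)): the check polynomial is the
    # negated remainder, carried alongside the k message characters
    b = [0] * T2 + list(msg)
    for i in range(len(b) - 1, T2 - 1, -1):    # divide in place (g is monic)
        f = b[i]
        for j, c in enumerate(g):
            b[i - T2 + j] = (b[i - T2 + j] - f * c) % P
    return [(-r) % P for r in b[:T2]] + list(msg)

c = encode([7, 0, 42, 9, 1, 63])
# every generator root is a root of the codeword polynomial
assert all(poly_eval(c, pow(ALPHA, j, P)) == 0 for j in range(1, T2 + 1))
```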
  • the first level error correcting code is appended to the encoded codeword C.
  • the first level error correcting code is appended independently to each one of the M symbols of the encoded codeword C.
  • the bits of the error correcting or null codes are appended to the n bits of each of the M symbols of the encoded codeword C.
  • in step S36, the transmission signal comprising the combined first level error correcting code and second level codeword C is line encoded.
  • the line encoded transmission signal is output, in step S38, to the data channel.
  • the remapping subsystem comprising the S transform generator 323, the T transform generator 324 and the U transform generator 325 can be modeled as part of the data channel 350.
  • the inverse remapping subsystem can also be modeled as part of the data channel 350.
  • the remapping subsystem and the inverse remapping subsystem can be implemented in a ROM.
  • in step S40, the received signal output by the data channel is input to the decoding system, and the line decoder of the decoding system line decodes the received signal.
  • the decoding process inversely remaps the line encoded received signal, remapping on a symbol-by-symbol basis each of the M codewords of each received signal R.
  • the inverse U transform generator 361, the inverse T transform generator 362 and the inverse S transform generator 363 can be embodied in a ROM and modeled as part of the data channel 350 as shown in Figures 15-17.
  • the first level error correcting code of each of the M symbols of each received codeword R is separated from the corresponding symbol and checked to determine if the corresponding symbol contains an error. It should be appreciated that if none of the M symbols of the received codeword R contain any errors, the error correcting processing of the received codeword R can be skipped and the received codeword R directly decoded.
  • if an error is found in step S42, control flows to step S44, where the error magnitude for each of the error containing symbols is determined. Once the error magnitude for each such symbol is determined, the received codeword R can be corrected by subtracting out the determined error magnitude for each error containing symbol.
  • in step S46, the correct (if the received codeword R had no errors) or corrected received codeword is converted from the second level encoded signal back into a portion of the information signal. It should be appreciated that each inversely converted received codeword R provides k message characters of the information signal. Once all of the received codewords R are checked, corrected, and converted from the second level encoded signal to form the information signal, the information signal I is output to the data sink. Control then flows to step S48, where the process stops.
  • Figure 21 shows the decoding process in greater detail.
  • the decoding process shown in Figure 21 is an erasures-only process.
  • the received codeword signal R is input from the data channel and inversely remapped by passing the received codeword signal R through a ROM.
  • the received codeword R thus contains M symbols each having a transmitted null code appended to it.
  • in step S54, the null codes for each of the M symbols of the received codeword R are checked to determine if any of the null codes are non-zero.
  • in step S56, if all of the null codes for the M symbols are zero, control flows to step S58, where the received codeword R is decoded and the k message characters of the information signal I are output.
  • otherwise, control flows from step S56 to step S60.
  • in step S60, the 2t syndromes sM are generated from the M symbols of the received codeword R.
  • in step S62, the error locator polynomial L is generated from the non-zero null codes for the M symbols of the received codeword R and the M symbols.
  • in step S68, the error positions P are noted from the non-zero null codes of the M symbols of the received codeword R.
  • the 2t syndromes generated in step S60 and the error locator polynomial L generated in step S62 are then used to generate the error evaluator polynomial H in step S64.
  • once the error evaluator polynomial H is generated in step S64, it is evaluated in step S70 for each error position P noted in step S68.
  • in step S72, the first derivative L' of the error locator polynomial L is generated for each of the error positions P noted in step S68.
  • in step S74, the error magnitudes J for the error containing symbols of the received codeword R are generated from the evaluated error polynomial values H(P) from step S70, the error positions P noted in step S68, and the first derivatives L'(P) generated in step S72. The error magnitudes J generated in step S74 are then used to correct the received codeword R in step S76. Once the received codeword R is corrected, it is decoded and the k message symbols encoded by the received codeword R are output. This process is then repeated for each received codeword R, and the blocks of k message symbols are combined to form the information signal I. Once the full information signal I has been transmitted, it is output in step S76 to the data sink, and control continues to step S78, where the process stops.
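The erasures-only flow of steps S60-S76 can be modeled end to end in a few dozen lines. The sketch below works over GF(101) rather than the patent's GF(2^k) so the arithmetic is ordinary modular arithmetic; the step numbers in the comments map each stage of Figure 21 onto the code. Because the error positions come straight from the null code, the locator polynomial is built directly from those positions rather than generated and factored:

```python
P, ALPHA, T2 = 101, 2, 4   # GF(101); 2 is a primitive root; 2t = 4 check symbols

def poly_mul(a, b):        # polynomial product, coefficients low-to-high, mod P
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_eval(poly, x):    # Horner evaluation mod P
    acc = 0
    for coef in reversed(poly):
        acc = (acc * x + coef) % P
    return acc

def inv(x): return pow(x, P - 2, P)    # Fermat inverse in GF(P)

# systematic encoder (step S32) used only to build a test codeword
g = [1]
for j in range(1, T2 + 1):
    g = poly_mul(g, [(-pow(ALPHA, j, P)) % P, 1])

def encode(msg):
    b = [0] * T2 + list(msg)
    for i in range(len(b) - 1, T2 - 1, -1):    # divide in place (g is monic)
        f = b[i]
        for j, coef in enumerate(g):
            b[i - T2 + j] = (b[i - T2 + j] - f * coef) % P
    return [(-r) % P for r in b[:T2]] + list(msg)

def decode_erasures(r, positions):
    # S60: syndromes s_j = R(alpha^j), j = 1 .. 2t
    synd = [poly_eval(r, pow(ALPHA, j, P)) for j in range(1, T2 + 1)]
    # S62/S68: locator L(x) built directly from the null-code positions
    X = [pow(ALPHA, p, P) for p in positions]
    L = [1]
    for Xi in X:
        L = poly_mul(L, [1, (-Xi) % P])
    # S64: error evaluator H(x) = S(x) L(x) mod x^2t
    H = poly_mul(synd, L)[:T2]
    # S72: formal derivative L'(x)
    Ld = [(i * coef) % P for i, coef in enumerate(L)][1:]
    # S70/S74/S76: Forney magnitudes J and correction of R
    out = list(r)
    for pos, Xi in zip(positions, X):
        Xq = inv(Xi)
        mag = (-poly_eval(H, Xq) * inv(poly_eval(Ld, Xq))) % P
        out[pos] = (out[pos] - mag) % P
    return out

c = encode([7, 0, 42, 9, 1, 63])
r = list(c)
r[1] = (r[1] + 7) % P      # two symbol errors at positions flagged as erasures
r[6] = (r[6] + 50) % P
assert decode_erasures(r, [1, 6]) == c
```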
  • Figure 22 shows the errors-erasures correcting process. As described above, additional correction can be obtained beyond the erasures indicated by the first level error correcting code, by using the high speed single-, double-, and triple-correcting decoder available for decoding RS codes when high probability can be assigned to a received codeword that the number of remaining unidentified errors is three or less.
  • steps S80-S110 correspond to steps S50-S78 in Figure 21, except for the addition of step S98. It should also be appreciated that steps S70-S74 of Figure 21 have been replaced by steps S102-S106 of Figure 22, which operate on a larger set of error locations than steps S70-S74.
  • in step S98, the errors-erasures decoder is used to produce the augmented error locator polynomial L and the augmented error evaluator polynomial H.
  • the augmented error locator polynomial L locates additional errors beyond those located by the null code
  • the augmented error evaluator polynomial H includes not only the error positions noted in step S96 from the null code, but also the o-t additional errors located by the errors-erasures decoder.
  • the fast Euclid's Algorithm calculation circuit includes an erasures counter, which counts down from 2t once for each erasure that is located. Then, if the counter has not reached zero, additional errors are corrected, with each additional corrected error causing the counter to count down twice. The additional error correction continues until the counter has counted below 2. That is, if only 1 count remains on the counter, there is not enough error correction capability left to correct one more error.
  • the number of additional errors that can be corrected for depends on the number of located erasures, and can range from 0 to t additional errors.
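The countdown rule for the erasures counter described above reduces to a small arithmetic check (the function name is hypothetical):

```python
def remaining_error_budget(two_t, erasure_count):
    # the erasures counter starts at 2t and counts down once per located
    # erasure; each additionally corrected error then costs two counts, and
    # correction stops once fewer than 2 counts remain
    counter = two_t - erasure_count
    extra = 0
    while counter >= 2:
        counter -= 2
        extra += 1
    return extra

assert remaining_error_budget(8, 3) == 2   # 5 counts left -> 2 extra errors
assert remaining_error_budget(8, 8) == 0   # budget exhausted by erasures
```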
  • in step S98, the augmented error locator polynomial L is factored to produce the o-t additional error positions, which are added to the erasure positions P to get the full set of error positions P.
  • in step S102, the first derivative L' of the augmented error locator polynomial L is evaluated for the full set of error positions P.
  • in step S104, the augmented error evaluator polynomial H is evaluated for the full set of error positions P.
  • the error magnitudes J are determined in step S106 from the full set of error positions P determined from the errors-erasures decoder, the first derivatives of the augmented error locator polynomial L'(P), and the values for the evaluated augmented error evaluator polynomial H(P).
  • the determined magnitudes J are used to correct the received codeword R in step S108. Once the received codeword R is corrected, it is decoded to generate the k message characters encoded by the received codeword R.
  • in step S108 of Figure 22, once all of the received codewords R are received, corrected and decoded, the blocks of k message symbols are combined to form the information signal I, which is then output to the data sink. Then, control continues to step S110, where the process stops.
  • the encoders and decoders shown in Figs. 9-19, and especially Figs. 13 and 14, are formed on a semiconductor chip using VLSI circuit integrating techniques, as shown in Figures 23 and 24.
  • the multibit null code encoding/decoding system 700 comprises a multibit null code encoding system 710 and a multibit null code decoding system 750.
  • the null code decoding system 750 is an erasures-errors decoder.
  • the null code encoder 710 comprises a conventional RS encoder 720 and a ROM 730.
  • the RS encoder 720 encodes blocks of message characters into 2 k codeword symbols of k bits each and outputs the codeword symbols of a block one at a time to the ROM 730.
  • the ROM 730 inputs the k codeword bits of the next symbol and the j null code bits and remaps them.
  • the ROM 730 then outputs the remapped codeword symbol to the data channel 740.
  • the data channel 740 includes the line encoders and line decoders, and thus acts as a binary data channel.
  • the binary data channel 740 outputs the transmitted codeword symbol to the ROM 760 of the null code decoder 750.
  • the ROM 760 inversely remaps the received codeword symbol to separate the null code portion from the RS codeword symbol portion.
  • the j bits of the null code are input to a j-bit OR gate 770.
  • the k RS codeword bits of the next symbol output from the ROM 760 and the single bit from the OR gate 770 are then input to the single level RS erasures-error decoder 190', which is generally similar to the conventional decoder 190 of Figure 6B.
  • the output of the OR gate 770 is input to the ⁇ k generation circuit 1981, while the k bits from the ROM are input to the syndrome computation circuit 191 and the delay circuit 192.
  • the decoder 190 implements the fast error correcting systems for correcting only a few additional errors.
  • the null code decoder 750 is an erasures-only decoder.
  • the modified Euclid's algorithm computation circuit 193 and the first polynomial expansion circuit 1982 are deleted from the null code decoder 750 of Figure 23.
  • the output from the syndrome computation circuit 191 is connected only to the second delay circuit 194.
  • both the second delay circuit 194 and the second polynomial expansion circuit 1983 are connected directly to the errata transform computation circuit 195. In this case, while the decoder determines the error magnitudes for the located error positions, no additional error locations are located or corrected.
  • the decoder 750 shown in Figure 24 may also include a resend request system.
  • the resend request system generates a resend signal, which is sent back through the data channel 740 to the encoder 710 when more erasures have occurred than are correctable by the decoder.
  • the decoder 750 can correct for 2t erasures.
  • the OR gate 770 is connected to a counter 772. The counter counts each time the OR gate outputs a "1", which indicates an error-containing symbol of the current codeword.
  • the counter 772 is connected to a comparator 774, which is also connected to a preset number 778, which is set to the maximum number of correctable errors 2t. If, during a current codeword, the counter exceeds the preset number 778, the comparator outputs an overflow signal to the resend request generator 776.
  • the resend request generator 776 resets the counter 772, causes the current decoded codeword to be purged from memory, and sends a resend request signal back to the encoder, in order to have the current codeword resent from the beginning.
  • the counter 772 is reset before the next codeword is received.
  • FIG 25 shows another embodiment of the multilevel code encoding/decoding system 800, which uses a novel code.
  • This novel code is a Multilevel Reed-Solomon (MRS) Code.
  • the MRS code uses a frequency domain, rather than time domain, encoding of multiple levels. In the finite field frequency domain, vector homomorphic maps will map the MRS code into different level codes. As a simple example, if the MRS code is encoded with the binary trace in mind, then applying the binary trace to the MRS code will result in a single bit null code. Hence, the binary trace acts as the S1 map in the embodiments described above, where information is extracted from the T-1 map. However, there are slight differences which must be appreciated.
  • FIG. 25 depicts the use of the MRS code, in place of the K-bit null code, in the K-bit null code embodiment.
  • a binary data stream inputs information bits into the MRS encoder 810.
  • the outputs of the MRS encoder 810 are finite field codeword symbols of dimension K+J.
  • the first and second level codes are encompassed by the MRS encoder 810.
  • These codewords are mapped by the T map 820 and the U map 830, and then put through the combined line encoder, binary data channel, and line decoder 350.
  • after the line decoder 350 come the U-1 inverse map 840 and then the T-1 inverse map 850. Note that the output of the T-1 map is a K+J bit finite field symbol.
  • the Multilevel Reed Solomon (MRS) code is a frequency domain implementation of MBM codes, so that conventional MBM codes can be considered as time domain MBM codes.
  • Frequency domain encoding/decoding uses the Galois field transform equivalence.
  • the additive linear map M.
  • the map M is based upon vector homomorphisms that map elements in the Galois Field GF(p q ) into itself; note that p is assumed to be a prime number.
  • Equation (15) can also be constrained to zero for consecutive components i, which would require satisfying a system of equations.
  • MRS codes are designed by choosing several vector homomorphisms, and for each homomorphism, a set of consecutive frequency components are chosen to constrain this equation to zero.
  • a code exists for each level of mapping as defined by the chosen vector homomorphisms.
  • equation (15) is a further relaxation on the frequency component constraints.
  • in BCH and RS codes, these frequency components must satisfy the constraint that they equal zero.
  • in MRS codes, these frequency components instead satisfy a more flexible system of equations.
  • MRS codes are a further generalization of BCH and RS codes.
  • the mapped code is the one bit null code.
  • the resulting encoded codeword is a two level MBM code with a null code for the first level and a single erasures correcting code in the second level.
  • the second level erasures-errors correcting code can also be the conventional multilevel code shown in Figs. 2 and 3. That is, multilevel erasures-errors decoders can be used in place of the second level erasures-errors decoder.
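The j-bit null code data path of Figs. 23 and 24 can be sketched in miniature. In this hypothetical Python model, simple bit concatenation stands in for the finite-field remap performed by the ROMs 730 and 760, and the OR gate 770, counter 772, comparator 774 and resend request generator 776 become a few lines of code; the parameters J and T are illustrative assumptions, not values from the patent.

```python
J = 3          # null-code bits appended per symbol (assumed width)
T = 1          # second-level code corrects up to 2t = 2 erasures

def encode_symbol(k_bits):
    """Append j zero null-code bits to each k-bit codeword symbol."""
    return k_bits + [0] * J

def decode_symbol(sym):
    """Split off the null-code bits; OR them into an erasure flag (gate 770)."""
    erasure = int(any(sym[-J:]))
    return sym[:-J], erasure

def receive_codeword(received):
    """Count flagged erasures (counter 772); request a resend when the
    count exceeds the preset 2t (comparator 774 vs. preset number 778)."""
    count, flags = 0, []
    for sym in received:
        _, erasure = decode_symbol(sym)
        count += erasure
        flags.append(erasure)
    return flags, count > 2 * T        # resend request generator 776

codeword = [encode_symbol([1, 0, 1, 1]) for _ in range(7)]
codeword[2][5] ^= 1                    # one corrupted symbol: correctable
flags, resend = receive_codeword(codeword)
assert flags == [0, 0, 1, 0, 0, 0, 0] and not resend

for i in (0, 3, 6):                    # three more corrupted symbols: too many
    codeword[i][4] ^= 1
_, resend = receive_codeword(codeword)
assert resend
```

In the real system the remap ensures that channel errors disturb the null-code portion of a symbol; here an error is simply injected into the null bits directly to show the flagging and resend logic.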

Abstract

A high-speed, low computational load error correction code is disclosed comprising a first level error locating code such as a single or multiple bit null code and a second level error correction code such as a Reed-Solomon (RS) code. By using the null code to identify erasure locations, it is no longer necessary to use the second level RS code to both locate and correct the error positions. By eliminating the Euclid's algorithm circuit of conventional decoders, decoding speed is increased and complexity and cost are decreased. The transmission side of the system contains a multi-level encoder (220) comprising a null code generator (222) and a second level encoder (224). The reception side of the system contains a null code detector (261) and a second level decoder (263).

Description

METHOD AND SYSTEM FOR ENCODING AND DECODING
SIGNALS USING A FAST ALGEBRAIC ERROR CORRECTING CODE
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention is drawn to a method and a system for a high-speed error-correcting code encoder/decoder for encoding and decoding high-speed communication signals.
2. Related Art
There has always been a search for powerful and/or high-speed error-correcting codes. However, there is a tradeoff between the speed of the encoding/decoding process and the error correction capability or power of the code. That is, an increase in the error correction capability of a code generally results in an increase in the amount of computations necessary to correct for those errors. Since computational speeds are essentially fixed, increasing the computational loads for error detection and correction (decoding) reduces the communication speed.
Since known encoding of a communication signal is generally trivial to compute, the decoding process, rather than the encoding process, is the initial bottleneck in increasing the communication speed. The communications industry however demands both high speed and reliable communications.
One code which has been favored by the communications industry is the Reed-Solomon (RS) code, which is in the class of algebraic block codes. The RS code has found its way into high speed communications, due to Very Large Scale Integration (VLSI) hardware implementations of its encoding/decoding processes. However, today's even higher speed requirements (for example, the information super highway) have put a strain on the error correcting capabilities of RS codes by sacrificing reliability (error correction) for speed.
RS encoders and decoders operate by converting a message signal having m bits per message character into a transmission signal. The transmission signal, in general, has one block of k message characters per codeword. Each codeword comprises M symbols, where M = 2^m - 1. Of the M symbols per codeword, 2t symbols are check symbols, where t is the maximum number of error symbols that can be corrected, and k symbols are the information bearing symbols, where k = M - 2t.
The message signal is encoded into the transmission signal by converting k message characters into an information polynomial I, where I is equal to:

I(x) = C(2t)x^(2t) + C(2t+1)x^(2t+1) + ... + C(M-1)x^(M-1)   (1)

where the coefficients of the information polynomial I correspond to the information symbols. To form the transmission polynomial C, a check polynomial P must be added to the information polynomial I such that the resulting polynomial C = I + P becomes a multiple of the RS generator polynomial g, where P is equal to:

P(x) = C0 + C1x + ... + C(2t-1)x^(2t-1)   (2)
Thus, the coefficients of P are determined to ensure that C is a multiple of the generator polynomial g. The check polynomial P is therefore found by dividing the information polynomial I by the generator polynomial g: P is the negative of the remainder polynomial r from this division.
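The check-symbol computation just described can be made concrete with a toy sketch. The following Python works over the prime field GF(7) rather than the GF(2^m) fields of practical RS codes, with an assumed t = 1 generator built from a primitive element α = 3; all names and values are illustrative, not the patent's implementation.

```python
# Systematic RS-style encoding over a toy prime field GF(7):
# the check polynomial is the negative remainder of I(x) divided by g(x).
MOD = 7                              # prime field GF(7)

def poly_mod(num, den, p):
    """Remainder of polynomial division over GF(p); coeffs low-to-high."""
    num = list(num)
    dd = len(den) - 1
    inv_lead = pow(den[-1], -1, p)   # modular inverse (Python 3.8+)
    for i in range(len(num) - 1, dd - 1, -1):
        q = num[i] * inv_lead % p
        for j, d in enumerate(den):
            num[i - dd + j] = (num[i - dd + j] - q * d) % p
    return num[:dd]

def poly_eval(c, x, p):
    acc = 0
    for ci in reversed(c):           # Horner's rule
        acc = (acc * x + ci) % p
    return acc

# t = 1, so g(x) = (x - a)(x - a^2) with a = 3 primitive in GF(7):
g = [6, 2, 1]                        # g(x) = x^2 + 2x + 6
I = [0, 0, 1, 2, 3, 4]               # info symbols occupy degrees 2t .. M-1
r = poly_mod(I, g, MOD)              # remainder of I / g
P = [(-c) % MOD for c in r]          # check polynomial P = -r
C = P + I[len(P):]                   # transmitted codeword C = I + P
assert all(poly_eval(C, pow(3, j, MOD), MOD) == 0 for j in (1, 2))
```

The final assertion confirms that C is a multiple of g by checking that C vanishes at both roots of the generator.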
Implementing this process using VLSI technology is generally trivial, as indicated in "The VLSI Implementation of a Reed-Solomon Encoder Using Berlekamp's Bit-Serial Multiplier Algorithm" by I.S. Hsu et al., IEEE Transactions on Computers, Vol. C-33, No. 10, pp. 906-911, October 1984.
As the transmitted codeword C is transmitted, it will experience a variety of burst-type binary errors. These binary errors can be thought of as an error polynomial E of order M-1. Thus, during transmission, the error polynomial E additively combines with the transmitted codeword C to form the received signal R:

C(x) + E(x) = R(x)   (3)

where

E(x) = e0 + e1x + ... + e(M-1)x^(M-1)   (4)
Thus, to decode the received codeword R, the received codeword R must first be corrected by subtracting the error polynomial E to find the transmitted codeword C. Since, at the receiving end of the communications channel, both C and E are unknown, the decoding process is not trivial.
Figure 1 shows the conventional RS encoding and decoding system 100. As shown in Figure 1, a signal I to be transmitted is received from a binary data stream source 110 and input to a conventional encoder 120. The encoder 120 encodes the signal from the binary data stream source 110 and outputs the encoded codewords C to the line encoder 130, which outputs line encoded vector codewords C' to the data channel 140. Note that the actual transmission signal is almost always an analog signal. Hence, the line encoder converts the binary signals of the codewords C' into analog signals. While the codewords C' are being transmitted through the data channel 140 to the receiver line decoder 150, the transmitted codewords C' are converted to received codewords R' due to the interference caused by the error source 180, which causes errors E to be added to the transmitted codewords C'.
The receiver line decoder 150 receives the received line encoded vector codewords R' from the data channel 140 and converts them to received codewords R. In other words, the line decoder translates the received analog signals back to binary signals which are modeled as the sum of an error word and a codeword. The received codewords R are output to a conventional decoder 160 where they are decoded by deleting the errors E and then reconverting the derived information polynomial I' to the information signal I. The decoder 160 then outputs the original binary message I to the binary data stream sink 170.

Figures 2 and 3 show generalized components of the conventional encoder 120 and the conventional decoder 160. In general, the information signal I received from the binary data stream source 110 is input in parallel to Z data buffers 121, 123, 125 and 127, where Z is the number of levels of encoding to be applied to the information signal I to generate the codeword C. Each of the Z data buffers outputs the information signal to a corresponding one of the Z encoders 122, 124, 126 and 128. Each of the Z encoders generates a different level of encoding and outputs one of the 1st-Zth codewords C1-CZ to the transmitter line encoder 130 which combines them and outputs the line encoded codeword C'.
Likewise, as shown in Figure 3, the receiver line decoder 150 inputs the received line encoded signal R' and outputs the received codewords R1-RZ, respectively, to the 1st-Zth level decoders 161, 163, 165 and 167, respectively. After decoding the respective codewords, the Z decoders output the decoded information signal I to the Z data buffers 162, 164, 166 and 168, respectively. The 1st-Zth data buffers then output the respective stored information signals I1-IZ to the binary data stream sink, where they are combined to recreate the original information signal I.
However, the computational loads required by each of the Z encoding and decoding levels in the generalized encoder 120 and decoder 160 shown in Figures 2 and 3 are very low. Thus, while the error correction capability of this system of multiple level encoders and decoders is exceptional, the VLSI chip incorporating this conventional encoding/decoding system is very costly. Thus, in actual conventional implementations, the conventional encoder 120 and the conventional decoder 160 generally only use a single level of encoding and a single level of decoding. Accordingly, as shown in Figure 4, in the conventional single level encoder 120, the binary data stream source outputs the information signal I to a single level RS encoder 122. The single level RS encoder 122 outputs the codeword C to the transmitter line encoder 130 which outputs the line encoded transmitted codeword C'. Because only a single level of encoding/decoding is used, the single-level RS code must be used for both error location and error correction.
Likewise, as shown in Figure 5, the received line encoded codeword R' is input by the receiver line decoder 150, which outputs the received codeword R to the single level RS decoder 190. The single level RS decoder 190 decodes the received codeword R and outputs it directly to the binary data stream sink 170.
Depending upon the accuracy required, two types of single level RS decoders are conventionally used. A variety of RS VLSI decoders are set forth in the following references: "Architecture for VLSI Design of Reed-Solomon Decoders," K.Y. Liu, IEEE Transactions on Computers, Vol. C-33, No. 2, pp. 178-189, February 1984; "A Single Chip VLSI Reed-Solomon Decoder," H.N. Shao et al., ICASSP 86, Tokyo, pp. 2151-2153; "A Construction Method of High-Speed Decoders Using ROM's for Bose-Chaudhuri-Hocquenghem and Reed-Solomon Codes," H. Okano et al., IEEE Transactions on Computers, Vol. C-36, No. 10, pp. 1165-1171, October 1987; "A VLSI Design of a Pipeline Reed-Solomon Decoder," H.N. Shao et al., IEEE Transactions on Computers, Vol. C-34, No. 5, pp. 393-403, May 1985; "On the VLSI Design of a Pipeline Reed-Solomon Decoder Using Systolic Arrays," H.N. Shao et al., IEEE Transactions on Computers, Vol. 37, No. 10, pp. 1273-1280, October 1988.
As shown in Figure 6A, the single level RS decoder 190 shown in Figure 5 is an errors-only decoder. In Figure 6A, the received codeword R is input in parallel to a syndrome computation circuit 191 and a delay circuit 192. The syndrome computation circuit generates 2t syndromes s, which are combined to form the syndrome polynomial S from the received codeword R. The syndrome polynomial S is input in parallel to a modified Euclid's algorithm computation circuit 193 and a second delay circuit 194. The modified Euclid's algorithm computation circuit 193 generates the error locator polynomial σ. Then, the error locator polynomial σ and the syndrome polynomial S, delayed by the delay circuit 194, are input in parallel to the errata transform computation circuit 195. The errata transform computation circuit outputs M errata transforms ε. The M errata transforms ε are input to the inverse transform circuit 196 which generates the calculated or estimated error polynomial E'. Then, the adder 197 subtracts the estimated error polynomial E' from the received codeword R to generate the transmitted codeword C. It should be appreciated that in the errors-only decoder 190 shown in Figure 6A, the estimated error polynomial E' is only an estimate of the actual error polynomial. Accordingly, the transmitted codeword C output by the adder 197 is only an estimate of the actual transmitted codeword C.
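To make the syndrome stage concrete, a minimal sketch over an illustrative toy code (GF(7), primitive element 3, t = 1 — all assumptions for readability, not the patent's GF(2^m) implementation) shows that the syndromes vanish for a valid codeword and, for a single error, yield its position and magnitude directly:

```python
# Minimal errors-only decode for a toy GF(7) code with t = 1:
# s_j = R(alpha^j) are zero for a valid codeword; for one error,
# s1 = e*alpha^i and s2 = e*alpha^(2i) locate and size it.
MOD, ALPHA, T = 7, 3, 1

def poly_eval(c, x, p):
    acc = 0
    for ci in reversed(c):            # Horner's rule
        acc = (acc * x + ci) % p
    return acc

def syndromes(r):
    """Syndrome computation circuit 191: s_j = R(alpha^j), j = 1..2t."""
    return [poly_eval(r, pow(ALPHA, j, MOD), MOD) for j in range(1, 2 * T + 1)]

C = [1, 3, 1, 2, 3, 4]                # valid codeword: g has roots 3, 3^2
assert syndromes(C) == [0, 0]

R = C[:]
R[4] = (R[4] + 5) % MOD               # error of magnitude 5 at position 4
s1, s2 = syndromes(R)                 # nonzero syndromes reveal the error
X = s2 * pow(s1, -1, MOD) % MOD       # X = alpha^i locates the error
pos = [pow(ALPHA, k, MOD) for k in range(6)].index(X)
mag = s1 * pow(X, -1, MOD) % MOD      # error magnitude e
R[pos] = (R[pos] - mag) % MOD
assert (pos, mag) == (4, 5) and R == C
```

For more than one error this closed form no longer suffices, which is where the Euclid's algorithm and errata transform stages of Figure 6A come in.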
In the single level RS erasures-errors decoder shown in Figure 6B, an erasure correction circuit 198 is inserted between the syndrome computation circuit 191 and the modified Euclid's algorithm computation circuit 193.
As shown in Figure 6B, the erasure correction circuit 198 comprises an alpha generation circuit 1981. The erasure location information input to the alpha generation circuit 1981 is assumed to be derived from outside the decoder 190 and is an estimate of the most likely error locations. One possible way for deriving the erasure location information is from a convolution decoder. By estimating erasures, i.e. the most likely error locations, the error correction capability of the system can be increased without significantly decreasing the processing speed.
However, since the conventional RS decoder only estimates the most likely errors, the system must still decode the received codeword R as in the errors-only decoder. In addition, in the conventional system, since the erasures are only estimates of error locations, some of the indicated error locations might not be actual error locations. In this case, the processing involved with these locations will have been wasted.

The output from the alpha generation circuit 1981 is then input in parallel with the syndrome polynomial S output from the syndrome computation circuit 191 to the first polynomial expansion circuit 1982. The output from the alpha generation circuit 1981 is also input to the second polynomial expansion circuit 1983. The first polynomial expansion circuit 1982 outputs the error evaluator polynomial H, while the second polynomial expansion circuit 1983 outputs the error locator polynomial L.
The error evaluator polynomial H output by the first polynomial expansion circuit 1982 and the error locator polynomial L output by the second polynomial expansion circuit 1983 are input in parallel to the modified Euclid's algorithm computation circuit 193. The modified Euclid's algorithm computation circuit 193 outputs a different errata locator polynomial σ, which is input to the errata transform computation circuit 195 along with the syndrome polynomial S output from the delay circuit 194. From this point the single level RS erasures-errors decoder 190 operates as in the single level RS errors-only decoder 190 described above with reference to Figure 6A.
Figure 7 shows the conventional process for generating the corrected transmitted codeword C from the received codeword R using an errors-only decoder. The decoding process starts in step S30. In step S31 the received incoming code signals are input from the data channel 140 and are converted from vector form to standard form by the receiver line decoder 150. Next, in step S32, the 2t syndromes s are derived from the received code symbols, and the syndrome polynomial S is generated by combining the 2t syndromes.
In step S33, the syndrome polynomial S is used to generate the error locator polynomial L; if any erasure location data is available, it is also used with the syndrome polynomial S in this step. Next, in step S34, the syndrome polynomial S and the error locator polynomial L are used to compute the error evaluator polynomial H. However, if the error locator polynomial L is not available, such as in an errors-only decoder, the error evaluator polynomial H is computed from only the syndrome polynomial S. Next, in step S35, the error locator polynomial L is factored to locate its roots P. Then, in step S36, the error evaluator polynomial H is evaluated for each root P of the error locator polynomial L. At the same time, in step S37, the derivative L' of the error locator polynomial L is generated for each root P. Then, in step S38, the error magnitudes are generated from the roots P, the derivatives L' generated in step S37 and the evaluated error evaluator polynomial H generated in step S36.
Next, in step S39, the received code symbols from step S31 are corrected using the error magnitudes generated in step S38. Once the corrected received code symbols are available, the code symbols can be decoded to generate the original information signal. It should be appreciated that, due to the limited error correction capability of the conventional code, as described above, the corrected information signal output in step S40 is only an estimate of the original information signal I input to the encoder 120.
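When the error positions are already known and supplied as erasures, the magnitude computation of step S38 reduces to solving a small linear system over the field, with no locator polynomial to produce or factor. A sketch over the same illustrative GF(7) toy code (field, primitive element and codeword are assumptions for readability):

```python
# Erasures-only magnitude solve: given positions i_k, solve
# s_j = sum_k e_k * alpha^(j*i_k) by Gaussian elimination over GF(7).
MOD, ALPHA = 7, 3

def poly_eval(c, x, p):
    acc = 0
    for ci in reversed(c):            # Horner's rule
        acc = (acc * x + ci) % p
    return acc

def solve_erasures(r, positions):
    """Solve for the magnitudes e_k at the supplied erasure positions."""
    n = len(positions)
    b = [poly_eval(r, pow(ALPHA, j, MOD), MOD) for j in range(1, n + 1)]
    A = [[pow(ALPHA, (j + 1) * i, MOD) for i in positions] for j in range(n)]
    for col in range(n):              # Gaussian elimination mod 7
        piv = next(k for k in range(col, n) if A[k][col])
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = pow(A[col][col], -1, MOD)
        for k in range(n):
            if k != col and A[k][col]:
                f = A[k][col] * inv % MOD
                A[k] = [(a - f * c) % MOD for a, c in zip(A[k], A[col])]
                b[k] = (b[k] - f * b[col]) % MOD
    return [b[i] * pow(A[i][i], -1, MOD) % MOD for i in range(n)]

C = [1, 3, 1, 2, 3, 4]                # valid codeword of the toy code
R = C[:]
R[1] = (R[1] + 2) % MOD               # erasures flagged at positions 1 and 4
R[4] = (R[4] + 5) % MOD
mags = solve_erasures(R, [1, 4])
for i, e in zip([1, 4], mags):
    R[i] = (R[i] - e) % MOD
assert mags == [2, 5] and R == C
```

The system is a Vandermonde-type matrix in the distinct values alpha^(i_k), so it is always invertible as long as no more than 2t erasures are flagged.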
Thus, there is a need in the communication industry for error correcting encoders/decoders for high speed communication signals which provide sufficient error correcting capability and sufficiently high transmission speeds.
SUMMARY OF THE INVENTION
This invention satisfies this need by providing a novel method and system for high speed encoding and decoding of signals which eliminates (or reduces) the most complicated parts of the decoding algorithm: producing the error-locator polynomial and factoring the error-locator polynomial.
In a first preferred embodiment of the method according to this invention, an information signal to be transmitted is encoded using a known encoder, such as a Reed-Solomon encoder. Then, an error locating code is appended to each codeword output by the encoder. A first transform circuit remaps the codeword from the error locating code and the signal encoder to symbols in a second finite field. The output from the first transform circuit is then multiplied with an element of the second finite field by a second transform circuit. The second finite field used in the second transform circuit is different from the first finite field used by the signal encoder; in particular, it is one dimension higher than the first finite field. Then, the output of the second transform circuit is put through a third transform circuit using a lookup table embedded in a ROM. The output from the third transform circuit is then sent along a binary channel as the communication signal. It should be appreciated that, in this embodiment, the binary channel is defined as the combination of the line encoder, the analog data channel, and the line decoder. Hence, the communication signal is a binary signal stream.
The communication signal is received from the binary channel and decoded by a first inverse transform circuit using a second ROM. The second ROM embodies the inverse of the map embodied in the first ROM. Then the output from the second ROM is again remapped in a second inverse transform circuit, by multiplying with the corresponding inverse element of the second finite field, and converted back to a vector. Then, in a third inverse transform circuit, a third remapping, which is an inverse remapping from the first remapping of the encoding system, remaps the received signal into two portions: the error locating code and the encoded signal. The error locating code is decoded and used to correct errors in the received encoded signal, while the received encoded signal is used to find the magnitude of the error at each indicated error location. The located error positions and magnitudes are used to convert the received encoded signal into a corrected encoded signal, which should be the same as the original transmitted encoded signal. The corrected encoded signal is then decoded to form the original information signal.
In the first preferred embodiment, the error locating code of this invention is a single-bit null code. In a second preferred embodiment of the error correcting code of this invention, the error locating code is a multiple-bit null code. In a third preferred embodiment, a counter is added to count the number of errors detected by the error-locating code. This third preferred system is able to detect when too many errors have occurred in the block of received data, such that it is not possible to correctly decode the error magnitudes in the second level code. If this event is detected in a communication system where the receiver can communicate with the transmitter (e.g., automatic repeat request (ARQ) systems), then the receiver can request the transmitter to resend the last codeword block. In all three embodiments, the information signal is preferably encoded/decoded using conventional erasures-errors codes such as the RS code.
These and other features and advantages of the invention are described in or apparent from the following detailed description of the preferred embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
The preferred embodiments are described with reference to the drawings in which:
Figure 1 shows a conventional encoding-decoding system;
Figure 2 shows a generalized conventional encoding system;
Figure 3 shows a generalized conventional decoding system;
Figure 4 shows a conventional implementation of the encoding system of Figure 2;
Figure 5 shows a conventional implementation of the decoding system shown in Figure 3;
Figure 6A shows a conventional errors-only decoder;
Figure 6B shows a conventional erasures-errors decoder;
Figure 7 shows the conventional decoding process;
Figure 8 shows the encoding-decoding system of the present invention;
Figure 9 shows a generalized form of the null code encoder of the present invention;
Figure 10 shows a generalized form of the null code decoder of the present invention;
Figure 11 shows a first preferred embodiment of the encoding system of this invention;
Figure 12 shows a first preferred embodiment of the decoding system of this invention;
Figure 13 shows a second preferred embodiment of the encoding system of this invention;
Figure 14 shows a second preferred embodiment of the decoding system of this invention;
Figure 15 shows a symbolic representation of the generalized two-level error locating-error correcting encoding-decoding system;
Figure 16 shows a symbolic representation of a first preferred embodiment of the null-code used in the two-level null-code encoding-decoding system;
Figure 17 shows a symbolic representation of a second preferred embodiment of the null-code used in the two-level null-code encoding-decoding system;
Figure 18 shows a block diagram of the second preferred embodiment of the decoding system as an erasures-errors decoder;
Figure 19 shows a block diagram of the second preferred embodiment of the decoding system as an erasures-only decoder;
Figure 20 shows the encoding process for the generalized two-level error-locating, error-correcting encoding of this invention;
Figure 21 shows the process for erasures-only decoding of this invention;
Figure 22 shows the process for erasures-errors decoding of this invention;
Figure 23 shows a VLSI implementation of the multibit null code encoder and an erasures-errors decoder;
Figure 24 shows a VLSI implementation of the multibit null code erasures-only decoder;
Figure 25 is a Multilevel RS code encoder/decoder; and
Figure 26 shows the Multilevel RS encoder of Fig. 25.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In a Reed-Solomon code, each codeword contains M symbols. Each of the M symbols contains m bits. Thus, there are M = 2^m - 1 symbols in each codeword. Of these M symbols in each codeword, I symbols are message symbols and 2t symbols are error correction or check symbols, where M = I + 2t. Thus, as described above, the decoding process in conventional RS VLSI decoders is not trivial and forms the most significant bottleneck in creating high speed, high-reliability communication systems.
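As a worked check of these parameters, take byte symbols (m = 8, an assumed but common choice; the patent leaves m general) and t = 8:

```python
# Codeword parameter arithmetic for an illustrative m = 8, t = 8 RS code.
m, t = 8, 8
M = 2**m - 1            # symbols per codeword
check = 2 * t           # check symbols
info = M - check        # message symbols, since M = I + 2t
assert (M, check, info) == (255, 16, 239)
```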
Figure 8 shows a first preferred embodiment of the error correction encoding-decoding system of this invention. As shown in Figure 8, a binary data stream source 210 outputs an information signal I to a two-level error locating, erasure-errors correction code encoder 220. That is, the encoder 220 encodes the information signal I using two independent codes. The second-level, erasures-errors correcting code is used primarily to generate the error magnitudes once the error locations have been identified, although using the erasures-errors correcting code to correct an additional number of errors is also within the scope of this invention. In contrast, the first-level, error-locating code is used solely to obtain the error positions of the received signal R.
The encoder 220 converts the information signal I to a stream of encoded codewords C, which are output to the transmitter line encoder 230, which outputs the line encoded codewords C' to the data channel 240. As the codewords C' are transmitted through the data channel 240 to the receiver line decoder 250, errors from the error source 280 are additively imposed onto the transmitted codewords such that the received line encoded codewords R' differ from the transmitted codewords C' due to the errors.
The received line encoded codewords R' are then converted from line encoded form to standard form codewords R by the receiver line decoder 250, which outputs the standard received codewords R to the two-level error-locating, error-correction code decoder 260. The decoder 260 uses the two independent codes to generate the original information signal I at high speed from the stream of received codewords R. The information signal I is then output to the binary data stream sink 270.
In general, the decoder 260 first reorganizes the received codeword R to separate the error-locating portion N from the received codeword portion R*. Since the error-locating code is of a predefined form, any deviations from this predetermined form can be readily obtained and used to positively identify any error locations. Thus, the error locating code acts as the source of the erasure locator data, with the added benefit that every location indicated by the error-locating code positively identifies an erroneous codeword symbol.
That is, in the present invention, the error locating code N is repeatedly appended by the line encoder 230 to each codeword symbol CM of the transmitted codeword C as it is output from the second level encoder. Similarly, when the received signal R is decomposed into the error locating code portion N* and the received codeword portion R*, each received codeword portion R* corresponds to a single symbol of the transmitted codeword C.
Thus, the error-locating portion N* indicates whether any bits are in error in the corresponding received codeword portion. The received codeword portions R* are stored in a buffer until all of the M symbols of the transmitted codeword C are received. The erroneous codeword positions P indicated by the error locating code are also stored.
Once all of the symbols of the transmitted codewords are received and their corresponding error correcting codes are decoded, the received codeword symbols R* and the corresponding error positions P determined from the error locating code portion N* are input to a conventional decoder for determination of the error magnitudes, correction of the received codeword symbols R* into the estimated transmitted codeword C*, and decoding of the codeword C* into the estimated information signal I*, which, ideally, is the same as the original information signal I. In general, the estimated information signal I* is equivalent to the original information signal I in the two-level, erasures-errors decoders.
In the preferred embodiment of the error-locating code, the error-locating code is a true null code and comprises an all-zeros binary signal. However, it should be appreciated that the error-locating code is not necessarily a null code, but can be any desired more sophisticated code. However, any code having additional sophistication will require increased complexity in the decoder 260 to deal with that added sophistication. While a more sophisticated code will increase the amount of information being transmitted, it is also likely to significantly decrease the transmission speed of the transmission system 200.
The received codeword R* is now processed using conventional decoding systems to determine the magnitude of the errors for each of the error positions located by the null code. In addition, the received codeword R* can also be used to locate an additional number of errors. If only a few additional errors (less than four) are to be located and corrected, fast algorithms exist for locating them.
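As a concrete sketch of this decomposition, the received null-code portion N* and codeword portion R* can be separated symbol by symbol while the erroneous positions P are collected. The bit widths, the packing of the null-code bits as the high-order bits of each symbol, and the function name are illustrative assumptions, not taken from the text:

```python
def split_received(symbols, null_bits=1, data_bits=8):
    """Split each received symbol into its null-code portion N* and its
    data portion R*; a non-zero null portion flags an erasure position P.
    Bit widths and symbol packing are illustrative assumptions."""
    mask = (1 << data_bits) - 1
    data, positions = [], []
    for i, s in enumerate(symbols):
        n_star = (s >> data_bits) & ((1 << null_bits) - 1)  # received N*
        data.append(s & mask)                               # received R*
        if n_star != 0:          # transmitted null code was all zeros
            positions.append(i)  # so any non-zero N* marks an error
    return data, positions
```

The positions list is exactly the set of erasures handed to the second level decoder, so no error locator polynomial has to be generated or factored for these positions.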
As shown in Figure 9, the binary data stream source 210 inputs the information signal I to a second level data buffer 223 of the encoder 220. The information signal I is output by the second level data buffer 223 to a second level encoder 224, which uses any known conventional encoding scheme, such as RS encoding, to generate the encoded codeword C. At the same time, the null code generator 222 of the encoder 220 outputs the null code N. Both the null code N and the encoded codeword C are input to the transmitter line encoder 230, which combines the null code N and the codeword C into the line encoded vector codeword C'. Since the null code N is independent of the encoded codeword C, the form and content of the null code N can be preselected based upon the types of errors most likely to occur on the data channel 240 and upon the form of the codeword C. As described above, while any suitable error-locating code can be used, the preferred error-locating code is a null code.
As shown in Figure 10, when the line encoded received codeword vector R' is received from the data channel 240 by the receiver line decoder 250, the received vector codeword R' is converted back to the standard form received codeword R. The received codeword R is output by the receiver line decoder 250 to the null code decoder 261 and the second level data buffer 264 of the decoder 260. The null code decoder 261 decomposes the received signal R into a received null code portion N* and a received codeword portion R*. The received null code N* indicates if the received codeword portion R* contains an error. The error positions P indicating the erroneous received codewords R* and the received codeword portions R* are output from the null code decoder 261 and the second level buffer 264, respectively, to the second level decoder 263. The second level decoder 263 then uses the error positions P and the received codeword portions R* to determine the error magnitudes for each of the error positions P. The second level decoder uses the error positions P and the determined error magnitudes to generate the estimated codeword C*. The estimated codeword C* is then decoded to generate the estimated information signal I*. The decoder 260 then outputs the estimated information signal I* to the binary data stream sink 270. It should also be understood that the second level decoder can also be used to find a smaller number, preferably less than 4, of additional errors not indicated by the received null code portions N*. This generally ensures that the estimated information signal I* is identical to the original information signal I.
A first preferred embodiment of an error-locating encoder 320 corresponding to the encoder 220 of Figure 8 is shown in Figure 11. As shown in Figure 11, the binary data stream source 210 outputs the information signal I to an RS encoder 322. The RS encoder 322 converts the information signal I to the encoded codeword C and outputs it to the S transform generator 323. At the same time, the error-locating code generator 321 outputs the error locating code N to the S transform generator 323, along with the information signal I'. The S transform generator 323 combines and transforms these inputs into the S transform codeword S. The S transform codeword S is then input to the T transform generator 324. At the same time, the α input 326 is also input to the T transform generator 324. The T transform generator 324 combines the α input 326 and the S codeword to form the T transform codeword T. The T transform codeword T is input to the U transform generator 325, which converts the T transform codeword T into the U transform codeword U. The U transform codeword U is then input to the transmitter line encoder 230 and line encoded to the vector codeword U'.
As shown in Figure 12, the receiver line decoder 250 receives the line encoded vector codeword R' and decodes it to the received codeword R. The received codeword R is then input to the inverse U transform generator 361 of the error-locating decoder 360. The inverse U transform generator 361 applies the inverse of the U transform applied by the U transform generator 325. Thus, the inverse U transform generator converts the received codeword R to the intermediate form T-1, which is input to the inverse T transform generator 362. Like the inverse U transform generator 361, the inverse T transform generator applies the inverse of the T transform applied by the T transform generator 324. A second input to the inverse T transform generator comes from the inverse-alpha input 367. The inverse T transform generator applies α-1 to the T-1 signal output from the inverse U transform generator, and outputs an S-1 signal to the inverse S transform generator 363. As above, the inverse S transform generator 363 applies the inverse of the transform applied by the S transform generator 323.
Thus, the inverse S transform generator outputs the modified received error-locating code portion N* to the error-locating decoder 364, and the delay circuit 365 outputs the received codeword portion R*. The error-locating decoder 364 uses the error-locating code portion N* to locate the error positions P. The error-locating decoder 364 outputs the error positions P to the erasures-errors decoder 366. Simultaneously, the delay circuit 365 outputs the received codeword portions R* to the erasures-errors decoder 366.
The erasures-errors decoder 366 uses the received codeword portions R* to generate the error magnitudes for the positions P indicated by the error-locating decoder 364. Since the erasures-errors decoder 366 does not need to generate the error-location polynomial L, nor factor the error-location polynomial L to determine the error positions P, the most time consuming, computationally heavy, and hardware intensive portions of the erasures-errors decoder 366 can be deleted. The error-locating decoder also produces its estimate of the error positions P, which are output to the binary data sink 270. The erasures-errors decoder 366 uses the error positions P and the error magnitude information to recreate, at high speed and with high accuracy, an estimate I* of the original information signal I, which is output to the binary data stream sink 270. Since the error-locating code portion N* indicates essentially all of the errors occurring in the modified received codeword R*, it is not necessary to generate or factor the error location polynomial L. Thus, the erasures-errors decoder 366 needs merely to determine the error magnitudes at the error positions P.
In addition, in a first variation of the first preferred embodiment, the erasures-errors decoder 366 also uses the second level received codeword R* to locate a few additional errors. Many variations of single, double, and triple error correcting RS codes (or their superclass of Bose-Chaudhuri-Hocquenghem (BCH) codes) exist. These codes are very fast because only a few errors are to be located. The error-locating decoder 364 can be simply modified so that the erasures-errors decoder 366 will only try to locate at most a few errors. For example, in one embodiment, suppose that up to one additional error is to be located in an erasures-errors decoder which can correct up to E erasures. This embodiment is thus an erasures bounded-single error decoder, which can be optimized to operate extremely fast. The error-locating code will then produce either E-2 or E erasures. That is, if the error-locating code finds P<E-2 definite error locations, it will produce E-2-P arbitrary error positions, so that the erasures-errors decoder will have E-2 erasure positions and will only find up to one additional error. Thus, the 2t syndromes sM can be used to directly generate and factor the error locator polynomial L. The need to correct only a few errors results in simple algorithms, in contrast to the generally complex algorithms required to locate many errors. In addition, the additional locations for which error magnitude calculations are required do not add significant overhead. Accordingly, the error locating decoder 360 has the error locating speed of a small Hamming distance code with the error correction capabilities of slower, large Hamming distance codes.
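The padding of the definite error locations up to E-2 erasure positions described above can be sketched as follows. The function name and the choice of the lowest-numbered unflagged positions as the arbitrary padding positions are illustrative assumptions, not specified by the text:

```python
def pad_erasures(found, capacity_e, n):
    """Pad the definite error positions P up to E-2 erasure positions,
    so that an erasures bounded-single-error decoder with erasure
    capacity E retains enough capacity to locate one more error.
    The padding choice (lowest unflagged positions) is illustrative."""
    target = capacity_e - 2
    if len(found) >= target:
        # enough definite positions already; pass them through
        return list(found)[:capacity_e]
    # arbitrary (harmless) positions: unflagged positions of the codeword
    pads = [i for i in range(n) if i not in found]
    return list(found) + pads[: target - len(found)]
```

Padding with positions that are not actually in error is harmless, because the erasures-errors decoder simply computes a zero error magnitude for them.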
Figures 13 and 14 show a second preferred embodiment of the error locating encoder 420 and the error locating decoder 460. As shown in Figures 13 and 14, the S, T and U transform generators 323, 324 and 325 can be replaced by encoding the S, T and U transforms into a read only memory (ROM) 423. Likewise, the inverse transform generators 361, 362 and 363 can be replaced by encoding the S-1, T-1 and U-1 transforms into a ROM 461.
The first and second preferred embodiments of the error locating encoders 320 and 420 and the error locating decoders 360 and 460 described above provide for increased transmission speeds and decreased computational loads while maintaining essentially equivalent bit error rates on the following basis. In the conventional multilevel block modulation (MBM) codes, the issues of decoding complexity and bit error rates are addressed with two concepts. First, multiple codes are implemented in a parallel and/or pipeline fashion so that the multiple codes can be decoded in about the same period of time. In other words, the first level code and the last level code will be decoded in about the same period of time. Also, each of the multiple codes is relatively fast to decode. Thus, the conventional MBM codes are fast to decode. Second, MBM codes have relatively good bit error rates due to the well known technique of set partitioning. For set partitioning to work, bit error likelihoods must be organizable such that the error likelihoods decrease in order from the most significant bit to the least significant bit. Then each level of the MBM code corresponds to a level of bit error. If a bit is more likely to be in error, the code corresponding to this bit has more error correction capability. Hence, the MBM code increases bit error rate performance by taking advantage of the likelihood information of bit errors. In general, MBM codes divide the decoding effort into several identical smaller processes of producing the error locations and computing the error magnitudes. This invention essentially divides the decoding effort into two separate processes, one process specialized for locating the errors, and the second process specialized in determining the error magnitudes. The preferred embodiments of this invention are set forth next. They use a null code, which has not heretofore been used, because a practical use for a code which conveys no data information was not previously known.
In the preferred embodiment of this invention, a null code is used as the first level code. A null code has only one codeword, which comprises all zeros. The advantage of a null code is that it can trivialize locating errors. The null code does not need to produce and factor the error locator polynomial to identify the error locations. That is, the null code itself has all the information necessary to produce the factors and the error locator polynomial. Thus, the two most expensive operations (in terms of computational load and also in terms of hardware circuit requirements), generating the error locator polynomial and factoring the error locator polynomial, are not needed when using the null code.
However, a null code by itself is not sufficient to provide the needed bit error rate performance. Accordingly, in the preferred embodiments of this invention an extra process, the "set remapping" process, must be added, which conditions the error bit likelihoods to provide sufficient bit error rate performance.
To use the null code's ease of decoding, the error locations identified by the null code will have to be used by the other codes. Thus, to avoid the two expensive operations, generating the error locator polynomial and factoring the error locator polynomial, it is necessary to use the other codes as erasure-only codes. In the preferred embodiment, RS and BCH errors/erasures codes are used, because their encoding is identical to that of the corresponding errors-only codes. However, decoding these types of codes for erasures only is different from decoding for errors only or for erasures-errors decoding.
Erasures-only decoding is the simplest type of decoding, as all errors are anticipated, such that no processing is needed for locating any further errors. Thus, to complete the decoding procedure, only the computation of the error magnitudes is necessary. Importantly, most of the information for computing the error magnitudes can be obtained in a relatively short time. Thus, compared to the conventional RS decoding, as shown in Figures 1-7, error location is obtained for free in terms of processing computation loads and transmission speeds.
However, for this scheme to be successful, the first level null code must be a sufficiently accurate indicator that a symbol has one of its bits in error. That is, the first level null code is an error locator code. If this first level code cannot reliably locate the errors, the other codes cannot be used to correct the errors. The reliability of the first level code's error location capability depends on the error characteristics of the channel. Thus, the error locating encoders and decoders are also dependent on the different encoding levels representing different bit reliabilities. However, instead of the bit reliabilities needing to be ordered from the most significant bit to the least significant bit as in the known MBM encoders/decoders, the error locating encoder/decoder of this invention requires only that the first level be highly likely to be in error, and is indifferent to the likelihood that any of the other levels are in error. Thus, the only information needed about the bit error distributions for the error locating decoder is the conditional probability that the first level indicates an error given that the error symbol is non-zero, since if the error symbol is zero, there is no error. In addition, while the Euclidean distance information for the channel symbols is not needed for the error locating encoder/decoder, a more sophisticated code beyond the null code can be used if the Euclidean distance information is available.
Since no known channel assignments yield symbols having a high enough error likelihood for the first level bit, the "set remapping" technique is used to recondition the symbols such that the bit error likelihood for the first level is high and the bit error likelihood of the rest of the levels is irrelevant. Set remapping is an active process, done during encoding/decoding. The extra computational and hardware overhead for implementing set remapping is quite small.
In addition, set remapping techniques can be applied to other types of channels. For instance, MBM codes cannot be used on binary symmetric channels (BSCs) in which the bit error rates of the levels are statistically independent of each other. With set remapping, however, a process exists by which the error bit likelihoods of the BSC can be transformed such that the likelihood for the first level is high.
It is important to note that the above discussion assumed an ideal code, such that the set remapping process was able to yield ideal bit error patterns. That is, the above discussion assumes that nearly all errors will be detected by the level 1 null code. Non-ideal bit error patterns mean that the null code will not be able to detect all of the errors. Thus, for non-ideal bit error patterns, extra error location processing is needed. In the first and second preferred embodiments of the error locating decoders set forth in Figures 12 and 14, the extra error location processing is accomplished by the second level RS code. If the null code is able to provide ideal bit error positions, the second level code is used only for error magnitude calculations. In the first and second embodiments, the second level code can also be used to locate extra errors. However, the error locating requirements of the second level code have been found to be modest. Thus, only a few extra (preferably, less than four) errors must be located by the second level code.
Since the second level code may need to locate only three or fewer additional errors in order to provide sufficient bit error rate performance, any one of the many variations of erasures bounded-single error correcting RS or BCH codes can be used. As indicated above, these codes are very fast because only a few errors are to be located.
Set remapping is an algebraic process which utilizes linear maps (vector homomorphisms). The set remapping technique treats the data channel as a binary channel such that the bit error patterns produced by the data channel are not necessarily independent. Thus, the set remapping technique is appropriate for any binary data channel whose bit errors may or may not be independent. The set remapping technique depends only on the fact that not all error patterns are equally likely.
For example, consider a concatenated coding scheme, i.e., a code which comprises two codes used in tandem. The advantage of concatenated codes is that they have good bit error rates for low decoding complexities. One common known concatenated code uses an RS outer code with a trellis code utilizing the Viterbi algorithm as the inner code. Such a code configuration is used for example in NASA satellite communication systems. In this case, the null code with set remapping would replace the RS outer code. In addition, the binary data channel can be treated as including the inner trellis code as part of the data channel. Trellis codes which are used in this manner are designed to produce patterned error symbols. Thus, the set remapping technique identifies the most common error patterns and ignores the least common error patterns.
Set remapping may thus be viewed as reshaping the error patterns. In the first and second preferred embodiments shown in Figures 11-14, the original codeword C is unaltered after having been mapped and unmapped by the set remapping transforms. The set remapping technique consists of applying linear maps to generate the transmitted codewords and to process the received codewords. In general, there are six linear maps, with three used for encoding and three used for decoding. The maps S and S-1 map the MBM code symbols to or from, respectively, the finite field symbols. The maps T and T-1 remap the finite field symbols to or from, respectively, themselves. The maps U and U-1, which are not necessarily linear, map the finite field symbols to or from, respectively, another set of binary vector symbols. In addition, a single map of S, T and U or S-1, T-1, and U-1 can be implemented together on a single ROM.
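As an illustration of one such map pair, the map T and its inverse T-1 can be sketched over GF(16). The field, the primitive polynomial x^4 + x + 1, and the choice α = 2 are illustrative assumptions, not fixed by the text:

```python
def gmul(a, b):
    # carryless multiplication in GF(16), reducing by x^4 + x + 1 (0x13)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13
    return r

ALPHA = 2        # assumed primitive element alpha
ALPHA_INV = 9    # alpha^-1 in this field: gmul(2, 9) == 1

def t_map(sym):
    # T: remap a finite field symbol by multiplying it by alpha
    return gmul(sym, ALPHA)

def t_inv(sym):
    # T^-1: undo the remapping by multiplying by alpha^-1
    return gmul(sym, ALPHA_INV)
```

Because multiplication by a fixed non-zero field element is a 1-to-1, onto linear map, T-1 exactly undoes T for every symbol, as the text requires of each map pair.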
The following is one preferred form of the set remapping technique. It should be appreciated that the set remapping technique can take an unlimited number of forms. Assume there are k bits in a symbol instead of m. The binary vector space is partitioned into two subsets V and O, where V is the set of even Hamming weight vectors and O is the set of odd Hamming weight vectors. Assuming that good estimates of the frequency of error for each type of binary vector symbol error are available, the number of times each error symbol occurs in X observances of errors is counted. The frequency for an error symbol is the number of times it occurs divided by X. Then, the sets V and O are each divided into two equally sized sets, where Vi contains the least frequent even Hamming weight error vectors and V0 contains the most frequent even Hamming weight error vectors. In contrast, Oi contains the most frequent odd Hamming weight error vectors and O0 contains the least frequent odd Hamming weight error vectors. Thus, any symbol in V0 has a higher occurrence of error than any symbol in Vi, and any symbol in Oi has a higher occurrence of error than any symbol in O0. Let the finite field for the RS code be GF(2^k) for some k>0, so that V has dimension k. Let K be the kernel of the binary trace, τ, that maps GF(2^m) onto GF(2), which is a vector homomorphism, assuming that the null code has binary symbols. Since τ is the binary trace, m=k+1. Then, the linear operator T is defined by multiplying field symbols by α, where α is a member of GF(2^m) with α≠0 and α≠1, and T-1 is defined by multiplying field symbols by α-1. It should be noted that T is a 1-to-1, onto vector homomorphism. Next, the composite map C is formed, where C equals τ∘T. Since τ and T are vector homomorphisms, C is a homomorphism. Then:
GF(2^m) = Kc ∪ Kc'

where Kc is the kernel of C and Kc' is the complement of Kc. The map U is:

U(Kc) = Vi ∪ O0 and U(Kc') = V0 ∪ Oi

and the zero element is mapped to the zero binary vector. Since all assignments are 1-to-1 and onto, U-1 also exists.
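The structure relied on above, that the binary trace τ maps GF(2^m) onto GF(2) with a kernel of dimension m-1=k whose size matches the partitioned Hamming weight sets, can be checked numerically with a small sketch over GF(8). The field size m=3 and the primitive polynomial x^3 + x + 1 are illustrative assumptions:

```python
# GF(8) with primitive polynomial x^3 + x + 1 (0xB)
def gf8_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x8:
            a ^= 0xB
    return r

def gf8_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, a)
    return r

def trace(x):
    # binary trace GF(2^3) -> GF(2): tau(x) = x + x^2 + x^4
    return x ^ gf8_pow(x, 2) ^ gf8_pow(x, 4)

# the kernel K of tau is a subspace of dimension m-1 = k = 2 (4 elements)
kernel = [v for v in range(8) if trace(v) == 0]

# partition of the 3-bit channel vectors by Hamming weight parity
V = [v for v in range(8) if bin(v).count("1") % 2 == 0]  # even weight
O = [v for v in range(8) if bin(v).count("1") % 2 == 1]  # odd weight
```

The kernel and its complement each contain 2^k elements, exactly the sizes of the two halves of the V/O partition, which is what allows the 1-to-1, onto assignment made by the map U.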
For the first and second embodiments shown in Figures 13-14, there are k bits in the second level code and j bits are used for the null code. Since the subvector defined by the second level forms a subspace embedded in the vector space of (k+j) bits, the basis vectors describing the subspace will be mapped onto a basis of M, which is the kernel of a vector homomorphism with dimension k, so that m=k+j. It is important to note that the basis describing the vector code symbol space can be the elementary basis (each basis vector's components are all zero except one component with the value 1). Hence, the rest of the map of code symbols to finite field symbols is determined by the bases corresponding to the null code, thus defining S. Since all of these assignments form a 1-to-1, onto mapping, S-1 therefore exists.
While the foregoing is a description of one possible form for the set remapping transforms to take, it should be understood that there are an unlimited number of valid transforms which are usable. Thus, selection of the appropriate transform should be left to the discretion of the system designer.
Figures 15, 16 and 17 show symbolic representations of the transmitted code symbols and the received code symbols, the error locating encoders and decoders, and the second level encoders and erasures-errors decoders. In Figures 15, 16 and 17, the received code symbols have already been remapped using the set remapping technique described above.
Figure 15 shows a generalized form of the transmitted code symbol coefficients and the received code symbol coefficients. In Figure 15, the coefficients a1-am are generated by the error locating code encoder 121 while the coefficients b1-bn are generated by the RS encoder 122. The coefficients are set remapped by the ROM 423 and transmitted to the data channel 350. The data channel 350 includes the line encoder and the line decoder, and thus acts as a binary data channel where the error source 280 adds errors. The coefficients are inverse set remapped by the ROM 461 to generate the error correcting code coefficients a'1-a'm and the modified received codeword coefficients b'1-b'n. The error correcting code coefficients are input to the error locating decoder 162 to generate the error positions P. Simultaneously, the received modified codewords are input to the delay 161 and then into the RS erasures-errors decoder 163. The error positions P are also input from the error locating decoder 162 to the RS erasures-errors decoder 163 to finally decode the received modified codeword and generate the original information signal I. Additionally, the error-locating decoder 162 generates the information signal I'.
Figure 16 shows a one-bit null-code embodiment 560 of the multi-bit error-locating encoding/decoding system 120/160 of Figure 15. In Figure 16, since the null code is used, no error locating code encoder or decoder is necessary. Rather, the 1-bit null code is immediately appended to the bits of each codeword symbol, and used directly to indicate if the corresponding codeword symbol has an error.
Figure 17 shows a multi-bit null-code embodiment 660 of the multi-bit error-locating encoding/decoding system 120/160 of Figure 15. In Figure 17, while the multi-bit null-code is immediately appended to each codeword symbol, a logic circuit 662 is used to decode the received null-code.
In a third preferred embodiment of this invention, the null code consists of a single bit. The single bit indicates whether or not an error has occurred somewhere within the received modified codeword. Thus, if no error has occurred, no correction is necessary and the received modified codeword can be directly decoded by the RS erasures-errors decoder 562 to generate the information signal I. However, if the 1-bit null code is equal to 1, indicating that an error has occurred in the received modified codeword, the standard RS erasures-errors decoding process must be performed by the decoder 562.
In a fourth preferred embodiment of this invention, the null code contains m bits. The received null code is then input to a multiple input OR gate 662 to generate the error positions for the RS erasures-errors decoders 663.
Figures 18 and 19 show block diagrams for an erasures-errors decoder 460 and an erasures-only decoder 460' which incorporate the ROM storing the inverse map transform in place of the independent inverse transform generators shown in Figure 12. Figure 20 shows the generalized process for generating, transmitting and decoding the transmission signal using the multilevel encoding system of this invention. As shown in Figure 20, starting from step S30, the information signal I received from the data source is first converted, using known codes such as RS or BCH codes, to a second level encoded signal in step S32. In step S32, when using, for example an RS code as the second level code, the information signal I is broken up into a number of blocks of k message characters. Then each block of k message characters is converted to one information polynomial I(x). Then the check polynomial P(x) is added to the information polynomial I(x) to generate the codeword C.
Next, in step S34, the first level error correcting code is appended to the encoded codeword C. It should be appreciated that the first level error correcting code is appended independently to each one of the M symbols of the encoded codeword C. For example, in the preferred embodiments shown in Figures 15-17, the bits of the first level error correcting code (Figure 15), the 1-bit null code (Figure 16), or the multi-bit null code (Figure 17) are appended to the n bits of each of the M symbols of the encoded codeword C.
Then, in step S36, the transmission signal comprising the combined first level error correcting code and second level codeword C are line encoded. Next, the line encoded transmission signal is output, in step S38, to the data channel. As shown in Figures 15-17, the remapping subsystem, comprising the S transform generator 323, the T transform generator 324 and the U transform generator 325 can be modeled as part of the data channel 350. Likewise, the inverse remapping subsystem can also be modeled as part of the data channel 350. In addition, the remapping subsystem and the inverse remapping subsystem can be implemented in a ROM. Once all of the M symbols of each codeword C of the information signal I, including the appended first level error correcting code, are output to the data channel in step S38, the transmission signal flows through the data channel, where errors in the transmission signal occur.
Then, in step S40, the received signal output by the data channel is input to the decoding system, and the line decoder of the decoding system line decodes the received signal. Similarly to the encoding process, the decoding process inversely remaps the line encoded received signal on a symbol-by-symbol basis for each of the M symbols of each received signal R. Likewise, the inverse U transform generator 361, the inverse T transform generator 362 and the inverse S transform generator 363 can be embodied in a ROM and modeled as part of the data channel 350, as shown in Figures 15-17.
Once the received signal is inversely remapped and line decoded in step S40, the first level error correcting code of each of the M symbols of each received codeword R is separated, in step S42, from the corresponding symbol and checked to determine if the corresponding symbol contains an error. It should be appreciated that if none of the M symbols of the received codeword R contains any errors, the error correcting processing of the received codeword R can be skipped and the received codeword R directly decoded.
However, if even one of the M symbols of the received codeword R is determined to have an error in step S42, control flows to step S44, where the error magnitude for each of the error containing symbols is determined. Once the error magnitude for each such symbol is determined, the received codeword R can be corrected by subtracting out the determined error magnitude for each error containing symbol.
Then, in step S46, the correct (if the received codeword R had no errors) or corrected received codeword is converted from the second level encoded signal back into a portion of the information signal. It should be appreciated that each inversely converted received codeword R provides k message characters of the information signal. Once all of the received codewords R are checked, corrected, and converted from the second level encoded signal to form the information signal, the information signal I is output to the data sink. Control then flows to step S48 where the process stops.
Figure 21 shows the decoding process in greater detail. The decoding process shown in Figure 21 is an erasures-only process. Starting from step S50, the received codeword signal R is input from the data channel and inversely remapped by passing the received codeword signal R through a ROM. The received codeword R thus contains M symbols each having a transmitted null code appended to it.
Then, in step S54, the null codes for each of the M symbols of the received codeword R are checked to determine if any of the null codes are non-zero.
If, in step S56, all of the null codes for the M symbols are zero, control flows from step S56 to step S58, where the received codeword R is decoded and the k message characters of the information signal I are output.
However, if even one of the null codes for the M symbols of the codeword R is non-zero, control flows from step S56 to step S60. In step S60, the 2t syndromes sM are generated from the M symbols of the received codeword R.
Then, in step S62, the error locator polynomial L is generated from the non-zero null codes for the M symbols of the received codeword R and the M symbols. In addition, in step S68, the error positions P are noted from the non-zero null codes of the M symbols of the received codeword R.
The 2t syndromes generated in step S60 and the error locator polynomial L generated in step S62 are then used to generate the error evaluator polynomial H in step S64. Once the error evaluator polynomial H is generated in step S64, it is evaluated in step S70 for each error position P noted in step S68. In addition, in step S72, the first derivative L' of the error locator polynomial L is generated for each of the error positions P noted in step S68.
Then, in step S74, the error magnitudes J for the error containing symbols of the received codeword R are generated from the evaluated error polynomial values H(P) from step S70, the error positions P noted in step S68, and the first derivatives L'(P) generated in step S72. The error magnitudes J generated in step S74 are then used to correct the received codeword R in step S76. Once the received codeword R is corrected, it is decoded and the k message symbols encoded by the received codeword R are output. This process is then repeated for each received codeword R, and the blocks of k message symbols are combined to form the information signal I. Once the full information signal I has been transmitted, it is output in step S76 to the data sink, and control then continues to step S78, where the process stops.
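The erasures-only flow of steps S50 through S78 can be sketched in Python over a toy field. GF(16), code length n=15, and 2t=4 check symbols are illustrative assumptions not fixed by the text, and the helper names are hypothetical:

```python
# Toy GF(16) arithmetic (primitive polynomial x^4 + x + 1)
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def ginv(a):
    return EXP[15 - LOG[a]]

def poly_eval(p, at):            # p[i] is the coefficient of x^i
    y = 0
    for c in reversed(p):
        y = gmul(y, at) ^ c
    return y

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] ^= gmul(ca, cb)
    return out

TWO_T = 4

def erasures_only_decode(r, positions):
    # Step S60: 2t syndromes s_j = R(alpha^j), j = 1..2t
    synd = [poly_eval(r, EXP[j]) for j in range(1, TWO_T + 1)]
    if all(s == 0 for s in synd):
        return r                 # step S58: no errors, decode directly
    # Step S62: error locator L(x) = prod(1 + X_i x) built directly
    # from the positions P flagged by the null code (no factoring)
    loc = [1]
    for p in positions:
        loc = poly_mul(loc, [1, EXP[p]])
    # Step S64: error evaluator H(x) = S(x) L(x) mod x^{2t}
    evl = poly_mul(synd, loc)[:TWO_T]
    # Step S72: formal derivative L'(x) (char 2: odd-degree terms only)
    loc_d = [loc[i] if i % 2 == 1 else 0 for i in range(1, len(loc))]
    # Steps S70, S74, S76: Forney magnitudes e_i = H(X_i^-1)/L'(X_i^-1)
    out = list(r)
    for p in positions:
        x_inv = ginv(EXP[p])
        mag = gmul(poly_eval(evl, x_inv), ginv(poly_eval(loc_d, x_inv)))
        out[p] ^= mag
    return out
```

Because the all-zeros vector is itself a valid RS codeword, the sketch can be exercised by injecting known error magnitudes at known positions into a zero codeword and verifying that decoding restores it.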
Figure 22 shows the errors-erasures correcting process. As described above, additional correction beyond the erasures indicated by the first level error correcting code can be obtained by using the high speed single-, double-, and triple-error-correcting decoders available for RS codes, when a high probability can be assigned to the event that the number of errors remaining unidentified in a received codeword is three or less.
As shown in Figure 22, most of steps S80-S110 correspond to steps S50-S78 in Figure 21, except for the addition of step S98. It should also be appreciated that steps S70-S74 of Figure 21 have been replaced by steps S102-S106 of Figure 22, which operate on a larger set of error locations than steps S70-S74.
As shown in Figure 22, in step S98, the errors-erasures decoder is used to produce the augmented error locator polynomial L and the augmented error evaluator polynomial H. The augmented error locator polynomial L locates additional errors beyond those located by the null code, and the augmented error evaluator polynomial H covers not only the error positions noted in step S96 from the null code, but also the o-t additional errors located by the errors-erasures decoder.
It should be appreciated that, in these erasures-errors decoders, while 2t erasures can be corrected, only t errors can be corrected. Thus, if t erasures have been located and t is even, only t/2 additional errors can be corrected. To determine how many additional errors can be corrected for the current codeword, the fast Euclid's Algorithm calculation circuit includes an erasures counter, which counts down from 2t once for each erasure that is located. Then, if the counter has not reached zero, additional errors are corrected, with each additional corrected error causing the counter to count down twice. The additional error correction continues until the counter has counted below 2; that is, if only 1 count remains on the counter, there is not enough error correction capability left to correct one more error. Thus, the number of additional errors that can be corrected depends on the number of located erasures, and ranges from 0 to t additional errors.
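The countdown bookkeeping described above can be modeled in a few lines of Python (a behavioral sketch of the counter logic only, not the patent's circuit):

```python
def additional_error_budget(num_erasures, t):
    """Model of the erasures counter: start at 2t, count down once per
    located erasure, then grant additional corrected errors at a cost of
    two counts each until fewer than 2 counts remain."""
    counter = 2 * t - num_erasures   # one count per located erasure
    extra = 0
    while counter >= 2:              # each extra error costs two counts
        counter -= 2
        extra += 1
    return extra
```

With t erasures located and t even, this yields t/2 additional errors, matching the text; with no erasures located it yields the full t, and with 2t erasures located it yields zero.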
Then, in step S100, the augmented error locator polynomial L is factored to produce the o-t additional error positions, which are added to the erasure positions P to form the full set of error positions P. Next, in step S102, the first derivative L' of the augmented error locator polynomial L is evaluated for the full set of error positions P. Then, in step S104, the augmented error evaluator polynomial H is evaluated for the full set of error positions P.
Next, the error magnitudes J are determined in step S106 from the full set of error positions P determined by the errors-erasures decoder, the first derivatives of the augmented error locator polynomial L'(P), and the values of the evaluated augmented error evaluator polynomial H(P). As in Figure 21, once the error magnitudes J are determined in step S106, they are used to correct the received codeword R in step S108. Once the received codeword R is corrected, it is decoded to generate the k message characters encoded by the received codeword R. Again, as in step S76 of Figure 21, in step S108 of Figure 22, once all of the received codewords R have been received, corrected and decoded, the blocks of k message symbols are combined to form the information signal I, which is then output to the data sink. Then, control continues to step S110, where the process stops.
In the preferred embodiments, the encoders and decoders shown in Figs. 9-19, and especially Figs. 13 and 14, are formed on a semiconductor chip using VLSI circuit integrating techniques, as shown in Figures 23 and 24.
As shown in Figure 23, the multibit null code encoding/decoding system 700 comprises a multibit null code encoding system 710 and a multibit null code decoding system 750. In Figure 23, the null code decoding system 750 is an erasures-errors decoder.
The null code encoder 710 comprises a conventional RS encoder 720 and a ROM 730. The RS encoder 720 encodes blocks of message characters into 2k codeword symbols of k bits each and outputs the codeword symbols of a block one at a time to the ROM 730. The ROM 730 inputs the k codeword bits of the next symbol and the j null code bits and remaps them. The ROM 730 then outputs the remapped codeword symbol to the data channel 740. The data channel 740 includes the line encoders and line decoders, and thus acts as a binary data channel.
The binary data channel 740 outputs the transmitted codeword symbol to the ROM 760 of the null code decoder 750. The ROM 760 inversely remaps the received codeword symbol to separate the null code portion from the RS codeword symbol portion. The j bits of the null code are input to a j-bit OR gate 770. The k RS codeword bits of the next symbol output from the ROM 760 and the single bit from the OR gate 770 are then input to the single level RS erasures-error decoder 190', which is generally similar to the conventional decoder 190 of Figure 6B. The output of the OR gate 770 is input to the αk generation circuit 1981, while the k bits from the ROM are input to the syndrome computation circuit 191 and the delay circuit 192. In this case, the decoder 190 implements the fast error correcting systems for correcting only a few additional errors.
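The bit-level split performed by the ROM 760 and the j-bit OR gate 770 can be sketched as follows. The packing order of the j null code bits and k data bits, and the use of an identity remapping, are assumptions for illustration; the actual ROM contents are device-specific:

```python
K, J = 5, 3  # hypothetical symbol widths: k data bits, j null code bits

def remap(data_bits, null_bits):
    """Encoder-side ROM 730 (modeled as simple packing): combine k codeword
    bits with j null code bits into one (k + j)-bit channel symbol."""
    return (null_bits << K) | data_bits

def inverse_remap(symbol):
    """Decoder-side ROM 760: separate the null code portion from the RS
    codeword portion, and derive the OR-gate 770 erasure flag."""
    null_bits = symbol >> K
    data_bits = symbol & ((1 << K) - 1)
    erasure_flag = int(null_bits != 0)   # j-bit OR gate 770
    return data_bits, erasure_flag

# An error-free symbol carries all-zero null bits, so the flag stays 0;
# a channel error that disturbs any null bit raises the flag.
sym = remap(0b10110, 0b000)
data, flag = inverse_remap(sym)                # flag == 0
_, hit = inverse_remap(sym ^ (1 << (K + 1)))   # corrupt a null bit: hit == 1
```

The single flag bit is what drives the erasure input of the single level RS erasures-error decoder 190'.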
In Figure 24, the null code decoder 750 is an erasures-only decoder. In this case, the modified Euclid's algorithm computation circuit 193 and the first polynomial expansion circuit 1982 are deleted from the null code decoder 750 of Figure 23. The output from the syndrome computation circuit 191 is connected only to the second delay circuit 194. In addition, both the second delay circuit 194 and the second polynomial expansion circuit 1983 are connected directly to the errata transform computation circuit 195. In this case, while the decoder determines the error magnitudes for the located error positions, no additional error locations are located or corrected.
In addition, the decoder 750 shown in Figure 24 may also include a resend request system. The resend request system generates a resend signal, which is sent back through the data channel 740 to the encoder 710 when more erasures have occurred than are correctable by the decoder. As described above, the decoder 750 can correct for 2t erasures. Thus, the OR gate 770 is connected to a counter 772. The counter counts each time the OR gate outputs a "1", which indicates an error containing symbol of the current codeword.
The counter 772 is connected to a comparator 774, which is also connected to a preset number 778, which is set to the maximum number of correctable errors 2t. If, during a current codeword, the counter exceeds the preset number 778, the comparator outputs an overflow signal to the resend request generator 776. The resend request generator 776 resets the counter 772, causes the current decoded codeword to be purged from memory, and sends a resend request signal back to the encoder, in order to have the current codeword resent from the beginning.
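The interplay of the counter 772, the comparator 774, and the resend request generator 776 can be modeled behaviorally (signal names are taken from the figure; the control logic shown is an assumption consistent with the text):

```python
TWO_T = 6  # preset number 778: maximum number of correctable erasures, 2t

def process_codeword(erasure_flags):
    """Count the OR-gate '1's for one codeword (counter 772); request a
    resend when the count exceeds the preset 2t (comparator 774 driving
    resend generator 776), otherwise proceed to decode. The caller resets
    the counter between codewords, as described in the text."""
    count = 0
    for flag in erasure_flags:
        count += flag              # counter 772 counts each '1'
        if count > TWO_T:          # comparator 774 against preset 778
            return "resend"        # generator 776: purge and re-request
    return "decode"
```

Up to 2t flagged symbols in a codeword remain decodable; the (2t + 1)th flag triggers the resend request.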
On the other hand, if 2t or fewer erasures are counted by the counter 772, the counter 772 is reset before the next codeword is received.
Figure 25 shows another embodiment of the multilevel code encoding/decoding system 800, which uses a novel code, the Multilevel Reed-Solomon (MRS) code. The MRS code uses a frequency domain, rather than time domain, encoding of the multiple levels. In the finite field frequency domain, vector homomorphic maps map the MRS code into the different level codes. As a simple example, if the MRS code is encoded with the binary trace in mind, then applying the binary trace to the MRS code results in a single bit null code. Hence, the binary trace acts as the S1 map in the embodiments described above, where information is extracted from the T-1 map. However, there are slight differences that must be appreciated.
Figure 25 depicts the use of the MRS code in place of the K-bit null code in the K-bit null code embodiment. A binary data stream inputs information bits into the MRS encoder 810. The outputs of the MRS encoder 810 are finite field codeword symbols of dimension K+J; hence, the first and second level codes are both encompassed by the MRS encoder 810. These codewords are mapped by the T map 820 and the U map 830, and then put through the combined line encoder, binary data channel, and line decoder 350. After the line decoder 350 are the U-1 inverse map 840 and then the T-1 inverse map 850. Note that the output of the T-1 map is a K+J bit finite field symbol. These symbols are split into two streams: one for the error locating code (a null code in this case), and one for the erasures bounded-errors MRS decoder 880, which is analogous to the erasures bounded-errors RS decoder in the K-bit null code embodiment of Fig. 17. The stream diverted for the error locating code is put through a homomorphic map 860, whose output is a K-bit error locating codeword symbol, which is then put through the K-bit OR gate of the error position indicator 870. Since the MRS code was designed to use a K-bit null code, the output of the homomorphic map 860 should always be zero except when an error has occurred in the K+J bit symbol.
The Multilevel Reed-Solomon (MRS) code is a frequency domain implementation of MBM codes, so that conventional MBM codes can be considered as time domain MBM codes. Frequency domain encoding/decoding uses the Galois field transform equivalence. Before beginning the description of MRS codes, the additive linear map M is defined. The map M is based upon vector homomorphisms that map elements of the Galois field GF(p^q) into itself; note that p is assumed to be a prime number. Since the space of vector homomorphisms is the same as the space of polynomial linear maps in GF(p^q), which will be denoted as L[GF(p^q)], M is based upon an m in L[GF(p^q)], where

    m(x) = \sum_{k=0}^{q-1} m_k x^{p^k}.

Then, for any polynomial f(x) = \sum_{i=0}^{n-1} f_i x^i in GF(p^q), the map M on f(x) is defined as

    M(f)(x) = \sum_{i=0}^{n-1} m(f_i) x^i,

where n = p^q - 1. Let \bar{c}(x) = M(c(x)), so that \bar{c}(x) is the mapped polynomial c(x).

To put a "code structure" on the mapped word \bar{c}(x), consecutive frequency components \bar{C}_i are set to zero. The ith frequency component of any given polynomial f will be denoted by F_i, where the frequency components are derived by the Galois field transform equivalence; thus, F_i = f(\alpha^i), where \alpha is a primitive element of GF(p^q). A well known property of finite fields is that (\beta + \gamma)^p = \beta^p + \gamma^p for all \beta, \gamma \in GF(p^q), so the map M is linear with respect to polynomials in GF(p^q). Due to the linearity of the map M, the syndromes for \bar{c} depend only on the error values; therefore, the error polynomial \bar{e}(x) can be determined. To put a code structure on the mapped word, constrain \bar{C}_i = 0 by the following:

    \bar{C}_i = \bar{c}(\alpha^i) = \sum_{j=0}^{n-1} \sum_{k=0}^{q-1} m_k c_j^{p^k} \alpha^{ij} = 0.    (13)

Since \beta^{p^q} = \beta for any \beta in GF(p^q), [\beta^{p^k}]^{p^{q-k}} = \beta. Using this, equation (13) can be written as

    \bar{C}_i = \sum_{k=0}^{q-1} m_k \Big( \sum_{j=0}^{n-1} c_j \alpha^{i j p^{q-k}} \Big)^{p^k}.    (14)

Now, observe that the ith syndrome (or DFT component) of the mapped word is

    \bar{C}_i = \sum_{k=0}^{q-1} m_k \big( C_{i p^{q-k} \bmod n} \big)^{p^k}.    (15)

Note that equation (15) can also be constrained to zero for consecutive components i, which would require satisfying a system of equations. Hence, MRS codes are designed by choosing several vector homomorphisms and, for each homomorphism, choosing a set of consecutive frequency components that constrain this equation to zero. Thus, a code exists for each level of mapping, as defined by the chosen vector homomorphisms.
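The relationship between the spectrum of a word and the spectrum of its mapped version stated in equation (15) can be checked numerically. The sketch below is an illustration, not part of the patent: it works in GF(8) (p = 2, q = 3, n = 7), takes m to be the binary trace (all m_k = 1), and verifies that the DFT of the trace-mapped word equals the stated combination of the original DFT components:

```python
# GF(8) with primitive polynomial x^3 + x + 1; alpha = 2, n = 7, q = 3.
EXP = [1] * 7
for i in range(1, 7):
    v = EXP[i - 1] << 1
    EXP[i] = v ^ 0b1011 if v & 0b1000 else v
LOG = {EXP[i]: i for i in range(7)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def gf_pow(a, e):
    return 0 if a == 0 else EXP[(LOG[a] * e) % 7]

def dft(word):  # Galois field transform: F_i = f(alpha^i), n = 7
    return [
        __import__("functools").reduce(
            lambda acc, jc: acc ^ mul(jc[1], gf_pow(2, (i * jc[0]) % 7)),
            enumerate(word), 0)
        for i in range(7)
    ]

def trace(b):  # binary trace: m(x) = x + x^2 + x^4 (all m_k = 1)
    return b ^ gf_pow(b, 2) ^ gf_pow(b, 4)

word = [3, 0, 6, 1, 5, 2, 4]           # arbitrary word over GF(8)
C = dft(word)                           # spectrum of the word
Cbar = dft([trace(c) for c in word])    # spectrum of the trace-mapped word

for i in range(7):
    # Equation (15) with m_k = 1: Cbar_i = sum_k (C_{i*p^(q-k) mod n})^(p^k)
    rhs = 0
    for k in range(3):
        rhs ^= gf_pow(C[(i * 2 ** (3 - k)) % 7], 2 ** k)
    assert Cbar[i] == rhs
```

The trace values land in {0, 1}, so the mapped word is binary, which is what makes the trace map useful for extracting a one-bit per symbol first level code.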
It should also be appreciated that equation (15) is a further relaxation of the frequency component constraints. For RS codes, the constrained frequency components must simply equal zero; for MRS codes, the frequency components must instead satisfy a more flexible system of equations. Also, the finite field transform is defined not only on blocklengths of n = p^q - 1, but also on divisors of the blocklength n. Hence, MRS codes are a further generalization of BCH and RS codes.
Further, if m_k = 1 for k = 0, ..., q-1 (with p = 2), then m is the binary trace, and equation (15) becomes

    \bar{C}_i = \sum_{k=0}^{q-1} \big( C_{i p^{q-k} \bmod n} \big)^{p^k},    (16)

and, since (\bar{C}_i)^p = \bar{C}_{ip \bmod n}, if \bar{C}_i is set to 0, then 0 = (\bar{C}_i)^{p^j} = \bar{C}_{i p^j \bmod n} for j = 0, ..., q-1. If this is done for a number of consecutive components i, then a code is created for the mapped code.
When \bar{C}_i = 0 for i = 0, ..., n-1 for the map in equation (16), the mapped code is the one bit null code. For a simple illustration, a hardware implementation of the encoder for the one bit first level null code is shown in Figure 26. Note that this MRS encoder implementation is done in the field GF(8). The constraints in this equation are satisfied by constraining i_1 = 0 or 1 and by the squares and sums-of-squares operators. The other information symbols i_j for j ≠ 1 have values in GF(8). To add the final erasures-errors code, the encoder uses the identity map vector homomorphism, which corresponds to the conventional RS code. This can be seen by observing that, under the identity map, \bar{C}_i = C_i. Hence, to add the RS code constraints, frequency components are set to zero. For example, C_0 could be set to zero by further constraining i_1 = 0. Thus, with the constraint C_0 = 0, the resulting encoded codeword is a two level MBM code with a null code for the first level and a single erasure correcting code in the second level.
To decode this simple code, first the binary trace is applied to the received word. Any nonzero result in the mapped received word is an error indication for that position, which is corrected by the second level erasures RS code. However, if more than one position is in error, then this block of symbols cannot be reliably decoded.
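This trace-based first step can be sketched directly. The GF(8) representation below is an illustration (the second level erasure correction is omitted), and, as the text notes, the scheme relies on errors moving a symbol off the trace-zero set; an error pattern that happens to stay inside that set would go undetected by this first level:

```python
# One-bit null code decode in GF(8): apply the binary trace to each received
# symbol; a nonzero result flags that position as an erasure for the second
# level code. Primitive polynomial x^3 + x + 1.
EXP = [1] * 7
for i in range(1, 7):
    v = EXP[i - 1] << 1
    EXP[i] = v ^ 0b1011 if v & 0b1000 else v
LOG = {EXP[i]: i for i in range(7)}

def gf_pow(a, e):
    return 0 if a == 0 else EXP[(LOG[a] * e) % 7]

def trace(b):  # binary trace in GF(8): b + b^2 + b^4
    return b ^ gf_pow(b, 2) ^ gf_pow(b, 4)

# A valid first level codeword has trace 0 in every position; the trace-0
# subset of GF(8) is {0, 2, 4, 6} in this field representation.
codeword = [0, 2, 4, 6, 2, 0, 4]
received = list(codeword)
received[3] ^= 1            # a channel error that lands on the trace-1 coset

flags = [i for i, r in enumerate(received) if trace(r) != 0]
# flags == [3]: position 3 is handed to the second level code as an erasure
```

A single flagged position is correctable by the single erasure correcting second level code; two or more flags exceed this simple code's capability, matching the text.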
Finally, it should be appreciated that the second level erasures-errors correcting code can also be the conventional multilevel code shown in Figs. 2 and 3. That is, multilevel erasures-errors decoders can be used in place of the second level erasures-errors decoder.
While the present invention has been described with reference to specific embodiments, it is not confined to the specific details set forth above, but is intended to cover such modifications or changes that may come within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A multi-level error correcting data transmission system, comprising:
a multi-level error correcting data encoder converting an information signal to a transmission signal;
a data channel inputting the transmission signal and converting the transmission signal to a received signal based on zero, one or more transmission errors; and
a multi-level error correcting data decoder inputting the received signal from the data channel and converting the received signal substantially into the information signal;
wherein the multi-level error correcting data encoder comprises:
an error locating encoder, and
an error correcting encoder; and
the multi-level error correcting data decoder comprises:
an error locating subsystem, and
an erasures magnitude subsystem.
2. The multi-level error correcting data transmission system of claim 1, wherein the error locating encoder comprises a null-code generator.
3. The multi-level error correcting data transmission system of claim 2, wherein the null code generator generates at least a one-bit null code.
4. The multi-level error correcting data transmission system of claim 1, wherein the multi-level error correcting data decoder further comprises an erasures-error correcting subsystem.
5. The multi-level error correcting data transmission system of claim 4, wherein the erasures-error correcting subsystem comprises an erasures bounded-single error correcting decoder.
6. The multi-level error correcting data transmission system of claim 1, wherein the multi-level error correcting data encoder further comprises a mapping subsystem mapping the transmission signal before it is output to the data channel, and the multi-level error correcting data decoder further comprises an inverse mapping subsystem inversely mapping the received signal before it is input to the erasures locating decoder and the erasures magnitude decoder.
7. The multi-level error correcting data transmission system of claim 6, wherein the mapping subsystem comprises a first ROM and the inverse mapping subsystem comprises a second ROM.
8. A multi-level error decoder, comprising:
an input terminal receiving a received information signal containing at least one error, the received information signal comprising an error locating code portion and an information portion;
an error locating decoder decoding the error locating code portion and generating an error location signal; and
an error magnitude decoder generating an error magnitude from the information portion and the error location signal and outputting a corrected information signal.
9. The multi-level error decoder of claim 8, wherein the error locating code portion of the received information signal is a null code portion having at least one bit.
10. The multi-level error decoder of claim 9, wherein the error locating decoder comprises a logic circuit logically combining the at least one bit of the null code portion and outputting the error location signal.
11. The multi-level error decoder of claim 8, further comprising an erasures-error correcting decoder.
12. The multi-level error decoder of claim 8, wherein, when the received information signal has been mapped, the multi-level error decoder further comprises an inverse mapping circuit.
13. The multi-level error decoder of claim 12, wherein the inverse mapping circuit is a ROM.
14. A method for decoding a multi-level encoded received information signal, comprising:
separating the received information signal into an error locating code portion and a received information portion;
decoding the error locating code portion to determine error location data;
generating error magnitude data from the error location data and the received information portion; and
decoding the received information portion based on the error location data and error magnitude data to substantially determine the transmitted information signal.
15. The decoding method of claim 14, wherein the error locating portion is a null code portion having at least one bit.
16. The decoding method of claim 15, wherein the step of decoding the null code portion comprises the step of logically combining the at least one bit of the null code portion.
17. The decoding method of claim 14, wherein, when the received signal has been mapped, the method further comprises the step of inversely mapping the received signal prior to the separating step.
18. The decoding method of claim 14, further comprising the steps of:
inputting an information signal;
encoding the information signal;
appending an error locating code to the encoded information signal to form a transmission signal;
transmitting the transmission signal on a data channel, the data channel converting the transmission signal to the received information signal by introducing zero, one or more errors; and outputting the received information signal from the data channel.
19. The decoding method of claim 18, further comprising the steps of:
mapping the transmission signal before transmitting it along the data channel; and
inversely mapping the received signal before separating it.
20. The decoding method of claim 18, further comprising the step of:
correcting the information signal using an erasures bounded-single error correcting code to improve an accuracy of the information signal.
PCT/US1995/007846 1994-06-21 1995-06-21 Method and system for encoding and decoding signals using a fast algebraic error correcting code WO1995035538A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU29056/95A AU2905695A (en) 1994-06-21 1995-06-21 Method and system for encoding and decoding signals using a fast algebraic error correcting code

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US26313894A 1994-06-21 1994-06-21
US08/263,138 1994-06-21

Publications (1)

Publication Number Publication Date
WO1995035538A1 true WO1995035538A1 (en) 1995-12-28

Family

ID=23000531

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1995/007846 WO1995035538A1 (en) 1994-06-21 1995-06-21 Method and system for encoding and decoding signals using a fast algebraic error correcting code

Country Status (2)

Country Link
AU (1) AU2905695A (en)
WO (1) WO1995035538A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4888779A (en) * 1988-03-18 1989-12-19 International Business Machines Corporation Matched spectral null trellis codes for partial response channels
US5095484A (en) * 1989-11-13 1992-03-10 International Business Machines Company Corporation Phase invariant rate 8/10 matched spectral null code for PRML
US5224106A (en) * 1990-05-09 1993-06-29 Digital Equipment Corporation Multi-level error correction system
US5280489A (en) * 1992-04-15 1994-01-18 International Business Machines Corporation Time-varying Viterbi detector for control of error event length
US5289501A (en) * 1991-11-26 1994-02-22 At&T Bell Laboratories Coded modulation with unequal error protection for fading channels

Also Published As

Publication number Publication date
AU2905695A (en) 1996-01-15

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LT LU LV MD MG MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TT UA UG UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE MW SD SZ UG AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA