US6708149B1 - Vector fixed-lag algorithm for decoding input symbols - Google Patents

Vector fixed-lag algorithm for decoding input symbols

Info

Publication number
US6708149B1
US6708149B1 (application US09/845,134)
Authority
US
United States
Prior art keywords
matrix
vector
symbol
product
lag
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US09/845,134
Inventor
William Turin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nuance Communications Inc
AT&T Properties LLC
Original Assignee
AT&T Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Corp filed Critical AT&T Corp
Priority to US09/845,134 priority Critical patent/US6708149B1/en
Assigned to AT&T CORP. reassignment AT&T CORP. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TURIN, WILLIAM
Application granted granted Critical
Publication of US6708149B1 publication Critical patent/US6708149B1/en
Assigned to AT&T INTELLECTUAL PROPERTY II, L.P. reassignment AT&T INTELLECTUAL PROPERTY II, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T PROPERTIES, LLC
Assigned to AT&T PROPERTIES, LLC reassignment AT&T PROPERTIES, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T CORP.
Assigned to NUANCE COMMUNICATIONS, INC. reassignment NUANCE COMMUNICATIONS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T INTELLECTUAL PROPERTY II, L.P.
Adjusted expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 Hidden Markov Models [HMMs]

Definitions

  • the present invention relates generally to a method and apparatus for decoding received symbols. More particularly, the present invention discloses a vector fixed-lag algorithm for determining the probabilities of transmitted symbols given received symbols.
  • Forward-backward algorithms (FBAs) are used to calculate probabilities in applications such as speech recognition, handwriting verification, and error-correction code decoding.
  • FBAs are a combination of forward algorithms and backward algorithms using vector-matrix products.
  • Equipment that performs the algorithms requires large amounts of memory for storing all the matrices and intermediate matrix products needed to support the algorithms.
  • FBAs can be used to calculate the probabilities associated with the functions of Hidden Markov Models (HMMs) in voice recognition to recognize discrete and continuous speech.
  • HMMs Hidden Markov Models
  • products of sequences of probability density matrices are used to estimate the a posteriori probabilities of transmitted symbols given the received symbols.
  • mathematical models are used to estimate the probabilities of the transmitted symbol knowing the received symbol.
  • the invention provides a method and apparatus that performs a fixed-lag computation process.
  • the present invention discloses an apparatus and method of decoding information received over a noisy communications channel to determine the intended transmitted information.
  • the present invention improves upon the traditional forward-backward algorithm with a vector fixed-lag algorithm.
  • the algorithm is implemented by multiplying an initial state vector with a matrix containing information about the communications channel.
  • the product is then recursively multiplied by the matrix τ times, using the new product with each recursive multiplication.
  • the new product forward information is stored in storage elements.
  • the final product is multiplied with a final state column vector yielding a probability of a possible input.
  • the estimated input is the input having the largest probability.
  • the invention may be applied to maximum a posteriori estimation of input symbols in systems modeled by an input-output HMM, such as symbols transmitted over noisy channels, handwriting and speech recognition, and other probabilistic systems.
  • the vector fixed-lag process of the invention replaces the conventional forward-backward algorithm. This eliminates the need of saving long sequences of the forward vectors. Accordingly, memory requirements and decoding delay are reduced when using the fixed-lag process to decode information transmitted over a communication channel.
  • the present invention discloses a fixed-lag method for determining the probability of a transmitted symbol at a time t, transmitted along a communications channel with bursts of errors, given a received symbol.
  • the method comprises obtaining an initial state information vector about the channel and obtaining channel information matrices describing the probabilities that the transmitted symbol would be transmitted along a communications channel with and without error.
  • the method further comprises generating intermediate probabilities, each intermediate probability being the product of the initial state information vector at a time previous to time t, and a channel information matrix, storing the intermediate probabilities in storage elements, and multiplying a last intermediate probability with a final state vector to yield the probability of the transmitted symbol.
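The steps just described can be sketched in a few lines of Python. This is an illustrative reduction of the method, not the patented apparatus; the helper names and the matrices used in the example are assumptions:

```python
# Sketch of the fixed-lag estimate: an initial state row vector is combined
# with a channel matrix, the product is carried forward over the lag, and a
# final state column vector collapses it to a probability.
# All matrix values and function names here are illustrative assumptions.

def vec_mat(v, M):
    """Multiply a row vector v by a matrix M from the right."""
    return [sum(v[i] * M[i][j] for i in range(len(v)))
            for j in range(len(M[0]))]

def fixed_lag_probability(alpha_0, W_t, lag_matrices, beta):
    """Estimate p_t ~ alpha_0 * W_t * M_{t+1} ... M_{t+tau} * beta."""
    v = vec_mat(alpha_0, W_t)        # forward portion for the symbol at time t
    for M in lag_matrices:           # recursive multiplication over the lag tau
        v = vec_mat(v, M)
    return sum(v[j] * beta[j] for j in range(len(beta)))
```

With beta chosen as a unity column vector, the final step reduces to summing the elements of the row vector, as the vector embodiment below notes.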
  • FIG. 1 illustrates information processing according to the invention over a wireless communication channel
  • FIG. 2 illustrates a decoder used to decode symbols transmitted according to FIG. 1;
  • FIG. 3 illustrates a decoder in another aspect
  • FIG. 4 illustrates matrix storage according to the invention
  • FIG. 5 illustrates matrix storage according to the invention in another aspect
  • FIG. 6 illustrates a fixed-lag decoding apparatus for three memory elements according to the invention
  • FIG. 7 illustrates a fixed-lag decoding apparatus according to another embodiment of the invention in which matrix inversion is used
  • FIG. 8 illustrates a flowchart according to the invention
  • FIG. 9 illustrates a decoder in accordance with another embodiment of the present invention.
  • FIG. 10 illustrates an encoder in which the present invention may be used.
  • FIG. 11 illustrates a table of data exemplifying the embodiments of FIGS. 9 and 10 .
  • the invention provides a method and apparatus to generate estimates for processing data symbols using algorithms and sequences of matrices.
  • the purpose of the algorithms is to determine the intended transmitted or input symbol from the received symbol, which has been corrupted with noise.
  • the matrices reflect the relationship between a system's state variables, input sequences and output sequences.
  • the matrices may describe an HMM of a communication system representing the following probabilities: Pr(X_t, Y_t, S_t | S_{t−1}).
  • the matrices describe the transition from state S_{t−1} (i.e., state S at a prior time t−1) to the next state S_t (i.e., state S at a later time t) and generate the next input symbol X_t and the next output symbol Y_t.
  • the communication system modeled above could be a wireless radio system, fiber optical system, wired system, or other suitable system.
  • Many other systems can be analyzed using state matrix information.
  • bioelectrical signals such as electrocardiograms, seismic measurements, handwriting recognition devices, speech recognition devices, control systems and others can be modeled as machines or processes whose next state depends upon the current state, plus input information or symbols. All these systems can be described in terms of communications systems.
  • in speech recognition, the output sequence is what is heard, while the input sequence is the intended meaning.
  • in handwriting recognition, the output is the sequence of scanned handwritten symbols, while the input is the intended sequence of letters that a decoder must recognize. Therefore, in the sequel we will use communication system terminology, but the results have a broader application.
  • the following general structure may be used to calculate the matrix product to determine the probability of the intended input.
  • α_0 is a row vector representing an initial condition
  • β_T is a column vector representing a terminal condition
  • M_i and W_i are square matrices.
  • the matrices M_i can have different meanings.
  • the matrices M_i and W_i could be of a dimension other than square, as long as the dimensions of the row and column vectors correspond appropriately to permit proper matrix multiplication.
  • the evaluation of the parameter p t according to Equation (1) above is conventionally done by the forward-backward algorithm (FBA).
  • T represents some total time period which is usually equal to the number of observed output symbols.
  • the present invention avoids the necessity of storing the complete symbol sequence and reduces processing time compared to conventional technology.
  • factors of Equation (1) far from time t may be ignored with little penalty in accuracy.
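For contrast with the fixed-lag approach, the conventional FBA evaluation of Equation (1) can be sketched as follows. The helper names and the test matrices are illustrative assumptions; note that every forward vector is stored, which is exactly the memory cost the invention avoids:

```python
# Brute-force forward-backward evaluation of
#   p_t = alpha_0 M_1 ... M_{t-1} W_t M_{t+1} ... M_T beta_T  for every t.
# Illustrative sketch only; names and matrices are assumptions.

def vec_mat(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def mat_vec(M, v):
    """Matrix times column vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def fba_estimates(alpha_0, Ms, Ws, beta_T):
    """Ms[k] plays the role of M_{k+1} and Ws[k] of W_{k+1}."""
    T = len(Ms)
    alphas = [alpha_0]                 # alphas[t] = alpha_0 M_1 ... M_t (all stored)
    for M in Ms:
        alphas.append(vec_mat(alphas[-1], M))
    betas = [beta_T]                   # built from the right end of the product
    for M in reversed(Ms):
        betas.append(mat_vec(M, betas[-1]))
    betas.reverse()                    # betas[t] = M_{t+1} ... M_T beta_T
    return [sum(a * b for a, b in zip(vec_mat(alphas[t - 1], Ws[t - 1]), betas[t]))
            for t in range(1, T + 1)]
```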
  • FIG. 1 shows an exemplary communications system 10 as a typical application of the estimate process according to the invention.
  • an information source 100 outputs information signals to an encoder/transmitter 110 , such as a base station in a wireless cellular communications system.
  • Encoder/transmitter 110 transmits an encoded signal from antenna 120 over a communication channel 130 , which may, for instance, be the radio frequency channels according to Personal Communications Service (PCS) or other forms of communication channels.
  • the transmitted symbols 140 are received at a receiving unit 150 , which may be a mobile cellular telephone, over an antenna 180 .
  • the receiving unit 150 receives the transmitted symbols 140 and processes them in a decoder 160 to provide decoded output symbols to an input/output unit 170 .
  • the input/output unit 170 may, for instance, output voice sounds in a cellular telephone.
  • Real communication channels are characterized by the bursty nature of errors that can be modeled quite accurately by HMMs as known in the art. Therefore, communications system 10 may be modeled by an HMM, and the transmitted symbols 140 may be decoded by known methods such as maximum a posteriori (MAP) symbol estimation as briefly discussed herein.
  • MAP maximum a posteriori
  • This equation can be evaluated by the forward-backward algorithm.
  • If we need to calculate only one or two of the products in Equation (c), we can apply a forward algorithm, but if we need to calculate p(X_t, Y_1^T) for many values of t, we use the forward-backward algorithm.
  • If we use the normalized vectors ᾱ(Y_1^{t−1}) instead of α(Y_1^{t−1}) and β̄(Y_{t+1}^T) instead of β(Y_{t+1}^T) in Equation (e), we obtain:
  • c_i and d_i can be any numbers. However, it is convenient to choose
  • the normalized vectors can be obtained recursively using Equation (d) and normalizing the result after each recursive step:
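The recursive normalization just described can be sketched as follows. Choosing each constant as the reciprocal of the vector's element sum is one convenient choice, not the only one; the helper names are assumptions:

```python
import math

def vec_mat(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def normalized_forward(alpha_0, Ms):
    """Recursive forward update with per-step normalization.  Scaling keeps
    the vector within floating-point range; the accumulated log of the
    normalization constants recovers the true magnitude if it is needed."""
    v = list(alpha_0)
    log_scale = 0.0
    for M in Ms:
        v = vec_mat(v, M)
        c = sum(v)                    # normalization constant for this step
        v = [x / c for x in v]
        log_scale += math.log(c)
    return v, log_scale
```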
  • One of the approaches is based on the fact that many processes have a "fading" memory: the dependency between process samples is a decreasing function of their time separation. In this case
  • a FBA process may be applied that evaluates a probability at time t, P(X_t | Y_1^T), for the transmitted symbol X_t and for the actually received symbols Y_1^T = Y_1, Y_2, …, Y_T.
  • P(X_t | Y_1^T) is proportional to
  • α_0 is the row vector of the Markov state initial probabilities
  • M_i = P(Y_i) represents the matrix probability of receiving symbol Y_i.
  • W_t = P(X_t, Y_t) is the matrix probability of transmitting X_t and receiving Y_t.
  • T is the total time period of the complete received symbol sequence.
  • FIG. 2 shows a flow diagram of a general process for generating the estimate p t .
  • letter “R” on signal lines indicates that the corresponding matrix multiplies the matrix on the other line from the right. It is important to show, because matrix products are not commutative.
  • M_{t+τ+1} is input on signal line 202 and then passes through a delay chain of storage elements: M_{t+τ} stored in storage element 204, M_{t+τ−1} stored in storage element 206, …, M_{t+1} stored in storage element 208, and M_t stored in storage element 210.
  • α_{t−1} stored in storage element 226 is then right-multiplied by M_t (from storage element 210), and the result is output over signal line 236 to update α_{t−1} to α_t.
  • α_t is output over signal line 240 for right multiplication by W_t by multiplier 214.
  • the result of multiplier 214 is output over signal line 242 to multiplier 216 as the forward portion of the estimate p_t.
  • the storage elements 204, 206, …, 208 and 210 serve to delay the matrices M_t through M_{t+τ} in order to synchronize the generation of the forward portion with the generation of the backward portion described below.
  • the partial matrix product M_{t+1}^{t+τ} stored in storage element 235 is then right-multiplied by the vector β_τ stored in storage element 228, and the result is multiplied from the left by the forward portion obtained on line 242, thus producing the desired estimate p_t.
  • the partial matrix products stored in the storage elements 230, 232, …, 235 may be generated in a progressive manner according to Equation (5) by storing a sequence of τ−1 matrix products, where each member of the sequence is generated by matrix-multiplying the prior member of the sequence by M_{t+τ+1} from the right and storing the result in the storage element of the next sequence member.
  • storage elements 230 , 232 , . . . , 234 and 235 store the sequence of matrix products.
  • the content of storage element 235 is matrix-multiplied with β_τ by multiplier 225 to generate the next backward portion;
  • storage element 235 is then used to store the result of the matrix product between the content of storage element 234 and M_{t+τ+1}, generated by multiplier 224;
  • storage element 234 is then used to store the matrix product between the content of the next storage element earlier in the sequence and M_{t+τ+1}, generated by multiplier 222, and so on.
  • after the content of storage element 232 is used to generate the matrix product for the following storage element in the sequence, it is used to store the output of multiplier 221. Finally, storage element 230 stores the product M_{t+τ} M_{t+τ+1}. Thus, the storage elements 230, 232, 234 and 235 store the sequence of τ−1 matrix products for generating the backward portion of p_t. The backward portion is multiplied by multiplier 216 with the forward portion to generate p_t as the probability at time t.
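One cycle of the storage-element update described above can be sketched as a shift of suffix products. The function and slot layout are illustrative (slot 0 holds the shortest two-term product, the last slot the full product M_{t+1}…M_{t+τ}):

```python
def mat_mul(A, B):
    """Plain matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def shift_bank(bank, M_newest, M_new):
    """bank[i] holds a suffix product of i + 2 channel matrices; the last
    entry, M_{t+1}...M_{t+tau}, has just been consumed for the backward
    portion.  The new matrix M_{t+tau+1} right-multiplies every surviving
    entry, everything shifts one slot, and a fresh two-term product
    M_{t+tau} M_{t+tau+1} enters slot 0."""
    shifted = [mat_mul(P, M_new) for P in bank[:-1]]
    return [mat_mul(M_newest, M_new)] + shifted
```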
  • the whole sequence of storage elements and multipliers 230 through 235 in FIG. 2 may be replaced with a single storage device, two multipliers and a matrix inversion unit.
  • the latter may be replaced with storage units if the inverse matrices are pre-computed and saved. This embodiment is described below more particularly with respect to FIG. 7 .
  • the generation of p t according to FIG. 2 can be implemented by an exemplary fixed-lag apparatus 250 shown in FIG. 3 .
  • the fixed-lag apparatus may include a controller 252 , a memory 254 , a matrix multiplier 256 , a matrix inverter 258 and an input/output device 260 .
  • the above components are coupled together via signal bus 262 .
  • While the fixed-lag apparatus 250 is shown with a common bus architecture, other structures are well known to one of ordinary skill in the art.
  • the functions performed by each of the devices could be performed by a general purpose computer, digital signal processors, application specific integrated circuits, FPGAs, PLAs, etc., which are well known in the art.
  • when generating p_t, the controller 252 reads values of the matrices M_i out of memory 254 for multiplication by matrix multiplier 256 or inversion by matrix inverter 258.
  • the individual matrices M_t through M_{t+τ} are stored in memory 254, which may be electronic random access memory or other forms of electronic or other storage appreciated by persons skilled in the art.
  • Memory 254 likewise contains the matrix products of storage elements 230-235, which are M_{t+τ−1} M_{t+τ}, M_{t+τ−2} M_{t+τ−1} M_{t+τ}, …, M_{t+1} M_{t+2} … M_{t+τ}.
  • at each time t, the controller 252 generates the matrix M_{t+τ+1}.
  • This matrix may be generated based on the HMM of the underlying process and the received sequence of symbols for the period T (e.g., received encoded data over a communication channel, or a handwriting analysis process).
  • the controller 252 directs matrix multiplier 256 to generate α_t by multiplying α_{t−1}, stored in storage element 226, by M_t, and further directs the matrix multiplier 256 to multiply α_t by W_t to generate the forward portion.
  • the controller 252 generates the backward portion by directing the matrix multiplier 256 to multiply β_τ stored in storage element 228 with M_{t+1}^{t+τ} stored in storage element 235.
  • the controller 252 then generates p_t by directing the matrix multiplier 256 to multiply the forward portion with the backward portion, and outputs p_t to further downstream processes.
  • the controller 252 proceeds to generate each of the matrix products to be stored in the storage elements 230, 232, 234 and 235 by directing the matrix multiplier 256 to multiply M_{t+τ+1} with the contents of each respective storage element and storing the result in the next following storage element in the sequence. In this way, all the contents of the storage elements 230, 232, 234 and 235 are prepared for the generation of p_{t+1}.
  • FIG. 4 shows a FIFO 270 as an exemplary device for the storage elements 204 , 206 , 208 and 210 .
  • the FIFO 270 has τ+1 locations 272, 274, 276 and 278 that correspond to the storage elements 204, 206, 208 and 210, respectively.
  • at each cycle, M_t is read from the FIFO 270 and a new M_{t+τ+1} is generated and "pushed" into the FIFO 270.
  • initially, the FIFO 270 contains M_{1+τ} in location 272, M_τ in location 274, …, M_2 in location 276 and M_1 in location 278.
  • after one cycle, the FIFO 270 contains M_{2+τ} in location 272, M_{1+τ} in location 274, …, M_3 in location 276 and M_2 in location 278.
  • M_1 is consumed by vector-matrix multiplication with α_0 to form α_1, now stored in storage element 226.
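The FIFO behavior can be sketched with a deque as a minimal stand-in for locations 272-278 (the function names are assumptions):

```python
from collections import deque

def new_fifo(tau, first_matrices):
    """FIFO of tau + 1 matrix slots, initially holding M_1 ... M_{tau+1}."""
    assert len(first_matrices) == tau + 1
    return deque(first_matrices)

def fifo_cycle(fifo, next_matrix):
    """Pop the oldest matrix M_t for the forward update and push the newly
    generated M_{t+tau+1}: exactly one matrix in, one out, per cycle."""
    oldest = fifo.popleft()
    fifo.append(next_matrix)
    return oldest
```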
  • FIG. 5 shows an exemplary memory management scheme of a memory space 280 for storage elements 230 , 232 , 234 and 235 .
  • a block of locations 284 , 286 , 288 and 290 in the memory 254 may be set aside corresponding to the storage elements 230 , 232 , 234 and 235 .
  • location 284 contains M_τ M_{1+τ}
  • location 286 contains M_{τ−1} M_τ M_{1+τ}
  • location 288 contains M_3 M_4 … M_{1+τ}
  • location 290 contains M_2 M_3 … M_{1+τ}.
  • the pointer 282 is pointing at location 290 in preparation for generating the backward portion of p 1 .
  • the controller 252 reads the contents of the location pointed to by the pointer 282, obtains M_2 M_3 … M_{1+τ}, and sends this matrix product to the matrix multiplier 256 to generate the first backward portion M_2^{1+τ} β_τ. Then the controller 252 directs the matrix multiplier 256 to multiply M_{1+τ} with M_{2+τ} and stores the product in the location pointed to by the pointer 282, which is location 290, thus overwriting M_2 M_3 … M_{1+τ}. The controller 252 then updates the pointer 282 to point to location 288 by decrementing it by M, for example, where M is the number of elements in each matrix product. In this regard, each of the locations 284, 286, 288 and 290 is actually a block of memory space sufficient to store one of the matrix products.
  • the controller 252 directs the matrix multiplier 256 to matrix-multiply the contents of each of the remaining locations 284, 286 and 288 with M_{2+τ}.
  • the memory space 280 is then ready for the next cycle, to generate the backward portion for p_{t+1}.
  • after τ−2 cycles, the pointer 282 would be pointing to location 284. During the (τ−1)th cycle the pointer 282 would be incremented by τ−2 to again point to location 290, which essentially permits the memory space 280 to act as a circular buffer of τ−1 locations 284, 286, 288 and 290.
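The pointer arithmetic can be sketched as a ring of τ−1 slots (the class and method names are hypothetical; modular decrement replaces the explicit wrap-around increment described above):

```python
class CircularBank:
    """Ring of tau - 1 matrix-product slots addressed by a moving pointer,
    mirroring locations 284-290 of FIG. 5.  Illustrative sketch only."""

    def __init__(self, tau):
        self.slots = [None] * (tau - 1)
        self.ptr = tau - 2            # start at the last location

    def read(self):
        """Fetch the product used for the current backward portion."""
        return self.slots[self.ptr]

    def overwrite_and_step(self, product):
        """Overwrite the just-consumed slot with the fresh two-term product
        and decrement the pointer, wrapping so the memory acts as a ring."""
        self.slots[self.ptr] = product
        self.ptr = (self.ptr - 1) % len(self.slots)
```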
  • α_{t−1} is output over signal line 340 to multiplier 314 for right-multiplication by W_t, that product in turn being output to multiplier 316.
  • the storage elements 304, 306, 308 and 310 are also updated by shifting their contents, thus preparing for the next cycle.
  • the decoder 160 receives the value p_t, which is used to decode the current transmitted symbol. The following illustrates the calculation of several initial values of p_t.
  • the conventional products of Markov matrices are truncated according to the persistence of memory in communications channel 130 , reducing storage and computation significantly.
  • decoder 160 as illustrated in FIG. 2 outputs to input/output unit 170 a probability value p t that a symbol X t was transmitted at time t.
  • the encoder/transmitter 110 may be transmitting wireless voice or data signals over communications channel 130 , and input/output unit 170 may output a voice output over receiving unit 150 , such as a voice sound.
  • The general form of the equation for calculating the partial matrix products according to the invention is shown in Equation 7 above for arbitrary τ. As can be seen from that expression, in the invention it is only necessary to compute matrix products of the matrices modeling the communication channel, whether wireless, radio frequency, optical or otherwise, over the period of time τ representing the channel memory.
  • the transmitted information symbols 140 are illustrated in FIG. 1 as being cellular wireless voice or data symbols, however, it will be understood that the invention can be applied to any information signals that can be modeled by an HMM.
  • Such information signals could also be, for instance, voice recognition information, handwriting information, bioelectrical signals such as electrocardiographs, seismic signals, and others.
  • each letter would represent an information symbol which is modeled by an HMM, whose states are composed of preceding and succeeding letters and some hidden states representing a particular style of writing, for example, which would be reflected in matrices drawn to that model.
  • the system and method of the invention achieves information decoding in a streamlined manner.
  • it is possible among other things to avoid having to store all forward (as well as backward) vectors in an HMM, and moreover to look forward through the chain by only a fixed lag, rather than through the entire sequence.
  • This reflects the realization that time delays or fades which create a memory effect and distort a channel are of finite duration. Those distortions could only influence the present information signal as long as those time delays, fades or other distortions are still propagated.
  • the invention capitalizes on these and other characteristics of non-Gaussian channels to achieve improved processing efficiency, while placing much reduced demands on processor bandwidth and storage capacity. Further efficiencies are gained when coefficients are recovered using an inverse matrix as described above.
  • An embodiment of the invention is illustrated in FIG. 7, in which advantage is taken of the properties of matrix inversion to realize storage gains in the backward portion of the algorithm. Specifically, when dealing with the products of matrices necessary to compute the backward portion, it is possible to avoid the chain multiplication over the complete time period t to t+τ when the intermediate matrix can be inverted.
  • the matrices for the forward portion of the algorithm are stored similarly to the apparatus of FIG. 2, with M_t being stored in storage element 708, M_{t+1} being stored in storage element 706, and so forth, with the last, M_{t+τ}, being stored in storage element 700.
  • M_t is multiplied by α_{t−1} stored in storage element 710 by multiplier 712, and the result is stored in storage element 710, thus generating the forward portion α_t.
  • α_{t−1} is sent over signal line 720 for multiplication by W_t, and that result is then multiplied by multiplier 716 by the product of M_{t+1}^{t+τ} stored in storage element 726 and β_τ stored in storage element 724 to create p_t, generally as in the other described embodiment.
  • M_{t+2}^{t+τ+1} can be generated by inverting M_{t+1} in the matrix inverter 258, multiplying M_{t+1}^{t+τ} by that inverted matrix in multiplier 730 to generate M_{t+2}^{t+τ}, and then multiplying M_{t+2}^{t+τ} by M_{t+τ+1} in multiplier 728 to generate M_{t+2}^{t+τ+1}.
  • This has the effect of removing the earliest term from the matrix product while adding the next multiplicative term at time t+τ+1. Because all of the old matrix products except the last term are dropped and the new value replaces the old one in storage element 726, no more of the backward sequence needs to be saved in order to update the backward portion.
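The sliding-product update can be sketched for 2×2 matrices as follows. The inverse helper assumes a nonsingular matrix; as noted above, in practice the inverses may be pre-computed and stored. Names and test matrices are illustrative:

```python
def mat_mul(A, B):
    """Plain matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a nonsingular 2x2 matrix."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def slide_product(P, M_oldest, M_new):
    """Given P = M_{t+1} ... M_{t+tau}, strip the earliest factor with its
    inverse and append the newest one:
        M_{t+2} ... M_{t+tau+1} = inv(M_{t+1}) * P * M_{t+tau+1}."""
    return mat_mul(mat_mul(inv2(M_oldest), P), M_new)
```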
  • the decoding operation is illustrated in another aspect in the flowchart of FIG. 8. It will be understood that the following processing steps are executed by controller 252 in coordination with memory 254, matrix multiplier 256 and related elements. Processing begins in step 610, followed by initialization in step 615 of the matrices and parameters as described herein. In step 620, the current matrix is read from the FIFO, and in step 625 that quantity is used to update the forward vector α_{t−1}. In step 626, α_{t−1} W_t is generated. In step 630, α_t is stored in storage location 226. In step 635, β_t is generated.
  • in step 640, p_t, representing the a posteriori probability of the input symbol, is generated by multiplying α_{t−1} W_t and β_t.
  • in step 645, the controller 252 directs the generation of the next matrix model for the following time period.
  • in step 650, the next matrix model is written into the FIFO.
  • in step 655, the next matrix model is multiplied by the contents of each of the storage locations 230, 232, …, 234.
  • in step 660, the results of those multiplications are stored in locations 232, …, 235.
  • the next matrix model is then written into storage location 204 in step 665, and in step 675 the matrix values for the succeeding storage elements 206, 208, …, 210 are replaced with the matrix contents for the next time.
  • the processing tests whether time has reached the end of the time period T. If not, processing repeats for t+1, otherwise it ends in step 685 .
  • the fixed-lag algorithm can be implemented in the vector form thus reducing the computation and storage requirements.
  • a list structure can be used for evaluating the a posteriori probabilities in the following way.
  • s(X_t, Y_1^{u−1}) represents a list element at the moment u.
  • at each step, α_{u−1} is replaced with α_u,
  • and s(X_t, Y_1^{u−1}) is replaced with s(X_t, Y_1^u) using Equation (II.B) and the equation
  • this algorithm does not have a backward portion.
  • computing probabilities with this algorithm requires less memory than required by the forward-backward algorithm.
  • a fixed-lag algorithm using vectors requires less storage and computation than the matrix fixed-lag algorithm presented above. In this case we do not need to keep s(X_t, Y_1^u) in memory for u > t+τ. Therefore, the list of the forward-only algorithm grows only at the beginning, while t ≤ τ.
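The forward-only list algorithm can be sketched as follows. This is a simplification of the patented vector algorithm: normalization is omitted and the names are illustrative. Each pending row vector s is extended by one matrix per step, and is emitted, by summing its elements against the unity column vector, once its lag reaches τ:

```python
def vec_mat(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def vector_fixed_lag(alpha_0, Ms, Ws, tau):
    """Forward-only fixed-lag estimates.  Ms[u] and Ws[u] model time u + 1.
    Only row vectors are stored, never matrix products, and the pending
    list stops growing once it holds tau + 1 entries."""
    alpha = list(alpha_0)
    pending = []               # pending[k] = s(X_t, Y_1^u) for an undecided t
    estimates = []
    for M, W in zip(Ms, Ws):
        pending = [vec_mat(s, M) for s in pending]  # extend every stored vector
        pending.append(vec_mat(alpha, W))           # s(X_u, Y_1^u) = alpha_{u-1} W_u
        alpha = vec_mat(alpha, M)                   # ordinary forward update
        if len(pending) > tau:                      # lag reached: sum against unity vector
            estimates.append(sum(pending.pop(0)))
    return estimates
```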
  • FIG. 9 shows a flow diagram of a process for generating the estimate p t using vectors in an alternate embodiment.
  • Matrix M t+1 is input along signal line 902 .
  • α_t is initially stored in storage element 904.
  • α_t is a row vector.
  • α_t is right-multiplied by M_{t+1} at multiplier 906 and right-multiplied by the matrix W_{t+1} at multiplier 910.
  • the result of the first multiplication is then stored in storage element 904 .
  • the multiplication yields a row vector being stored in storage element 904 .
  • the result of the second multiplication is stored in storage element 912 as shown by arrow 914 .
  • the contents of storage element 912 and storage element 914 are right-multiplied by the matrix M_{t+1} at multipliers 916 and 920 and shifted to the next storage element as indicated by the arrows. Additional storage elements may be added with the same multiplication pattern, as indicated by the dashed lines 926.
  • the product of the last multiplication is stored in storage element 918 .
  • This product is right-multiplied with β_τ. Since β_τ is a unity column vector, the mathematical operation amounts to summing the elements of the row vector s_{t+τ+1}.
  • the storage requirements of storage elements 904 , 912 , 914 , 918 are less than the storage requirements of the storage elements shown in FIG. 2 (storing matrices).
  • the total number of storage elements shown in FIG. 9 is less than the total number of the storage elements shown in FIG. 2 .
  • the algorithm shown in FIG. 9 has a faster computation time than the algorithm shown in FIG. 2 as well as a smaller memory requirement.
  • FIGS. 10 and 11 are used to exemplify the process described in FIG. 9 .
  • FIG. 10 illustrates a convolutional encoder 1000 having shift registers 1004 and 1006 and summers 1008 and 1010 .
  • Input symbols I j are input into encoder 1000 along signal line 1002 .
  • FIG. 10 shows that
  • encoder 1000 output symbols X j1 and X j2 are mapped to a modulator, such as a quadrature phase shift keying (QPSK) modulator shown in FIG. 10 .
  • QPSK quadrature phase shift keying
  • the encoder is a rate one-half encoder, outputting two bits for each input bit.
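A rate one-half encoder of this shape can be sketched as follows. The tap connections of FIG. 10 are not reproduced in the text, so the common generators 1 + D + D^2 and 1 + D^2 are assumed purely for illustration:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder: two shift registers (s1, s2) and two
    modulo-2 summers emitting output bits (x1, x2) for each input bit I_j.
    The generator taps are assumptions, not necessarily those of FIG. 10."""
    s1 = s2 = 0
    out = []
    for b in bits:
        x1 = b ^ s1 ^ s2      # first summer: generator 1 + D + D^2
        x2 = b ^ s2           # second summer: generator 1 + D^2
        out.extend([x1, x2])
        s1, s2 = b, s1        # shift the registers
    return out
```

Two output bits per input bit reproduces the rate one-half behavior described above; the output pairs would then be mapped to QPSK symbols by the modulator.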
  • the modulated symbols are transmitted over a communications channel with memory that is modeled by an HMM.
  • the α_0 row vector represents the initial conditions of the communications channel.
  • the P(0) square matrix and the P(1) square matrix are the matrix probabilities of correct reception and erroneous reception, respectively. Assume further that the following bit sequence is received:
  • Column one represents time t and column eight represents the predicted input bit.

Abstract

The present invention discloses an apparatus and method of decoding information received over a noisy communications channel to determine the intended transmitted information. The present invention uses a vector fixed-lag algorithm to determine the probabilities of the intended transmitted information. The algorithm is implemented by multiplying an initial state vector with a matrix containing information about the communications channel. The product is then recursively multiplied by the matrix τ times, using the new product with each recursive multiplication and the forward information is stored for a fixed period of time, τ. The final product is multiplied with a unity column vector yielding a probability of a possible input. The estimated input is the input having the largest probability.

Description

CROSS-REFERENCES TO RELATED APPLICATIONS
This application is a continuation-in-part of U.S. patent application Ser. No. 09/183,474, filed Oct. 30, 1998, now U.S. Pat. No. 6,226,613, issued May 1, 2001, entitled Fixed-Lag Decoding of Input Symbols to Input/Output Hidden Markov Models.
FIELD OF THE INVENTION
The present invention relates generally to a method and apparatus for decoding received symbols. More particularly, the present invention discloses a vector fixed-lag algorithm for determining the probabilities of transmitted symbols given received symbols.
BACKGROUND OF THE INVENTION
Forward-backward algorithms (FBAs) are often used in a variety of applications such as speech recognition, handwriting verification such as signature verification, error correction code decoding, etc., to calculate probabilities. As the name suggests, FBAs are a combination of forward algorithms and backward algorithms using vector-matrix products. Equipment that performs the algorithms requires large amounts of memory for storing all the matrices and intermediate matrix products needed to support the algorithms.
FBAs can be used to calculate the probabilities associated with the functions of Hidden Markov Models (HMMs) in voice recognition to recognize discrete and continuous speech. When a HMM is applied to describe a communication channel, products of sequences of probability density matrices are used to estimate the a posteriori probabilities of transmitted symbols given the received symbols. In other words, mathematical models are used to estimate the probabilities of the transmitted symbol knowing the received symbol.
Conventional FBA techniques require that a sequence of matrices, multiplied by a first vector in a recursive manner in the forward part of the algorithm, be stored in memory. The decoding process can start only after a long sequence of symbols has been received. This is unacceptable in many applications (a telephone application, for example) that impose strict constraints on the message delivery delay. Thus, new technology is needed to improve the vector-matrix product calculation so that a decoder can estimate the product, and thus estimate the input symbols, without waiting for the whole symbol sequence to be received. This technology enables a designer to trade product estimation accuracy for smaller delays in information delivery.
SUMMARY OF THE INVENTION
The invention provides a method and apparatus that performs a fixed-lag computation process.
The present invention discloses an apparatus and method of decoding information received over a noisy communications channel to determine the intended transmitted information. The present invention improves upon the traditional forward-backward algorithm with a vector fixed-lag algorithm. The algorithm is implemented by multiplying an initial state vector with a matrix containing information about the communications channel. The product is then recursively multiplied by the matrix τ times, using the new product with each recursive multiplication. The new product forward information is stored in storage elements. The final product is multiplied with a final state column vector yielding a probability of a possible input. The estimated input is the input having the largest probability. The invention may be applied to a maximum a posteriori estimation of input symbols in systems modeled by an input-output HMM such as symbols transmitted over noisy channels, to handwriting and speech recognition and other probabilistic systems.
The vector fixed-lag process of the invention replaces the conventional forward-backward algorithm. This eliminates the need of saving long sequences of the forward vectors. Accordingly, memory requirements and decoding delay are reduced when using the fixed-lag process to decode information transmitted over a communication channel.
The present invention discloses a fixed-lag method for determining the probability of a transmitted symbol at a time t, transmitted along a communications channel with bursts of errors, given a received symbol. The method comprises obtaining an initial state information vector about the channel and obtaining channel information matrices describing the probabilities that the transmitted symbol would be transmitted along the communications channel with and without error. The method further comprises generating intermediate probabilities, each intermediate probability being the product of the initial state information vector at a time previous to time t and a channel information matrix, storing the intermediate probabilities in storage elements, and multiplying a last intermediate probability with a final state vector to yield the probability of the transmitted symbol.
BRIEF DESCRIPTION OF THE DRAWING
The invention will be described with reference to the accompanying Figures in which like elements are referenced with like numerals and in which:
FIG. 1 illustrates information processing according to the invention over a wireless communication channel;
FIG. 2 illustrates a decoder used to decode symbols transmitted according to FIG. 1;
FIG. 3 illustrates a decoder in another aspect;
FIG. 4 illustrates matrix storage according to the invention;
FIG. 5 illustrates matrix storage according to the invention in another aspect;
FIG. 6 illustrates a fixed-lag decoding apparatus for three memory elements according to the invention;
FIG. 7 illustrates a fixed-lag decoding apparatus according to another embodiment of the invention in which matrix inversion is used;
FIG. 8 illustrates a flowchart according to the invention;
FIG. 9 illustrates a decoder in accordance with another embodiment of the present invention;
FIG. 10 illustrates an encoder in which the present invention may be used; and
FIG. 11 illustrates a table of data exemplifying the embodiments of FIGS. 9 and 10.
DETAILED DESCRIPTION OF THE INVENTION
The invention provides a method and apparatus to generate estimates for processing data symbols using algorithms and sequences of matrices. The purpose of the algorithms is to determine the intended transmitted or input symbol from the received symbol, which has been corrupted with noise. In general, the matrices reflect the relationship between a system state variables, input sequences and output sequences. For example, the matrices may describe a HMM of a communication system representing the following probabilities: Pr(Xt,Yt,St|St−1). In other words, the matrices describe the transition from state St−1 (i.e., state S at a prior time of t−1) to the next state St (i.e., state S at a later time t) and generate the next input symbol Xt and the next output symbol Yt.
The communication system modeled above could be a wireless radio system, fiber optical system, wired system, or other suitable system. Many other systems can be analyzed using state matrix information. For instance, bioelectrical signals such as electrocardiograms, seismic measurements, handwriting recognition devices, speech recognition devices, control systems and others can be modeled as machines or processes whose next state depends upon the current state, plus input information or symbols. All these systems can be described in terms of communications systems. For example, in speech recognition, the output sequence is what is heard, while the input sequence is the intended meaning. In handwriting recognition, the output is the sequence of scanned handwritten symbols, while the input is the intended sequence of letters that a decoder must recognize. Therefore, in the sequel we will use the communication system terminology, but the results have a broader application.
For the applications noted above or for other suitable applications, the following general structure may be used to calculate the matrix product to determine the probability of the intended input:

$$p_t = \alpha_0 \prod_{i=1}^{t-1} M_i \, W_t \prod_{i=t+1}^{T} M_i \, \beta_T = \alpha_{t-1} W_t \beta_t \tag{1}$$
where α0 is a row vector representing an initial condition, βT is a column vector representing a terminal condition, and Mi and Wi are square matrices. For different applications, matrices Mi can have different meanings.
Although not exemplified here, the matrices Mi and Wi could be of a dimension other than square as long as the dimensions of the row and column vector correspond appropriately to permit for proper matrix multiplication.
The evaluation of the parameter pt according to Equation (1) above is conventionally done by the forward-backward algorithm (FBA). The FBA requires that the decoding unit must receive all symbols in an input sequence, compute and store the forward vectors

$$\alpha_t = \alpha_0 \prod_{i=1}^{t} M_i \quad \text{for all } t = 1, 2, \ldots, T, \tag{2}$$
then compute the backward vectors

$$\beta_t = \prod_{i=t+1}^{T} M_i \, \beta_T, \tag{3}$$
and compute $p_t = \alpha_{t-1} W_t \beta_t$ for all t = T−1, T−2, . . . , 1. T represents some total time period, which is usually equal to the number of observed output symbols.
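The forward-backward computation of Equations (1)-(3) can be sketched as follows. This is an illustrative sketch only: the 2×2 matrices, the initial vector and the horizon T below are made-up stand-ins, not values from the patent.

```python
# Illustrative sketch of Equations (1)-(3); all model values are hypothetical.

def vec_mat(v, A):  # row vector times matrix
    return [sum(v[k] * A[k][j] for k in range(len(A))) for j in range(len(A[0]))]

def mat_vec(A, v):  # matrix times column vector
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

T = 4
alpha0 = [0.6, 0.4]                  # initial row vector
betaT = [1.0, 1.0]                   # terminal column vector
M = [[[0.7, 0.2], [0.1, 0.8]]] * T   # M_1 .. M_T (list index i holds M_{i+1})
W = [[0.5, 0.1], [0.2, 0.4]]         # W_t, taken the same for every t here

# Forward part, Equation (2): alpha_t = alpha_0 M_1 ... M_t, stored for every t.
alphas = [alpha0]
for t in range(T):
    alphas.append(vec_mat(alphas[-1], M[t]))

# Backward part, Equation (3): beta_t = M_{t+1} ... M_T beta_T, computed in reverse.
betas = [None] * (T + 1)
betas[T] = betaT
for t in range(T - 1, 0, -1):
    betas[t] = mat_vec(M[t], betas[t + 1])

# Equation (1): p_t = alpha_{t-1} W_t beta_t.
p = [dot(vec_mat(alphas[t - 1], W), betas[t]) for t in range(1, T + 1)]
```

Note that every forward vector and every backward vector is retained, which is exactly the memory burden the invention removes.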
This calculation places large demands on memory and processing resources. The present invention avoids the necessity of storing the complete symbol sequence and reduces processing time compared to conventional technology. The invention does so in part by observing that a sufficient estimate of pt may be made if the application exhibits a fading or finite memory, so that some tail portion of the product

$$\beta_t = \prod_{i=t+1}^{T} M_i \, \beta_T$$

shown in Equations (1) and (3) may be ignored with little penalty in accuracy.
FIG. 1 shows an exemplary communications system 10 as a typical application of the estimate process according to the invention. In FIG. 1, an information source 100 outputs information signals to an encoder/transmitter 110, such as a base station in a wireless cellular communications system. Encoder/transmitter 110 transmits an encoded signal from antenna 120 over a communication channel 130, which may, for instance, be the radio frequency channels according to Personal Communications Service (PCS) or other forms of communication channels. The transmitted symbols 140 are received at a receiving unit 150, which may be a mobile cellular telephone, over an antenna 180. The receiving unit 150 receives the transmitted symbols 140 and processes them in a decoder 160 to provide decoded output symbols to an input/output unit 170. The input/output unit 170 may, for instance, output voice sounds in a cellular telephone.
Real communication channels are characterized by the bursty nature of errors that can be modeled quite accurately by HMMs as known in the art. Therefore, communications system 10 may be modeled by an HMM, and the transmitted symbols 140 may be decoded by known methods such as maximum a posteriori (MAP) symbol estimation as briefly discussed herein.
In many applications, it is necessary to find a maximum a posteriori estimate of a symbol Xt by maximizing its a posteriori probability density function (APPDF) as follows:

$$p(X_t \mid Y_1^T) = \frac{p(X_t, Y_1^T)}{p(Y_1^T)}. \tag{a}$$
Since the received sequence $Y_1^T$ is fixed, it is sufficient to maximize the unnormalized APPDF $p(X_t, Y_1^T)$ as follows:

$$\hat{X}_t = \arg\max_{X_t} p(X_t, Y_1^T) = \arg\max_{X_t} p(X_t \mid Y_1^T), \tag{b}$$

where

$$p(X_t, Y_1^T) = \pi \prod_{i=1}^{t-1} P(Y_i) \, P(X_t, Y_t) \prod_{i=t+1}^{T} P(Y_i) \, \mathbf{1}. \tag{c}$$
This equation can be evaluated by the forward-backward algorithm.
Forward part: Compute and save
$$\alpha(Y_1^0) = \pi, \quad \alpha(Y_1^t) = \alpha(Y_1^{t-1}) P(Y_t), \quad t = 1, 2, \ldots, T-1. \tag{d}$$
Backward part: For t=T, T−1, . . . , 2 compute
$$p(X_t, Y_1^T) = \alpha(Y_1^{t-1}) P(X_t, Y_t) \beta(Y_{t+1}^T), \quad \text{where} \tag{e}$$

$$\beta(Y_{T+1}^T) = \mathbf{1}, \quad \beta(Y_t^T) = P(Y_t) \beta(Y_{t+1}^T). \tag{f}$$
If we need to calculate only one or two of the products in Equation (c), we can apply a forward algorithm, but if we need to calculate p(Xt,Y1 T) for many values of t, we use the forward-backward algorithm.
Since all products of probabilities tend to zero, to increase the calculation accuracy and avoid underflow, it is necessary to scale the equations if T is not small. The scaled vectors are denoted as follows:
$$\bar{\alpha}(Y_1^t) = c_t \, \alpha(Y_1^t). \tag{g}$$
After the variable substitution, Equation (d) takes the form

$$\bar{\alpha}(Y_1^{t+1}) = \lambda_{t+1} \bar{\alpha}(Y_1^t) P(Y_{t+1}), \tag{h}$$

where $\lambda_{t+1} = c_{t+1}/c_t$.
Let $d_t$ be the scaling factor for $\beta(Y_t^T)$:

$$\bar{\beta}(Y_t^T) = d_t \, \beta(Y_t^T). \tag{i}$$
If we use $\bar{\alpha}(Y_1^{t-1})$ instead of $\alpha(Y_1^{t-1})$ and $\bar{\beta}(Y_{t+1}^T)$ instead of $\beta(Y_{t+1}^T)$ in Equation (e), we obtain

$$\bar{p}(X_t, Y_1^T) = p(X_t, Y_1^T) \, \mu_t, \quad \text{where } \mu_t = c_{t-1} d_{t+1}.$$

If the scaling factors do not depend on Xt, then μt does not depend on Xt, and the solution of Equation (b) does not change if we replace $p(X_t, Y_1^T)$ with $\bar{p}(X_t, Y_1^T)$.
In principle, ci and di can be any numbers. However, it is convenient to choose

$$c_t = 1 / \alpha(Y_1^t) \mathbf{1}, \tag{j}$$

so that the normalized vector satisfies $\bar{\alpha}(Y_1^t) \mathbf{1} = 1$.
The normalized vectors can be obtained recursively using Equation (d) and normalizing the result after each recursive step:

$$\hat{\alpha}(Y_1^{t+1}) = \bar{\alpha}(Y_1^t) P(Y_{t+1}), \quad \bar{\alpha}(Y_1^{t+1}) = \lambda_{t+1} \hat{\alpha}(Y_1^{t+1}), \tag{k}$$

where

$$\lambda_{t+1} = 1 / \hat{\alpha}(Y_1^{t+1}) \mathbf{1} = c_{t+1} / c_t.$$
The normalization factors $c_t$ can be recovered from the normalization factors $\lambda_t$ of the scaled forward algorithm (k):

$$c_t = \prod_{i=1}^{t} \lambda_i.$$
We can select the normalizing factors for $\beta(Y_t^T)$ similarly. However, if we use

$$d_t = \prod_{i=t+1}^{T} \lambda_i,$$

we will have $c_t d_t = 1/p(Y_1^T)$ for all t, and we can write the APPDF as
$$p(X_t \mid Y_1^T) = \bar{\alpha}(Y_1^{t-1}) P(X_t, Y_t) \bar{\beta}(Y_{t+1}^T) / \lambda_t.$$
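The scaled forward recursion (g)-(k) can be sketched as follows. The matrix P and the initial vector are hypothetical stand-ins; the point of the example is that renormalizing after every step keeps the vector well scaled while the per-step factors λ let the true magnitude be recovered.

```python
# Sketch of the scaled forward recursion (k); model values are hypothetical.

def vec_mat(v, A):
    return [sum(v[k] * A[k][j] for k in range(len(A))) for j in range(len(A[0]))]

P = [[0.09, 0.01], [0.02, 0.08]]    # hypothetical matrix probability P(Y_t)
pi = [0.5, 0.5]                     # initial state distribution

alpha_bar = pi[:]                   # scaled forward vector
lambdas = []                        # per-step normalization factors
for _ in range(50):                 # 50 unscaled steps would shrink toward 1e-50
    alpha_hat = vec_mat(alpha_bar, P)
    lam = 1.0 / sum(alpha_hat)      # lambda_{t+1} = 1 / (alpha_hat * 1)
    lambdas.append(lam)
    alpha_bar = [lam * a for a in alpha_hat]

# c_t is recovered as the product of the lambdas.
c_t = 1.0
for lam in lambdas:
    c_t *= lam
```

The unscaled vector satisfies $\alpha = \bar{\alpha}/c_t$, so the true sequence probability can be read back from the accumulated factors without ever storing tiny numbers in the recursion itself.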
If T is large and the maximum density functions do not have special structures simplifying their multiplication, the forward-backward algorithm uses a lot of computer resources. Therefore, it is beneficial to find approximate algorithms that have a satisfactory accuracy.
One of the approaches is based on the fact that many processes have a "fading" memory: the dependency between process samples is a decreasing function of their time separation. In this case

$$p(X_t \mid Y_1^T) \approx p(X_t \mid Y_1^{t+\tau})$$

and we can use the fixed-lag algorithm.
With reference back to the modeling, an FBA process may be applied that evaluates a probability at time t, $P(X_t \mid Y_1^T)$, for the transmitted symbol Xt and for the actually received symbols $Y_1^T = Y_1, Y_2, \ldots, Y_T$. $P(X_t \mid Y_1^T)$ is proportional to

$$P(X_t, Y_1^T) = \alpha_{t-1} P(X_t, Y_t) \beta_t,$$
where α0 is the row vector of the Markov state initial probabilities, and αt and βt are computed according to Equations (2) and (3), in which Mi = P(Yi) represents the matrix probabilities of receiving symbols Yi. However, channel distortions affecting the transmitted information symbols 140 persist only for a finite period of time, for instance as a result of multipath fading. Thus, it is only necessary to look forward by a fixed period of time, or time lag τ, through the received sequence to decode the transmitted symbols.
If the memory in the communication channel is of length τ, then the probability $P(X_t \mid Y_1^T)$ at time t of a transmitted symbol Xt, given the received sequence, may be estimated by the expression

$$p_t \approx \alpha_0 \prod_{i=1}^{t-1} M_i \, W_t \prod_{i=t+1}^{t+\tau} M_i \, \beta = \alpha_{t-1} W_t \beta_{t,\tau}, \tag{3.1}$$
where $W_t = P(X_t, Y_t)$ is the matrix probability of transmitting Xt and receiving Yt. When compared with the conventional FBA, at a given time t, only the terms extending from 1 to t+τ are computed instead of 1 to T, where T is the total time period of the complete received symbols. Thus, the terms extending from t+τ to T are eliminated when computing the estimate. The invention presents the algorithm for computing the vectors

$$\beta_{t,\tau} = \prod_{i=t+1}^{t+\tau} M_i \, \beta = M_{t+1}^{t+\tau} \beta \tag{4}$$
recursively, thus saving both the memory space and processing time required to support computation of pt.
The invention makes use of the fact that the matrices

$$M_{t+1}^{t+\tau} = \prod_{i=t+1}^{t+\tau} M_i$$
can be computed recursively by the following equation
$$M_{t+k+1}^{t+\tau+1} = M_{t+k+1}^{t+\tau} \, M_{t+\tau+1}, \quad k = 1, 2, \ldots, \tau, \tag{5}$$
and then compute $\beta_{t+1,\tau} = M_{t+2}^{t+\tau+1} \beta$. The vector β = 1 in most applications. With β equal to a unity column vector, the mathematical computation is the summing of the elements (by rows) of the matrix $M_{t+2}^{t+\tau+1}$ being multiplied by the unity vector.
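The recursion of Equation (5) can be sketched as a sliding window of suffix products. This is a sketch under assumptions: the channel matrices are randomly generated stand-ins, and for simplicity the window keeps τ products (lengths 1 through τ) rather than the τ−1 products (lengths 2 through τ) held by the storage elements described below for FIG. 2.

```python
import random
from collections import deque

# Sketch of the Equation (5) update; the matrices M_i are hypothetical.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

random.seed(0)
def next_M():  # hypothetical time-varying channel matrix
    a, b = random.uniform(0.6, 0.9), random.uniform(0.6, 0.9)
    return [[a, 1 - a], [1 - b, b]]

tau = 3
beta = [1.0, 1.0]

# window holds the last tau matrices M_{t+1} .. M_{t+tau}
window = deque(next_M() for _ in range(tau))
# prods[k] = M_{t+k+1} ... M_{t+tau}; prods[0] is the full partial product
prods = [window[-1]]
for A in list(window)[-2::-1]:
    prods.insert(0, mat_mul(A, prods[0]))

for _ in range(5):                        # five arriving matrices
    M_new = next_M()                      # M_{t+tau+1} arrives
    beta_t = mat_vec(prods[0], beta)      # backward vector M_{t+1}^{t+tau} beta
    # Equation (5): right-multiply every kept product by M_{t+tau+1};
    # the longest product is consumed and a fresh one-term product starts.
    prods = [mat_mul(Q, M_new) for Q in prods[1:]] + [M_new]
    window.popleft()
    window.append(M_new)
```

Each arriving matrix costs a fixed number of matrix multiplications regardless of T, which is the source of the memory and delay savings claimed for the fixed-lag process.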
FIG. 2 shows a flow diagram of a general process for generating the estimate pt. In this figure, the letter "R" on a signal line indicates that the corresponding matrix multiplies the matrix on the other line from the right. This is important to show because matrix products are not commutative. As illustrated in FIG. 2, Mt+τ+1 is input on signal line 202 and then passes through a series of matrices: Mt+τ stored in storage element 204, Mt+τ−1 stored in storage element 206, . . . , Mt+1 stored in storage element 208, and Mt stored in storage element 210. αt−1, stored in storage element 226, is then right-multiplied by Mt in multiplier 212, and the result is output over signal line 236 to update αt−1 to αt. αt is output over signal line 240 for right multiplication by Wt in multiplier 214. The result of multiplier 214 is output over signal line 242 to multiplier 216 as a forward portion of the estimate pt. The storage elements 204, 206, . . . , 208 and 210 serve to delay the matrices Mt through Mt+τ to synchronize the generation of the forward portion with the generation of a backward portion as described below. The partial matrix product Mt+1 t+τ stored in storage element 235 is then right-multiplied by the vector β stored in storage element 228, and the result is multiplied from the left by the forward portion obtained on line 242, thus producing the desired estimate pt. The partial matrix products stored in storage elements 230, 232, . . . , 235 may be generated in a progressive manner according to Equation (5) by storing a sequence of τ−1 matrix products, where each member of the sequence is generated by matrix-multiplying a prior member of the sequence by Mt+τ+1 from the right and storing the result in the storage element of the next sequence member.
As shown in FIG. 2, storage elements 230, 232, . . . , 234 and 235 store the sequence of matrix products. When Mt+τ+1 is generated: 1) the content of storage element 235 is matrix-multiplied with β by multiplier 225 to generate the next backward portion; 2) storage element 235 is then used to store the result of the matrix product between the content of storage element 234 and Mt+τ+1, generated by multiplier 224; 3) storage element 234 is then used to store the matrix product between the content of the next storage element earlier in the sequence and Mt+τ+1, generated by multiplier 222; and so on. After the content of storage element 232 is used to generate the matrix product for the following storage element in the sequence, it is used to store the output of multiplier 221. Finally, storage element 230 stores the product Mt+τMt+τ+1. Thus, the storage elements 230, 232, 234 and 235 store a sequence of τ−1 matrix products for generating the backward portion of pt. The backward portion is multiplied by multiplier 216 with the forward portion to generate pt as the probability at time t.
In an alternative implementation of the algorithm, we assume that it is possible to calculate the inverse matrices Mt −1. In this case, the partial matrix products can be evaluated according to the following equation:
$$M_{t+2}^{t+\tau+1} = M_{t+1}^{-1} \, M_{t+1}^{t+\tau} \, M_{t+\tau+1}. \tag{6}$$
Therefore, the whole sequence of storage elements and multipliers 230 through 235 in FIG. 2 may be replaced with a single storage device, two multipliers and the matrix inversion unit. The latter may be replaced with storage units if the inverse matrices are pre-computed and saved. This embodiment is described below more particularly with respect to FIG. 7.
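A sketch of the Equation (6) update, under the assumption that the matrices are invertible; 2×2 matrices with a closed-form inverse keep the sketch self-contained, and the numeric values are hypothetical.

```python
# Sketch of Equation (6): M_{t+2}^{t+tau+1} = M_{t+1}^{-1} M_{t+1}^{t+tau} M_{t+tau+1}.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(A):  # closed-form inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

Ms = [[[0.8, 0.1],  [0.2, 0.9]],   # hypothetical M_1
      [[0.7, 0.2],  [0.1, 0.6]],   # hypothetical M_2
      [[0.9, 0.05], [0.1, 0.8]],   # hypothetical M_3
      [[0.6, 0.3],  [0.2, 0.7]]]   # hypothetical M_4

# Partial product M_1^3 = M_1 M_2 M_3 (tau = 3).
prod = mat_mul(mat_mul(Ms[0], Ms[1]), Ms[2])

# Single-step update to M_2^4: strip M_1 off the front, append M_4 at the back.
prod = mat_mul(mat_mul(inv2(Ms[0]), prod), Ms[3])
```

Only one stored product is ever needed, which is why the chain of storage elements 230 through 235 collapses to a single storage device in this embodiment.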
The generation of pt according to FIG. 2 can be implemented by an exemplary fixed-lag apparatus 250 shown in FIG. 3. The fixed-lag apparatus may include a controller 252, a memory 254, a matrix multiplier 256, a matrix inverter 258 and an input/output device 260. The above components are coupled together via signal bus 262.
While the fixed-lag apparatus 250 is shown with a common bus architecture, other structures are well known to those of ordinary skill in the art. In addition, the functions performed by each of the devices could be performed by a general purpose computer, digital signal processors, application specific integrated circuits, FPGAs, PLAs, etc., which are well known in the art.
When generating pt, the controller 252 reads values of the matrices Mi out of memory 254 for multiplication by matrix multiplier 256 or inversion by matrix inverter 258. The individual matrices Mt−Mt+τ are stored in memory 254, which may be electronic random access memory or other forms of electronic or other storage appreciated by persons skilled in the art. Memory 254 likewise contains the matrix products of storage elements 230-235, which are Mt+τ−1Mt+τ, Mt+τ−2Mt+τ−1Mt+τ, . . . , Mt+1Mt+2 . . . Mt+τ.
At each time t, the controller 252 generates the matrix Mt+τ+1. This matrix may be generated based on the HMM of the underlying process and received sequence of symbols for the period T (e.g., received encoded data over a communication channel, or a handwriting analysis process). Once generated, Mt+τ+1 is stored in the memory 254 and used for the fixed-lag operation as described below.
The controller 252 directs matrix multiplier 256 to generate αt by multiplying αt−1, stored in storage element 226, by Mt, and further directs the matrix multiplier 256 to multiply αt by Wt to generate the forward portion. The controller 252 generates the backward portion by directing the matrix multiplier 256 to multiply β, stored in storage element 228, with Mt+1 t+τ stored in storage element 235. The controller 252 then generates pt by directing the matrix multiplier 256 to multiply the forward portion with the backward portion, and outputs pt to further downstream processes.
After generating the backward portion, the controller 252 proceeds to generate each of the matrix products to be stored in the storage elements 230, 232, 234 and 235 by directing the matrix multiplier 256 to multiply Mt+τ+1 with the contents of each respective storage element and storing the result in the next following storage element in the sequence. In this way, all the contents of the storage elements 230, 232, 234 and 235 are prepared for the generation of pt+1.
FIG. 4 shows a FIFO 270 as an exemplary device for the storage elements 204, 206, 208 and 210. The FIFO 270 has τ+1 locations 272, 274, 276 and 278 that correspond to the storage elements 204, 206, 208 and 210, respectively.
For each t, Mt is read from the FIFO 270 and Mt+τ+1 is generated and "pushed" into the FIFO 270. For example, at time t=1, the FIFO 270 contains M1+τ in location 272, Mτ in location 274, M2 in location 276 and M1 in location 278. At t=2, the FIFO 270 contains M2+τ in location 272, Mτ+1 in location 274, M3 in location 276 and M2 in location 278. M1 is consumed by vector-matrix multiplication with α0 to form α1, now stored in storage element 226.
FIG. 5 shows an exemplary memory management scheme of a memory space 280 for storage elements 230, 232, 234 and 235. A block of locations 284, 286, 288 and 290 in the memory 254 may be set aside corresponding to the storage elements 230, 232, 234 and 235. Thus, at t=1, location 284 contains MτM1+τ, location 286 contains Mτ−1MτM1+τ, location 288 contains M3M4 . . . M1+τ, and location 290 contains M2M3 . . . M1+τ. The pointer 282 points at location 290 in preparation for generating the backward portion of p1. At t=2, the controller 252 reads the contents of the location pointed to by the pointer 282, obtains M2M3 . . . M1+τ, and sends this matrix product to the matrix multiplier 256 to generate the first backward portion M2 1+τβ. Then the controller 252 directs the matrix multiplier 256 to multiply M1+τ with M2+τ and stores the product in the location pointed to by the pointer 282, which is location 290, thus overwriting M2M3 . . . M1+τ. The controller 252 then updates the pointer 282 to point to location 288 by decrementing the pointer 282 by M, for example, where M is the number of elements in each matrix product. In this regard, each of the locations 284, 286, 288 and 290 is actually a block of memory space sufficient to store one of the matrix products.
Then, the controller 252 directs the matrix multiplier 256 to matrix multiply the contents of each of the remaining locations 284, 286 and 288 with M2+τ. At this point, the memory space 280 is ready for the next cycle to generate the backward portion for pt+1.
After τ−2 cycles, the pointer 282 would be pointing to location 284. During the τ−1 cycle, the pointer 282 would be incremented by τ−2 to again point to location 290, which essentially permits the memory space 280 to be a circular buffer of τ−1 locations 284, 286, 288 and 290.
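The pointer bookkeeping of FIG. 5 can be sketched with placeholder slot contents; the strings below stand in for matrix products, and the slot count and labels are made up for illustration.

```python
# Sketch of the FIG. 5 circular-buffer scheme; slot contents are placeholders.

tau = 5
slots = ["prod%d" % k for k in range(tau - 1)]  # tau-1 blocks of memory
ptr = len(slots) - 1                            # pointer starts at the last slot

consumed = []
for cycle in range(2 * (tau - 1)):              # walk the buffer twice
    consumed.append(slots[ptr])                 # read the longest partial product
    slots[ptr] = "new%d" % cycle                # overwrite it with the newest product
    ptr = (ptr - 1) % len(slots)                # decrement, wrapping to the end
```

The oldest product is always overwritten in place, so the τ−1 blocks are reused indefinitely without any data movement.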
FIG. 6 shows a specific example of decoding according to the invention, where τ is set equal to 3 and T=256. As illustrated in FIG. 6, the calculations are initialized using αt−1 and Mt+1 t+3 to generate pt. Matrix Mt+4 is input over signal line 302 into the sequence of matrices Mt+3, Mt+2, Mt+1 and Mt stored in storage elements 304, 306, 308 and 310, respectively. Multiplier 312 right-multiplies αt−1 (of storage element 326) by Mt, thus generating αt. αt is output over signal line 336 and stored into storage element 326, thus updating αt−1 to αt. Simultaneously, αt−1 is output over signal line 340 to multiplier 314 for right multiplication by Wt, that product in turn being output to multiplier 316. Multiplier 320 receives Mt+1 t+3 stored in storage element 332, right-multiplies it by β stored in storage element 328, and outputs the result to multiplier 316, which multiplies it from the left by the quantity αt−1Wt and outputs over signal line 338 the desired result pt = αt−1WtMt+1 t+3β. In the meantime, the contents of the storage elements 330 and 332 are replaced by Mt+3 t+4 = Mt+3Mt+4 and Mt+2 t+4 = Mt+2Mt+3Mt+4, respectively. The storage elements 304, 306, 308 and 310 are also updated by shifting their contents, thus preparing for the next cycle.
In decoder 160, the value pt is used to decode the current transmitted symbol. The following illustrates the calculation of several initial values of pt.
$$p_1 = \alpha_0 W_1 M_2 M_3 M_4 \beta, \quad \alpha_1 = \alpha_0 M_1, \quad M_4^5 = M_4 M_5, \quad M_3^5 = M_3 M_4 M_5,$$

$$p_2 = \alpha_1 W_2 M_3^5 \beta, \quad \alpha_2 = \alpha_1 M_2, \quad M_5^6 = M_5 M_6, \quad M_4^6 = M_4 M_5 M_6, \quad \text{and so on.}$$
As can be seen from FIG. 2 and from

$$p_t \approx \alpha_0 \prod_{i=1}^{t-1} M_i \, W_t \prod_{i=t+1}^{t+\tau} M_i \, \beta, \tag{7}$$
in the invention, the conventional products of Markov matrices are truncated according to the persistence of memory in communications channel 130, reducing storage and computation significantly.
The effects of memory on communications channel 130 are accounted for by the product of matrices Mt−Mt+3. Therefore, decoder 160 as illustrated in FIG. 2 outputs to input/output unit 170 a probability value pt that a symbol Xt was transmitted at time t. In the illustrative embodiment, the encoder/transmitter 110 may be transmitting wireless voice or data signals over communications channel 130, and input/output unit 170 may output a voice output over receiving unit 150, such as a voice sound.
The general form of the equation for calculating the partial matrix products according to the invention is shown in Equation 7 above for arbitrary τ. As can be seen from that expression, in the invention it is only necessary to compute matrix products of matrices modeling the communication channel, whether wireless, radio frequency, optical or otherwise, over the period of time τ representing channel memory.
The transmitted information symbols 140 are illustrated in FIG. 1 as being cellular wireless voice or data symbols, however, it will be understood that the invention can be applied to any information signals that can be modeled by an HMM. Such information signals could also be, for instance, voice recognition information, handwriting information, bioelectrical signals such as electrocardiographs, seismic signals, and others. In a handwriting implementation, for instance, each letter would represent an information symbol which is modeled by an HMM, whose states are composed of preceding and succeeding letters and some hidden states representing a particular style of writing, for example, which would be reflected in matrices drawn to that model.
The system and method of the invention according to the foregoing description achieves information decoding in a streamlined manner. Using the invention, it is possible among other things to avoid having to store all forward (as well as backward) vectors in an HMM, and moreover to look forward through the chain by only a fixed lag, rather than through the entire sequence. This reflects the realization that time delays or fades which create a memory effect and distort a channel are of finite duration. Those distortions could only influence the present information signal as long as those time delays, fades or other distortions are still propagated. The invention capitalizes on these and other characteristics of non-Gaussian channels to achieve improved processing efficiency, while placing much reduced demands on processor bandwidth and storage capacity. Further efficiencies are gained when coefficients are recovered using an inverse matrix as described above.
An embodiment of the invention is illustrated in FIG. 7, in which advantage is taken of the property of matrix inversion to realize storage gains in the backward portion of the algorithm. Specifically, when dealing with the products of matrices necessary to compute the backward portion, it is possible to avoid the chain multiplication over the complete time period t to t+τ when the intermediate matrix can be inverted. In this embodiment, the matrices for the forward portion of the algorithm are stored similarly to the apparatus of FIG. 2, with Mt being stored in storage element 708, Mt+1 being stored in storage element 706, and so forth, with the last, Mt+τ, being stored in storage element 700. αt−1, stored in storage element 710, is multiplied by Mt in multiplier 712, and the result is stored in storage element 710, thus generating the forward portion αt. Simultaneously, αt−1 is sent over signal line 720 for multiplication by Wt, and that result is then multiplied by multiplier 716 by the product of Mt+1 t+τ stored in storage element 726 and β stored in storage element 724 to create pt, generally as in the other described embodiment.
However, according to Equation (6), to update the value of βt at time t+1 in the case of invertible matrices, storing the entire backward portion is not necessary. Mt+2 t+τ+1 can be generated by inverting Mt+1 in the matrix inverter 258, multiplying Mt+1 t+τ by that inverted matrix in multiplier 730 to generate Mt+2 t+τ, and then multiplying Mt+2 t+τ by Mt+τ+1 in multiplier 728 to generate Mt+2 t+τ+1. This has the effect of removing the earliest term from the matrix product while adding the next multiplicative term at time t+τ+1. Because the updated product simply replaces the old one in storage element 726, no further members of the backward sequence need to be saved in order to update βt.
The decoding operation is illustrated in another aspect in the flowchart of FIG. 8. It will be understood that the following processing steps are illustrated as executed by controller 252 in coordination with memory 254, matrix multiplier 256 and related elements. Processing begins in step 610, followed by initialization in step 615 of the matrices and parameters as described herein. In step 620, the current matrix is read from the FIFO, and in 625 that quantity is used to generate the current forward portion, αt−1. In step 626 αt−1Wt is generated. In step 630, αt is stored in storage location 226. In step 635, βt is generated. In step 640 pt representing the a posteriori probability of the input symbol is generated by multiplying αt−1Wt and βt. In step 645, the controller 252 directs the generation of the next matrix model for the following time period. In step 650, the next matrix model is written into the FIFO. In step 655, the next matrix model is multiplied by the contents of each of the storage locations 230, 232, . . . , 234. In step 660, the results of those multiplications are stored in locations 232, . . . , 235. The next matrix model is then overwritten in storage location 204 in step 665, and in step 675 the matrix values for succeeding storage elements 206, 208, . . . , 210 are replaced with the matrix contents for the next time. In step 680, the processing tests whether time has reached the end of the time period T. If not, processing repeats for t+1, otherwise it ends in step 685.
In an alternate embodiment, the fixed-lag algorithm can be implemented in vector form, thus reducing the computation and storage requirements. Consider the following probability vectors:

s(X t ,Y 1 t+τ)=α0 (Π i=1 t−1 M i )W t (Π i=t+1 t+τ M i )=αt−1 W t M t+1 t+τ  (I)
We can see that s(Xt,Y1 t) can be computed recursively as
s(X t ,Y 1 t)=αt−1 W t  (II.A)
s(X t ,Y 1 u)=s(X t ,Y 1 u−1)M u, (u=t+1, . . . , T)  (II.B)
Using these vectors, we can rewrite Equation (1) as
p t =s(X t ,Y 1 T)β  (III)
A list structure can be used for evaluating the a posteriori probabilities in the following way. Suppose that, at the moment u, the list consists of αu−1 and s(Xt,Y1 u−1) for all Xt with t<u. We may then replace αu−1 with αu and s(Xt,Y1 u−1) with s(Xt,Y1 u) using Equation (II.B) and the equation
 αu=αu−1 M u  (IV)
and add to the list s(Xu,Y1 u). At the end, we obtain pt from equation (III).
In contrast with the forward-backward algorithm, this algorithm has no backward portion. Thus, computing probabilities with this algorithm requires less memory than the forward-backward algorithm. In addition, a fixed-lag algorithm using vectors requires less storage and computation than the matrix fixed-lag algorithm presented above. In this case we do not need to keep s(Xt,Y1 u) in memory once u−t exceeds the lag τ, because pt has already been produced. Therefore, the list of the forward-only algorithm grows only at the beginning, while t<τ.
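The equivalence of the forward-only recursion (Equations II.A, II.B and III) with the forward-backward product can be checked on a small synthetic model. All matrix values below are hypothetical; W[t, x] stands in for the input-symbol matrix Wt of symbol x, with M[t] the sum over input symbols as in an input/output HMM.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 3, 6

# Hypothetical nonnegative matrices W_t(x) for inputs x in {0, 1};
# M_t = W_t(0) + W_t(1). Index 0 is unused so times run 1..T.
W = rng.random((T + 1, 2, n, n)) * 0.5
M = W.sum(axis=1)
alpha0 = np.full(n, 1.0 / n)      # initial state row vector
beta = np.ones(n)                 # unity final-state column vector

def fb(t, x):
    """Forward-backward reference: alpha_{t-1} W_t(x) M_{t+1}...M_T beta."""
    a = alpha0.copy()
    for i in range(1, t):
        a = a @ M[i]
    b = beta.copy()
    for i in range(T, t, -1):
        b = M[i] @ b
    return a @ W[t, x] @ b

def forward_only(t, x):
    """Forward-only recursion of Eqs. (II.A), (II.B) and (III)."""
    a = alpha0.copy()
    for i in range(1, t):
        a = a @ M[i]
    s = a @ W[t, x]               # Eq. (II.A)
    for u in range(t + 1, T + 1):
        s = s @ M[u]              # Eq. (II.B)
    return s @ beta               # Eq. (III)

for t in range(1, T + 1):
    for x in (0, 1):
        assert np.isclose(fb(t, x), forward_only(t, x))
```

The forward-only version stores only row vectors, which is the source of the memory savings claimed above.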
This vector fixed-lag algorithm is illustrated in FIG. 9. FIG. 9 shows a flow diagram of a process for generating the estimate pt using vectors in an alternate embodiment. Matrix Mt+1 is input along signal line 902. αt is initially stored in storage element 904. Recall that αt is a row vector. In this embodiment, αt is right-hand multiplied by Mt+1 at multiplier 906 and right-hand multiplied by matrix Wt+1 at multiplier 910. The result of the first multiplication is then stored in storage element 904. For exemplary purposes, if a row vector and a square matrix are used, the multiplication yields a row vector, which is stored in storage element 904. The result of the second multiplication is stored in storage element 912 as shown by arrow 914.
The storage elements 912, 914, and 918 at time t contain the probability vectors st+1=s(Xt,Y1 t), st+2=s(Xt−1,Y1 t), . . . , st+τ+1=s(Xt−τ,Y1 t), respectively. The contents of storage elements 912 and 914 are right-hand multiplied by matrix Mt+1 at multipliers 916 and 920 and shifted to the next storage element as indicated by the arrows. Additional storage elements may be added with the same multiplication pattern, as indicated by the dashed lines 926. The product of the last multiplication is stored in storage element 918. This product is right-hand multiplied by the final state vector β. As discussed above, if β is a unity column vector, this multiplication amounts to summing the elements of the row vector st+τ+1. The product is the probability pt−τ−1=p(Xt−τ−1,Y1 t−1).
Because row vector αt right-hand multiplied by matrix Mt+1 yields a row vector, the storage elements 904, 912, 914 and 918 hold vectors rather than matrices, so their storage requirements are less than those of the storage elements shown in FIG. 2 (which store matrices). In addition, the total number of storage elements shown in FIG. 9 is less than the total number shown in FIG. 2. Thus, the algorithm shown in FIG. 9 has a faster computation time than the algorithm shown in FIG. 2, as well as a smaller memory requirement.
FIGS. 10 and 11 are used to exemplify the process described in FIG. 9. FIG. 10 illustrates a convolutional encoder 1000 having shift registers 1004 and 1006 and summers 1008 and 1010. Input symbols Ij are input into encoder 1000 along signal line 1002. As shown in FIG. 10,
x j1 =I j +I j−1 +I j−2
as shown by signal line 1002, shift registers 1004 and 1006 and summer 1008, and
x j2 =I j +I j−2
as shown by signal line 1002, shift register 1006 and summer 1010. The state of encoder 1000 is given by the contents of shift registers 1004 and 1006 and is represented by the expression Sj=[Ij−1, Ij−2]. The encoder 1000 output symbols xj1 and xj2 are mapped to modulation symbols by a modulator, such as the quadrature phase shift keying (QPSK) modulator shown in FIG. 10. The encoder is a rate one-half encoder, outputting two bits for each input bit. The modulated symbols are transmitted over a communications channel with memory that is modeled by an HMM.
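The encoder of FIG. 10 can be sketched as follows, assuming (as is standard for convolutional encoders) that the summers perform modulo-2 addition, so that xj1 = Ij ⊕ Ij−1 ⊕ Ij−2 and xj2 = Ij ⊕ Ij−2:

```python
def encode(bits):
    """Rate-1/2 convolutional encoder of FIG. 10 (mod-2 sums)."""
    s1 = s2 = 0                   # shift registers holding I_{j-1}, I_{j-2}
    out = []
    for i in bits:
        out.append((i ^ s1 ^ s2,  # x_{j1}: signal line + both registers
                    i ^ s2))      # x_{j2}: signal line + second register
        s1, s2 = i, s1            # shift the state
    return out

# Each input bit yields two output bits (rate one-half).
print(encode([1, 0, 1]))          # → [(1, 1), (1, 0), (0, 0)]
```

The two-element state (s1, s2) corresponds to Sj in the text, i.e. the two most recent past inputs.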
Assume that the communications channel has bursts of errors, as represented by the following parameters:

α0 = [0.91892  0.08108]

P(0) = [0.997  0.00252
        0.034  0.81144]

P(1) = [0.0    0.00048
        0.0    0.15456]
The α0 row vector represents the initial conditions of the communications channel. The P(0) square matrix and the P(1) square matrix are the matrix probabilities of correct reception and erroneous reception, respectively. Assume further that the following bit sequence is received:
Y1 T=11 01 11 00 00 11 01 01 00 10 11 00, where T=12. (Given the rate one-half encoder, Y1=11, Y2=01, Y3=11, . . . , Y12=00.)
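As a consistency check on the parameters above, P(0)+P(1) should form the channel's state transition matrix (each row summing to 1), and α0 should be a probability distribution:

```python
import numpy as np

# Channel parameters from the example (burst-error HMM):
alpha0 = np.array([0.91892, 0.08108])
P0 = np.array([[0.997, 0.00252], [0.034, 0.81144]])   # correct reception
P1 = np.array([[0.0, 0.00048], [0.0, 0.15456]])       # erroneous reception

# P(0) + P(1) is the state transition matrix: rows sum to 1,
# and the initial distribution alpha0 sums to 1.
assert np.allclose((P0 + P1).sum(axis=1), 1.0)
assert np.isclose(alpha0.sum(), 1.0)
```

This row-stochastic decomposition into "correct" and "erroneous" matrix probabilities is what lets the received bit sequence select a matrix model Mt at each step.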
FIG. 11 illustrates the normalized a posteriori probabilities of the transmitted bits given the above received bits using the forward-backward algorithm (columns two and three); the vector fixed-lag algorithm with τ=1 (columns four and five); and the vector fixed-lag algorithm with τ=3 (columns six and seven). Column one represents time t and column eight represents the predicted input bit.
So, at time t=0, using the forward-backward algorithm, we can see that the probability that the input X0 was a 0 is 0.00000 and the probability that X0 is a 1 is 0.79311. Thus, it is more likely that the input bit X0 was a 1. Using the vector fixed-lag algorithm with the lag, or memory τ=1, we can see that the probability that the input X0 was a 0 is 0.00013 and the probability that X0 is a 1 is 0.24542. Thus, under this algorithm with τ=1, it is still more likely that the input bit X0 was a 1.
Finally, using the vector fixed-lag algorithm, with the lag, or memory τ=3, we can see that the probability that the input X0 was a 0 is 0.00003 and the probability that X0 is a 1 is 0.67481. Thus, under this algorithm with τ=3 it is also more likely that the input bit X0 was a 1. Column eight shows that the input X0 is 1. The remaining entries in the table show the probabilities of input symbols at times t=1−9.
As we can see, the lag τ=3 estimates (columns 6 and 7) are closer to the complete a posteriori probabilities (columns 2 and 3) than the lag τ=1 estimates (columns 4 and 5), but in both cases the vector fixed-lag algorithm decodes the same input sequence as the complete forward-backward algorithm, even for these small lags.
The foregoing description of the system and method for processing information according to the invention is illustrative, and variations in configuration and implementation will occur to persons skilled in the art.

Claims (9)

What is claimed is:
1. A fixed-lag method for determining the probability of a transmitted symbol at a time t, transmitted along a communications channel with bursts of errors, given a received symbol, the method comprising:
obtaining an initial state information vector about the channel;
obtaining channel information matrices describing the probabilities that the transmitted symbol would be transmitted along a communications channel with and without error;
generating τ intermediate probabilities, where τ equals a memory or lag value, each intermediate probability being the product of the initial state information vector, at a time previous to time t, and a channel information matrix;
storing the intermediate probabilities in storage elements; and
multiplying a last intermediate probability with a final state vector to yield the probability of the transmitted symbol.
2. The fixed-lag method of claim 1, wherein the transmitted symbols are one of handwriting symbols in handwriting recognition, voice print features in voice recognition, and bioelectrical signals grouped into symbol units.
3. The fixed-lag method of claim 2, wherein the channel information matrices model processes including communication over channels, handwriting recognition, voice recognition and bioelectrical signal recognition, the matrices being generated based on modeling techniques including Hidden Markov Models.
4. A fixed-lag method for estimating an input symbol given an output symbol, the method comprising:
multiplying an initial state vector, α0, stored in a first storage element and containing information about an initial state of a communications channel, with a first matrix, Mt+1, containing information about the communications channel, yielding a first vector product;
multiplying the first vector product with a second matrix, Wt+1, containing information about the communications channel, yielding a second vector product, st+1;
storing the second vector product in a second storage element;
multiplying the second vector product with the first matrix, yielding a next vector product, st+2, and storing the next vector product in a next storage element;
repeating the third multiplying step using the next vector product in the multiplication, for a total of τ times, until the last vector product, st+τ+1, is calculated; and multiplying the last vector product with a final state vector, β, to yield a probability, pt−τ−1=p(Xt−τ−1,Y1 t−1), that a selected symbol was the input symbol.
5. The fixed-lag method of claim 4, wherein the input symbol is one of handwriting symbols in handwriting recognition, voice print features in voice recognition, and bioelectrical signals grouped into symbol units.
6. The fixed-lag method of claim 4, wherein the first and second matrices model processes including communication over channels, handwriting recognition, voice recognition and bioelectrical signal recognition, the matrices being generated based on modeling techniques including Hidden Markov Models.
7. A fixed-lag processing device for determining the probability of a transmitted symbol, transmitted along a communications channel with bursts of errors, given a received symbol, the device comprising:
a plurality of storage elements, for storing vectors;
at least one matrix multiplier; and
a controller coupled to the storage elements and the at least one matrix multiplier, the controller generating τ intermediate product vectors, where each intermediate product vector is yielded by multiplying a content of one of the storage elements with a matrix, wherein the matrix contains information about the communications channel, the controller generating a last product vector and multiplying the last product vector with a final state vector, and the controller outputting the probability that the transmitted symbol is a selected symbol.
8. The fixed-lag device of claim 7, wherein the transmitted symbol is one of handwriting symbols in handwriting recognition, voice print features in voice recognition, and bioelectrical signals grouped into symbol units.
9. The fixed-lag device of claim 7, wherein the matrix models processes including communication over channels, handwriting recognition, voice recognition and bioelectrical signal recognition, the matrix being generated based on modeling techniques including Hidden Markov Models.
US09/845,134 1998-10-30 2001-04-30 Vector fixed-lag algorithm for decoding input symbols Expired - Fee Related US6708149B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/183,474 US6226613B1 (en) 1998-10-30 1998-10-30 Decoding input symbols to input/output hidden markoff models
US09/845,134 US6708149B1 (en) 1998-10-30 2001-04-30 Vector fixed-lag algorithm for decoding input symbols

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/183,474 Continuation-In-Part US6226613B1 (en) 1998-10-30 1998-10-30 Decoding input symbols to input/output hidden markoff models

Publications (1)

Publication Number Publication Date
US6708149B1 true US6708149B1 (en) 2004-03-16




Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963906A (en) * 1997-05-20 1999-10-05 At & T Corp Speech recognition training
US6226613B1 (en) * 1998-10-30 2001-05-01 At&T Corporation Decoding input symbols to input/output hidden markoff models


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
William Turin and Michele Zorzi, "Performance Analysis of Delay-Constrained Communications over Diverse Burst-Error Channels," Proc. IEEE 50th Vehicular Technology Conference, vol. 3, p. 1305-1309.* *
William Turin, "MAP Decoding using the EM Algorithm," Proc. IEEE 49th Vehicular Technology Conference, vol. 3, p. 1866-1870.* *
William Turin, "The Forward-Backward Algorithm-Work Project No. 311614-2003", Technical Memorandum, AT&T, Nov. 1997. *
William Turin, "MAP Decoding in Channels with Memory," IEEE Trans. on Communications, vol. 48, No. 5, p. 757-763.* *


Also Published As

Publication number Publication date
US6226613B1 (en) 2001-05-01

Similar Documents

Publication Publication Date Title
US6708149B1 (en) Vector fixed-lag algorithm for decoding input symbols
US11270187B2 (en) Method and apparatus for learning low-precision neural network that combines weight quantization and activation quantization
US6209114B1 (en) Efficient hardware implementation of chien search polynomial reduction in reed-solomon decoding
TWI223527B (en) Baseband processors and methods and systems for decoding a received signal having a transmitter or channel induced coupling between bits
US7539920B2 (en) LDPC decoding apparatus and method with low computational complexity algorithm
US7656974B2 (en) Iterative decoding
KR20010022310A (en) Soft output decoder for convolution code and soft output decoding method
CN111404853B (en) Carrier frequency offset estimation method, device and computer storage medium
US7277506B1 (en) Maximum likelihood sequence estimator which computes branch metrics in real time
US20020107987A1 (en) Compression based on channel characteristics
KR100970223B1 (en) A method of soft-decision decoding of reed-solomon codes, and reed-solomon codeword decoder and computer program product
US6396878B1 (en) Reception method and a receiver
CN114373480A (en) Training method of voice alignment network, voice alignment method and electronic equipment
CN111679123B (en) Symbol edge and frequency estimation method and system suitable for multi-mode modulation system
US20020065859A1 (en) Devices and methods for estimating a series of symbols
US7055089B2 (en) Decoder and decoding method
Turin Unidirectional and parallel Baum-Welch algorithms
JP4587427B2 (en) Decoding method and apparatus and system using the same
EP0083248B1 (en) Apparatus for calculating auto-correlation coefficients
US6457036B1 (en) System for accurately performing an integer multiply-divide operation
JP2002503835A (en) Method and apparatus for fast determination of optimal vector in fixed codebook
Zhang et al. High-throughput interpolation architecture for algebraic soft-decision Reed–Solomon decoding
US20030139927A1 (en) Block processing in a maximum a posteriori processor for reduced power consumption
US20050144209A1 (en) Apparatus and method for selectively performing Fast Hadamard Transform or fast fourier transform
US11621740B1 (en) Systems and methods for low complexity soft data computation

Legal Events

Date Code Title Description
AS Assignment

Owner name: AT&T CORP., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TURIN, WILLIAM;REEL/FRAME:011771/0734

Effective date: 20010430

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20120316

AS Assignment

Owner name: AT&T PROPERTIES, LLC, NEVADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T CORP.;REEL/FRAME:038983/0256

Effective date: 20160204

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T PROPERTIES, LLC;REEL/FRAME:038983/0386

Effective date: 20160204

AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T INTELLECTUAL PROPERTY II, L.P.;REEL/FRAME:041498/0316

Effective date: 20161214