EP1205908B1 - Pronunciation of new input words for speech processing - Google Patents

Pronunciation of new input words for speech processing

Info

Publication number: EP1205908B1
Application number: EP01309137A
Authority: EP (European Patent Office)
Prior art keywords: sub-word, word, sequence, word units
Legal status: Expired - Lifetime
Other languages: German (de), French (fr)
Other versions: EP1205908A3 (en), EP1205908A2 (en)
Inventors: Jason Peter Andrew Charlesworth; Jebu Jacob Rajan (Canon Res. Ctr. Europe Limited)
Current Assignee: Canon Inc
Original Assignee: Canon Inc
Application filed by Canon Inc
Publications: EP1205908A2, EP1205908A3, and granted patent EP1205908B1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063: Training
    • G10L2015/0631: Creating reference templates; Clustering

Definitions

  • the present invention relates to the determination of phoneme or phoneme-like models for words or commands which can be added to a word/command dictionary and used in speech processing related applications, such as speech recognition.
  • the invention particularly relates to the generation of canonical and non-canonical phoneme sequences which represent the pronunciation of input words, which sequences can be used in speech processing applications.
  • speech recognition systems are becoming more and more popular due to the increased processing power available to perform the recognition operation.
  • Most speech recognition systems can be classified into small vocabulary systems and large vocabulary systems.
  • the speech recognition engine usually compares the input speech to be recognised with acoustic patterns representative of the words known to the system.
  • the reference patterns usually represent phonemes of a given language. In this way, the input speech is compared with the phoneme patterns to generate a sequence of phonemes representative of the input speech.
  • a word decoder is then used to identify words within the sequence of phonemes using a word to phoneme dictionary.
  • a problem with large vocabulary speech recognition systems is that if the user speaks a word which is not in the word dictionary, then a mis-recognition will occur and the speech recognition system will output the word or words which sound most similar to the out of vocabulary word actually spoken.
  • This problem can be overcome by providing a mechanism which allows users to add new word models for out of vocabulary words. To date, this has predominantly been achieved by generating acoustic patterns representative of the out of vocabulary words. However, this requires the speech recognition system to match the input speech with two different types of model - phoneme models and word models, which slows down the recognition process. Other systems allow the user to add a phonetic spelling to the word dictionary in order to cater for out of vocabulary words.
  • GB-A-2349260 describes a system for generating new reference models for adding to a speech recognition dictionary from three or more training signals.
  • the system simultaneously compares and aligns the three or more training signals with each other and, from the alignment results, generates a reference model representative of the training signals.
  • One aim of the present invention is to provide an alternative technique for generating a phoneme or phoneme-like sequence representative of new words to be added to a word dictionary or command dictionary which may be used in, for example, a speech recognition system.
  • Embodiments of the present invention can be implemented using dedicated hardware circuits, but the embodiment that is to be described is implemented in computer software or code, which is run in conjunction with a personal computer.
  • the software may be run in conjunction with a workstation, photocopier, facsimile machine, personal digital assistant (PDA), web browser or the like.
  • Figure 1 shows a personal computer (PC) 1 which is programmed to operate an embodiment of the present invention.
  • a keyboard 3, a pointing device 5, a microphone 7 and a telephone line 9 are connected to the PC 1 via an interface 11.
  • the keyboard 3 and pointing device enable the system to be controlled by a user.
  • the microphone 7 converts the acoustic speech signal of the user into an equivalent electrical signal and supplies this to the PC 1 for processing.
  • An internal modem and speech receiving circuit may be connected to the telephone line 9 so that the PC 1 can communicate with, for example, a remote computer or with a remote user.
  • the program instructions which make the PC 1 operate in accordance with the present invention may be supplied for use with the PC 1 on, for example, a storage device such as a magnetic disc 13 or by downloading the software from a remote computer over, for example, the Internet via the internal modem and telephone unit 9.
  • this sequence of phonemes is input, via a switch 20, to the word decoder 21 which identifies words within the generated phoneme sequence by comparing the phoneme sequence with those stored in a word to phoneme dictionary 23.
  • the words 25 output by the word decoder 21 are then used by the PC 1 to either control the software applications running on the PC 1 or for insertion as text in a word processing program running on the PC 1.
  • in order to be able to add words to the word to phoneme dictionary 23, the speech recognition system 14 also has a training mode of operation. This is activated by the user applying an appropriate command through the user interface 27 using the keyboard 3 or the pointing device 5. This request to enter the training mode is passed to the control unit 29 which causes the switch 20 to connect the output of the speech recognition engine 17 to the input of a word model generation unit 31. At the same time, the control unit 29 outputs a prompt to the user, via the user interface 27, to provide several renditions of the word to be added. Each of these renditions is processed by the pre-processor 15 and the speech recognition engine 17 to generate a plurality of sequences of phonemes representative of a respective rendition of the new word.
  • these sequences of phonemes are input to the word model generation unit 31, which processes them to identify the most probable phoneme sequence which could have been mis-recognised as all the training examples, and this sequence is stored, together with a typed version of the word input by the user, in the word to phoneme dictionary 23.
  • after the user has finished adding words to the dictionary 23, the control unit 29 returns the speech recognition system 14 to its normal mode of operation by connecting the output of the speech recognition engine back to the word decoder 21 through the switch 20.
  • FIG. 3 shows in more detail the components of the word model generation unit 31 discussed above.
  • there is a memory 41 which receives each phoneme sequence output from the speech recognition engine 17 for each of the renditions of the new word input by the user.
  • the phoneme sequences stored in the memory 41 are applied to the dynamic programming alignment unit 43 which, in this embodiment, uses a dynamic programming alignment technique to compare the phoneme sequences and to determine the best alignment between them.
  • the alignment unit 43 performs the comparison and alignment of all the phoneme sequences at the same time.
  • the identified alignment between the input sequences is then input to a phoneme sequence determination unit 45, which uses this alignment to determine the sequence of phonemes which matches best with the input phoneme sequences.
  • each phoneme sequence representative of a rendition of the new word can have insertions and deletions relative to this unknown sequence of phonemes which matches best with all the input sequences of phonemes.
  • Figure 4 shows a possible matching between a first phoneme sequence (labelled d^1_i, d^1_{i+1}, d^1_{i+2}, ...) representative of a first rendition of the new word, a second phoneme sequence (labelled d^2_j, d^2_{j+1}, d^2_{j+2}, ...) representative of a second rendition of the new word and a sequence of phonemes (labelled p_n, p_{n+1}, p_{n+2}, ...) which represents a canonical sequence of phonemes of the text which best matches the two input sequences.
  • the dynamic programming alignment unit 43 must allow for the insertion of phonemes in both the first and second phoneme sequences (represented by the inserted phonemes d^1_{i+3} and d^2_{j+1}) as well as the deletion of phonemes from the first and second phoneme sequences (represented by phonemes d^1_{i+1} and d^2_{j+2}, which are both aligned with two phonemes in the canonical sequence of phonemes), relative to the canonical sequence of phonemes.
  • dynamic programming is a technique which can be used to find the optimum alignment between sequences of features, which in this embodiment are phonemes.
  • the dynamic programming alignment unit 43 calculates the optimum alignment by simultaneously propagating a plurality of dynamic programming paths, each of which represents a possible alignment between a sequence of phonemes from the first sequence (representing the first rendition) and a sequence of phonemes from the second sequence (representing the second rendition). All paths begin at a start null node which is at the beginning of the two input sequences of phonemes and propagate until they reach an end null node, which is at the end of the two sequences of phonemes.
  • Figures 5 and 6 schematically illustrate the alignment which is performed and this path propagation.
  • Figure 5 shows a rectangular coordinate plot with the horizontal axis being provided for the first phoneme sequence representative of the first rendition and the vertical axis being provided for the second phoneme sequence representative of the second rendition.
  • the start null node ø_s is provided at the top left hand corner and the end null node ø_e is provided at the bottom right hand corner.
  • the phonemes of the first sequence are provided along the horizontal axis and the phonemes of the second sequence are provided down the vertical axis.
  • Figure 6 also shows a number of lattice points, each of which represents a possible alignment (or decoding) between a phoneme of the first phoneme sequence and a phoneme of the second phoneme sequence.
  • lattice point 21 represents a possible alignment between first sequence phoneme d^1_3 and second sequence phoneme d^2_1.
  • Figure 6 also shows three dynamic programming paths m_1, m_2 and m_3 which represent three possible alignments between the first and second phoneme sequences and which begin at the start null node ø_s and propagate through the lattice points to the end null node ø_e.
  • the dynamic programming alignment unit 43 keeps a score for each of the dynamic programming paths which it propagates, which score is dependent upon the overall similarity of the phonemes which are aligned along the path. Additionally, in order to limit the number of deletions and insertions of phonemes in the sequences being aligned, the dynamic programming process places certain constraints on the way in which each dynamic programming path can propagate.
  • Figure 7 shows the dynamic programming constraints which are used in this embodiment.
  • if a dynamic programming path ends at lattice point (i,j), representing an alignment between phoneme d^1_i of the first phoneme sequence and phoneme d^2_j of the second phoneme sequence, then that dynamic programming path can propagate to the lattice points (i+1,j), (i+2,j), (i+3,j), (i,j+1), (i+1,j+1), (i+2,j+1), (i,j+2), (i+1,j+2) and (i,j+3), as sketched below.
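  • In Python sketch form (the value mxhops = 4 is implied by step s211 described later, where mxi is set equal to i + 4; the function and variable names are illustrative, not from the patent):

```python
MXHOPS = 4  # implied by step s211 below, where mxi is set to i + 4

def successors(i, j, nseq1, nseq2):
    """Lattice points reachable from (i, j) under the Figure 7 constraints."""
    points = []
    for i2 in range(i, min(i + MXHOPS, nseq1)):      # at most 3 hops along sequence 1
        for j2 in range(j, min(j + MXHOPS, nseq2)):  # at most 3 hops along sequence 2
            if (i2, j2) == (i, j):
                continue                             # a path may not stay where it is
            if i2 + j2 < i + j + MXHOPS:             # the triangular set of Figure 7
                points.append((i2, j2))
    return points

# From an interior point, exactly the nine points listed above are reachable:
# successors(5, 5, 100, 100) ->
# [(5,6), (5,7), (5,8), (6,5), (6,6), (6,7), (7,5), (7,6), (8,5)]
```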
  • the dynamic programming alignment unit 43 keeps a score for each of the dynamic programming paths, which score is dependent upon the similarity of the phonemes which are aligned along the path. Therefore, when propagating a path ending at point (i,j) to these other points, the dynamic programming process adds the respective "cost" of doing so to the cumulative score for the path ending at point (i,j), which is stored in a store (SCORE(i,j)) associated with that point.
  • this cost includes insertion probabilities for any inserted phonemes, deletion probabilities for any deletions and decoding probabilities for a new alignment between a phoneme from the first phoneme sequence and a phoneme from the second phoneme sequence.
  • when there is an insertion, the cumulative score is multiplied by the probability of inserting the given phoneme; when there is a deletion, the cumulative score is multiplied by the probability of deleting the phoneme; and when there is a decoding, the cumulative score is multiplied by the probability of decoding the two phonemes.
  • the system stores a probability for all possible phoneme combinations in memory 47.
  • the system sums, over all possible phonemes p, the probability of decoding the phoneme p as the first sequence phoneme d^1_i and as the second sequence phoneme d^2_j, weighted by the probability of phoneme p occurring unconditionally, i.e.:

    P(d^1_i | d^2_j) = \sum_{r=1}^{N_p} P(d^1_i | p_r) P(d^2_j | p_r) P(p_r)        (1)
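  • As a minimal Python sketch of this calculation (the tables decode_prob[r][d] for P(d | p_r) and prior[r] for P(p_r) are assumed names; they correspond to the probabilities stored in memory 47, estimated in advance from training data as described later):

```python
def decode_score(d1_i, d2_j, decode_prob, prior):
    """Equation (1): sum over all phonemes p_r of
    P(d1_i | p_r) * P(d2_j | p_r) * P(p_r)."""
    return sum(decode_prob[r][d1_i] * decode_prob[r][d2_j] * prior[r]
               for r in range(len(prior)))
```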
  • a back tracking routine can be used to identify the best alignment of the phonemes in the two input phoneme sequences.
  • the phoneme sequence determination unit 45 uses this alignment to determine the sequence of phonemes which best represents the input phoneme sequences. The way in which this is achieved in this embodiment will be described later.
  • a description will now be given of the operation of the dynamic programming alignment unit 43 when two sequences of phonemes (for two renditions of the new word) are aligned.
  • the scores associated with all the nodes are set to an appropriate initial value.
  • the alignment unit 43 then propagates paths from the null start node (ø_s) to all possible start points defined by the dynamic programming constraints discussed above.
  • the dynamic programming score for the paths that are started are then set to equal the transition score for passing from the null start node to the respective start point.
  • the paths which are started in this way are then propagated through the array of lattice points defined by the first and second phoneme sequences until they reach the null end node ø_e.
  • the alignment unit 43 processes the array of lattice points column by column in a raster-like technique.
  • in step s149, the system initialises a first phoneme sequence loop pointer, i, and a second phoneme sequence loop pointer, j, to zero. Then in step s151, the system compares the first phoneme sequence loop pointer i with the number of phonemes in the first phoneme sequence (Nseq1). Initially the first phoneme sequence loop pointer i is set to zero and the processing therefore proceeds to step s153 where a similar comparison is made for the second phoneme sequence loop pointer j relative to the total number of phonemes in the second phoneme sequence (Nseq2).
  • in step s155, the system propagates the path ending at lattice point (i,j) using the dynamic programming constraints discussed above. The way in which the system propagates the paths in step s155 will be described in more detail later.
  • after step s155, the loop pointer j is incremented by one in step s157 and the processing returns to step s153.
  • in step s159, the loop pointer j is reset to zero and the loop pointer i is incremented by one.
  • the processing then returns to step s151, where a similar procedure is performed for the next column of lattice points.
  • finally, in step s161, the loop pointer i is reset to zero and the processing ends; a sketch of this control loop is given below.
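  • In outline, steps s149 to s161 amount to the following raster scan (a Python sketch, since the document itself gives no code; propagate stands for the step s155 routine described next):

```python
def raster_scan(nseq1, nseq2, propagate):
    i = 0                        # step s149: initialise both loop pointers
    j = 0
    while i < nseq1:             # step s151: compare i with Nseq1
        while j < nseq2:         # step s153: compare j with Nseq2
            propagate(i, j)      # step s155: propagate the path ending at (i, j)
            j += 1               # step s157: next point in the current column
        j = 0                    # step s159: reset j, move to the next column
        i += 1
    # step s161: loop pointer i is reset and the processing ends
```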
  • in step s155 shown in Figure 9, the system propagates the path ending at lattice point (i,j) using the dynamic programming constraints discussed above.
  • Figure 10 is a flowchart which illustrates the processing steps involved in performing this propagation step.
  • the system first sets the values of two variables, mxi and mxj, and initialises the first phoneme sequence loop pointer i2 and the second phoneme sequence loop pointer j2.
  • the loop pointers i2 and j2 are provided to loop through all the lattice points to which the path ending at point (i,j) can propagate, and the variables mxi and mxj are used to ensure that i2 and j2 can only take the values which are allowed by the dynamic programming constraints.
  • mxi is set equal to i plus mxhops (which in this embodiment equals four), provided this is less than or equal to the number of phonemes in the first phoneme sequence, otherwise mxi is set equal to the number of phonemes in the first phoneme sequence (Nseq1); similarly, mxj is set equal to j plus mxhops, provided this is less than or equal to the number of phonemes in the second phoneme sequence, otherwise mxj is set equal to the number of phonemes in the second phoneme sequence (Nseq2).
  • the system then initialises the first phoneme sequence loop pointer i2 to be equal to the current value of the first phoneme sequence loop pointer i, and the second phoneme sequence loop pointer j2 to be equal to the current value of the second phoneme sequence loop pointer j.
  • in step s219, the system compares the first phoneme sequence loop pointer i2 with the variable mxi. Since loop pointer i2 is set to i and mxi is set equal to i + 4 in step s211, the processing will proceed to step s221 where a similar comparison is made for the second phoneme sequence loop pointer j2. The processing then proceeds to step s223 which ensures that the path does not stay at the same lattice point (i,j) since, initially, i2 will equal i and j2 will equal j. Therefore, the processing will initially proceed to step s225 where the second phoneme sequence loop pointer j2 is incremented by one.
  • in step s221, the incremented value of j2 is compared with mxj. If j2 is less than mxj, then the processing returns to step s223 and then proceeds to step s227, which is operable to prevent too large a hop along both phoneme sequences. It does this by ensuring that the path is only propagated if i2 + j2 is less than i + j + mxhops. This ensures that only the triangular set of points shown in Figure 7 is processed. Provided this condition is met, the processing proceeds to step s229 where the system calculates the transition score (TRANSCORE) from lattice point (i,j) to lattice point (i2,j2).
  • in step s231, this transition score (TRANSCORE) is added to the cumulative score SCORE(i,j) stored for the path ending at point (i,j), and the result is copied to a temporary store, TEMPSCORE.
  • in step s233, the system compares TEMPSCORE with the cumulative score already stored for point (i2,j2); the larger score is stored in SCORE(i2,j2) and an appropriate back pointer is stored to identify which path had the larger score.
  • the processing then returns to step s225 where the loop pointer j2 is incremented by one and the processing returns to step s221.
  • in step s235, the loop pointer j2 is reset to the initial value j and the first phoneme sequence loop pointer i2 is incremented by one.
  • the processing then returns to step s219 where the processing begins again for the next column of points shown in Figure 7. Once the path has been propagated from point (i,j) to all the other points shown in Figure 7, the processing ends; a sketch assembling these steps follows.
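  • Assembled into code, steps s211 to s235 look roughly like this (a Python sketch; SCORE and BACKPTR are assumed stores, transition_score is the Figure 11 routine, and labelling the TEMPSCORE computation as step s231 is an assumption, since that step number is not given in the extract):

```python
import math

MXHOPS = 4

def propagate(i, j, nseq1, nseq2, SCORE, BACKPTR, transition_score):
    mxi = min(i + MXHOPS, nseq1)               # step s211
    mxj = min(j + MXHOPS, nseq2)
    for i2 in range(i, mxi):                   # steps s219/s235
        for j2 in range(j, mxj):               # steps s221/s225
            if (i2, j2) == (i, j):             # step s223: do not stay put
                continue
            if i2 + j2 >= i + j + MXHOPS:      # step s227: limit the hop size
                continue
            tran = transition_score(i, j, i2, j2)        # step s229
            temp = SCORE[(i, j)] + tran                  # step s231 (log domain)
            if temp > SCORE.get((i2, j2), -math.inf):    # step s233
                SCORE[(i2, j2)] = temp                   # keep the better path
                BACKPTR[(i2, j2)] = (i, j)               # remember its origin
```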
  • in step s229, the transition score from one point (i,j) to another point (i2,j2) is calculated. This involves calculating the appropriate insertion probabilities, deletion probabilities and decoding probabilities relative to the start point and end point of the transition. The way in which this is achieved in this embodiment will now be described with reference to Figures 11 and 12.
  • Figure 11 shows a flow diagram which illustrates the general processing steps involved in calculating the transition score for a path propagating from lattice point (i,j) to lattice point (i2,j2).
  • in step s291, the system calculates, for each first sequence phoneme which is inserted between point (i,j) and point (i2,j2), the score for inserting the inserted phoneme(s) (which is just the log of the probability PI( ) discussed above) and adds this to an appropriate store, INSERTSCORE.
  • the processing then proceeds to step s293 where the system performs a similar calculation for each second sequence phoneme which is inserted between point (i,j) and point (i2,j2) and adds this to INSERTSCORE.
  • the processing involved in step s295 to determine the deletion and/or decoding scores in propagating from point (i,j) to point (i2,j2) will now be described in more detail with reference to Figure 12.
  • in step s325, the system determines if the first phoneme sequence loop pointer i2 equals the first phoneme sequence loop pointer i. If it does, then the processing proceeds to step s327 where a phoneme loop pointer r is initialised to one. The phoneme pointer r is used to loop through each possible phoneme known to the system during the calculation of equation (1) above.
  • in step s329, the system compares the phoneme pointer r with the number of phonemes known to the system, Nphonemes (which in this embodiment equals 43).
  • in step s331, the system determines the log probability of phoneme p_r occurring (i.e. log P(p_r)) and copies this to a temporary score, TEMPDELSCORE. If the first phoneme sequence loop pointer i2 equals the loop pointer i, then the system is propagating the path ending at point (i,j) to one of the points (i,j+1), (i,j+2) or (i,j+3). Therefore, there is a phoneme in the second phoneme sequence which is not in the first phoneme sequence. Consequently, in step s333, the system adds the log probability of deleting phoneme p_r from the first phoneme sequence to TEMPDELSCORE.
  • from step s337, the processing proceeds to step s339 where the phoneme loop pointer r is incremented by one, and the processing then returns to step s329 where similar processing is performed for the next phoneme known to the system. Once this calculation has been performed for each of the 43 phonemes known to the system, the processing ends.
  • if, in step s325, the system determines that i2 is not equal to i, then the processing proceeds to step s341 where the system determines if the second phoneme sequence loop pointer j2 equals the second phoneme sequence loop pointer j. If it does, then the processing proceeds to step s343 where the phoneme loop pointer r is initialised to one. The processing then proceeds to step s345 where the phoneme loop pointer r is compared with the total number of phonemes known to the system (Nphonemes). Initially r is set to one in step s343, and therefore the processing proceeds to step s347 where the log probability of phoneme p_r occurring is determined and copied into the temporary store TEMPDELSCORE.
  • in step s349, the system determines the log probability of decoding phoneme p_r as first sequence phoneme d^1_{i2} and adds this to TEMPDELSCORE. If the second phoneme sequence loop pointer j2 equals the loop pointer j, then the system is propagating the path ending at point (i,j) to one of the points (i+1,j), (i+2,j) or (i+3,j). Therefore, there is a phoneme in the first phoneme sequence which is not in the second phoneme sequence. Consequently, in step s351, the system determines the log probability of deleting phoneme p_r from the second phoneme sequence and adds this to TEMPDELSCORE.
  • in step s363, the log probability of decoding phoneme p_r as first sequence phoneme d^1_{i2} is added to TEMPDELSCORE.
  • in step s365, the log probability of decoding phoneme p_r as second sequence phoneme d^2_{j2} is determined and added to TEMPDELSCORE.
  • the system then performs, in step s367, the log addition of TEMPDELSCORE with DELSCORE and stores the result in DELSCORE.
  • the phoneme counter r is then incremented by one in step s369 and the processing returns to step s359; a sketch of one of these branches is given below.
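  • The branch for the case i2 = i (steps s327 to s339), in which a phoneme of the second sequence has no partner in the first, can be sketched as follows; the step which adds the log probability of decoding p_r as d^2_{j2} is not spelled out in the extract, so it appears here as an assumption, and log_prior, log_del1 and log_dec2 are assumed lookup tables of log P(p_r), log P(deletion from sequence 1 | p_r) and log P(d | p_r):

```python
import math

NPHONEMES = 43  # phonemes known to the system in this embodiment

def log_add(a, b):
    """Stable computation of log(exp(a) + exp(b))."""
    if a < b:
        a, b = b, a
    return a + math.log1p(math.exp(b - a))

def del_decode_score_i2_eq_i(d2_j2, log_prior, log_del1, log_dec2):
    delscore = -math.inf
    for r in range(NPHONEMES):               # steps s327/s329/s339: loop over p_r
        temp = log_prior[r]                  # step s331: log P(p_r)
        temp += log_del1[r]                  # step s333: deletion from sequence 1
        temp += log_dec2[r][d2_j2]           # assumed step: decode p_r as d2_j2
        delscore = log_add(delscore, temp)   # step s337: log-add into DELSCORE
    return delscore
```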
  • the phoneme sequence determination unit 45 determines, for each aligned pair of phonemes (d^1_m, d^2_n) of the best alignment, the unknown phoneme, p, which maximises:

    P(d^1_m | p) P(d^2_n | p) P(p)
  • this phoneme, p, is the phoneme which is taken to best represent the aligned pair of phonemes.
  • the determination unit 45 identifies the sequence of canonical phonemes that best represents the two input phoneme sequences. In this embodiment, this canonical sequence is then output by the determination unit 45 and stored in the word to phoneme dictionary 23 together with the text of the new word typed in by the user.
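  • As a sketch, using the same assumed tables as before, the choice made for each aligned pair is:

```python
def best_phoneme(d1_m, d2_n, decode_prob, prior):
    """The phoneme p_r maximising P(d1_m | p_r) * P(d2_n | p_r) * P(p_r)."""
    return max(range(len(prior)),
               key=lambda r: decode_prob[r][d1_m] * decode_prob[r][d2_n] * prior[r])
```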
  • the foregoing has described the way in which the dynamic programming alignment unit 43 aligns two sequences of phonemes and the way in which the phoneme sequence determination unit 45 obtains the sequence of phonemes which best represents the two input sequences given this best alignment.
  • in practice, the dynamic programming alignment unit 43 should preferably be able to align any number of input phoneme sequences, and the determination unit 45 should be able to derive the phoneme sequence which best represents any number of input phoneme sequences given the best alignment between them.
  • a description will now be given of the way in which the dynamic programming alignment unit 43 aligns three input phoneme sequences together and how the determination unit 45 determines the phoneme sequence which best represents the three input phoneme sequences.
  • Figure 13 shows a three-dimensional coordinate plot with one dimension being provided for each of the three phoneme sequences and illustrates the three-dimensional lattice of points which are processed by the dynamic programming alignment unit 43 in this case.
  • the alignment unit 43 uses the same transition scores and phoneme probabilities and similar dynamic programming constraints in order to propagate and score each of the paths through the three-dimensional network of lattice points in the plot shown in Figure 13.
  • the dynamic programming alignment unit 43 propagates dynamic programming paths from the null start node ø_s to each of the start points defined by the dynamic programming constraints. It then propagates these paths from these start points to the null end node ø_e by processing the points in the search space in a raster-like fashion.
  • the control algorithm used to control this raster processing operation is shown in Figure 14. As can be seen from a comparison of Figure 14 with Figure 9, this control algorithm has the same general form as the control algorithm used when there were only two phoneme sequences to be aligned.
  • the processing involved in step s463 in Figure 16 to determine the deletion and/or decoding scores in propagating from point (i,j,k) to point (i2,j2,k2) will now be described in more detail with reference to Figure 17.
  • the system determines (in steps s525 to s537) if there are any phoneme deletions from any of the three phoneme sequences by comparing i2, j2 and k2 with i, j and k respectively.
  • as shown in Figures 17a to 17d, there are eight main branches which operate to determine the appropriate decoding and deletion probabilities for the eight possible situations. Since the processing performed in each situation is very similar, a description will only be given of one of the situations.
  • initially, r is set to one in step s541. The processing then proceeds to step s545 where the system determines the log probability of phoneme p_r occurring and copies this to a temporary score TEMPDELSCORE. The processing then proceeds to step s547 where the system determines the log probability of deleting phoneme p_r from the first phoneme sequence and adds this to TEMPDELSCORE. The processing then proceeds to step s549 where the system determines the log probability of decoding phoneme p_r as second sequence phoneme d^2_{j2} and adds this to TEMPDELSCORE.
  • the term calculated within the dynamic programming algorithm for decodings and deletions is similar to equation (1) but has an additional probability term for the third phoneme sequence.
  • in particular, the decoding term becomes:

    \sum_{r=1}^{N_p} P(d^1_i | p_r) P(d^2_j | p_r) P(d^3_k | p_r) P(p_r)
  • the dynamic programming alignment unit 43 identifies the path having the best score and uses the back pointers which have been stored for this path to identify the aligned phoneme triples (i.e. the aligned phonemes in the three sequences) which lie along this best path.
  • the phoneme sequence determination unit 45 then determines, for each aligned triple of phonemes (d^1_m, d^2_n, d^3_o), the phoneme, p, which maximises:

    P(d^1_m | p) P(d^2_n | p) P(d^3_o | p) P(p)
  • the foregoing has described the way in which the dynamic programming alignment unit 43 aligns two or three sequences of phonemes.
  • the addition of a further phoneme sequence simply involves the addition of a number of loops in the control algorithm in order to account for the additional phoneme sequence.
  • the alignment unit 43 can therefore identify the best alignment between any number of input phoneme sequences by identifying how many sequences are input and then ensuring that appropriate control variables are provided for each input sequence.
  • the determination unit 45 can then identify the sequence of phonemes which best represents the input phoneme sequences using these alignment results.
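  • In sketch form, the decoding term generalises to any number of input sequences by including one conditional probability factor per sequence (same assumed tables as before):

```python
import math

def decode_score_n(aligned, decode_prob, prior):
    """aligned holds one phoneme per input sequence for one lattice point."""
    return sum(prior[r] * math.prod(decode_prob[r][d] for d in aligned)
               for r in range(len(prior)))
```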
  • in this embodiment, the method of word training described earlier is used initially to create a single postulated phoneme sequence for a word from a large number of example phonetic decodings of the word.
  • This postulated version is then used to score all the decodings of the word according to their similarity to the postulated form. Versions with similar scores are then clustered. If more than one cluster emerges, then a postulated representation for each cluster is determined and the original decodings are re-scored and re-clustered relative to the new postulated representations. This process is then iterated until some convergence criterion is achieved. This process will now be explained in more detail with reference to Figures 18 to 20.
  • Figure 18 shows in more detail the main components of the word model generation unit 31 of the third embodiment.
  • the word model generation unit 31 of this embodiment is similar to the word model generation unit of the first embodiment.
  • it includes a memory 41 which receives each phoneme sequence (D_i) output from the speech recognition engine 17 for each of the renditions of the new word input by the user.
  • the phoneme sequences stored in the memory 41 are applied to the dynamic programming alignment unit 43 which determines the best alignment between the phoneme sequences in the manner described above.
  • the phoneme sequence determination unit 45 determines (also in the manner described above) the sequence of phonemes which matches best with the input phoneme sequences.
  • This best phoneme sequence (D_best) and the original input phoneme sequences (D_i) are then passed to an analysis unit 61 which compares the best phoneme sequence with each of the input sequences to determine how well each of the input sequences corresponds to the best sequence. If the input sequence is the same length as the best sequence, then the analysis unit does this, in this embodiment, by calculating the probability of the input sequence given the best sequence, P(D_i | D_best).
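  • A hedged sketch of this comparison follows, assuming the score is the product of per-position decoding probabilities when the two sequences have equal length (the precise expression is not reproduced in the extract):

```python
def rendition_score(D_i, D_best, decode_prob_pair):
    """P(D_i | D_best) as a product of per-phoneme decoding probabilities
    (an assumed form; decode_prob_pair[b][d] holds P(d | b))."""
    score = 1.0
    for d, b in zip(D_i, D_best):
        score *= decode_prob_pair[b][d]
    return score
```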
  • the analysis unit 61 analyses each of these probabilities using a clustering algorithm to identify if different clusters can be found within these probability scores - which would indicate that the input sequences include different pronunciations for the input word.
  • This is schematically illustrated in the plot shown in Figure 19.
  • Figure 19 has the probability scores determined in the above manner plotted on the x-axis with the number of training sequences having that score plotted on the y-axis. (As those skilled in the art will appreciate, in practice the plot will be a histogram, since it is unlikely that many scores will be exactly the same). The two peaks 71 and 73 in this plot indicate that there are two different pronunciations of the training word.
  • the analysis unit 61 assigns each of the input phoneme sequences (D_i) to one of the different clusters.
  • the analysis unit 61 then outputs the input phoneme sequences of each cluster back to the dynamic programming alignment unit 43 which processes the input phoneme sequences in each cluster separately so that the phoneme sequence determination unit 45 can determine a representative phoneme sequence for each of the clusters.
  • the phoneme sequences of the or each other cluster are stored in the memory 47.
  • the analysis unit 61 compares each of the input phoneme sequences with all of the cluster representative sequences and then re-clusters the input phoneme sequences. This whole process is then iterated until a suitable convergence criterion is achieved.
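  • The iterate-until-convergence loop can be sketched as follows; determine_best (standing for the alignment unit 43 followed by the determination unit 45), score and the black-box clustering routine cluster_scores are assumed helpers, and the convergence test shown is an assumption:

```python
def cluster_renditions(renditions, determine_best, score, cluster_scores,
                       max_iters=10):
    """Iteratively re-score and re-cluster the input phoneme sequences.

    cluster_scores(scores) is assumed to return one cluster label per
    rendition, e.g. from the score histogram of Figure 19.
    """
    reps = [determine_best(renditions)]              # single postulated sequence
    for _ in range(max_iters):
        # score every rendition against every current representative
        scores = [[score(D, rep) for rep in reps] for D in renditions]
        labels = cluster_scores(scores)              # re-cluster the renditions
        groups = {}
        for D, lab in zip(renditions, labels):
            groups.setdefault(lab, []).append(D)
        new_reps = [determine_best(g) for g in groups.values()]
        if new_reps == reps:                         # assumed convergence test
            break
        reps = new_reps
    return reps
```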
  • the representative sequence for each cluster identified using this process may then be stored in the word to phoneme dictionary 23 together with the typed version of the word.
  • the cluster representations are input to a phoneme sequence combination unit 63 which combines the representative phoneme sequences to generate a phoneme lattice using a standard forward/backward truncation technique.
  • Figure 20 shows two sequences of phonemes 75 and 77 represented by sequences A-B-C-D and A-E-C-D, and Figure 21 shows the resulting phoneme lattice 79 obtained by combining the two sequences shown in Figure 20 using the forward/backward truncation technique.
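  • The forward/backward truncation technique itself is not spelled out in the text; the following much-simplified sketch merely merges the shared prefix and suffix of two representative sequences, which is enough to reproduce the A-B-C-D / A-E-C-D example of Figures 20 and 21:

```python
def combine_two(seq_a, seq_b):
    """Merge shared prefix and suffix; the differing middles become
    parallel branches of the lattice (a simplification)."""
    n = min(len(seq_a), len(seq_b))
    i = 0
    while i < n and seq_a[i] == seq_b[i]:
        i += 1                                   # shared prefix
    j = 0
    while j < n - i and seq_a[len(seq_a) - 1 - j] == seq_b[len(seq_b) - 1 - j]:
        j += 1                                   # shared suffix
    return (seq_a[:i],                                           # common head
            [seq_a[i:len(seq_a) - j], seq_b[i:len(seq_b) - j]],  # branches
            seq_a[len(seq_a) - j:])                              # common tail

# combine_two("ABCD", "AECD") -> ('A', ['B', 'E'], 'CD')
```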
  • the phoneme lattice 79 output by the phoneme combination unit 63 is then stored in the word to phoneme dictionary 23 together with the typed version of the word.
  • in the above embodiments, the dynamic programming alignment unit 43 used 1892 decoding/deletion probabilities and 43 insertion probabilities to score the dynamic programming paths in the phoneme alignment operation.
  • these probabilities are determined in advance during a training session and are stored in the memory 47.
  • a speech recognition system is used to provide a phoneme decoding of speech in two ways. In the first way, the speech recognition system is provided with both the speech and the actual words which are spoken. The speech recognition system can therefore use this information to generate the canonical phoneme sequence of the spoken words to obtain an ideal decoding of the speech. The speech recognition system is then used to decode the same speech, but this time without knowledge of the actual words spoken (referred to hereinafter as the free decoding).
  • the phoneme sequence generated from the free decoding will differ from the canonical phoneme sequence in the following ways:
  • the probability of decoding phoneme p as phoneme d is given by:

    P(d | p) = c_dp / n_p

    where c_dp is the number of times the automatic speech recognition system decoded d when it should have been p, and n_p is the number of times the automatic speech recognition system decoded anything (including a deletion) when it should have been p.
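  • In sketch form, these counts can be accumulated from the aligned canonical and free decodings (DELETION is an assumed sentinel value):

```python
from collections import Counter

DELETION = None  # assumed sentinel for a deleted phoneme

def estimate_decode_probs(aligned_pairs):
    """Maximum-likelihood estimate P(d | p) = c_dp / n_p from pairs of
    (decoded phoneme d, canonical phoneme p), where d may be DELETION."""
    c, n = Counter(), Counter()
    for d, p in aligned_pairs:
        c[(d, p)] += 1      # c_dp: decoded d when it should have been p
        n[p] += 1           # n_p: decoded anything (incl. deletion) for p
    return {(d, p): c[(d, p)] / n[p] for (d, p) in c}
```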
  • although the term phoneme has been used throughout the above description, the present application is not limited to its linguistic meaning but includes the various sub-word units that are normally identified and used in standard speech recognition systems.
  • the term phoneme therefore covers any such sub-word unit, such as phones, syllables or katakana (Japanese syllabary) characters.
  • in the above embodiments, the dynamic programming alignment unit calculated decoding scores for each transition using equation (1) above.
  • the dynamic programming alignment unit may be arranged, instead, to identify the unknown phoneme, p, which maximises the probability term within the summation and to use this maximum probability term as the probability of decoding the corresponding phonemes in the input sequences.
  • the dynamic programming alignment unit would also preferably store an indication of the phoneme which maximised this probability with the appropriate back pointer, so that after the best alignment between the input phoneme sequences has been determined, the sequence of phonemes which best represents the input sequences can simply be determined by the phoneme sequence determination unit from this stored data.
  • in the above embodiments, the insertion, deletion and decoding probabilities were calculated from statistics of the speech recognition system using a maximum likelihood estimate of the probabilities.
  • other techniques, such as maximum entropy techniques, can be used to estimate these probabilities. Details of a suitable maximum entropy technique can be found at pages 45 to 52 of the book "Maximum Entropy and Bayesian Methods", published by Kluwer Academic Publishers and written by John Skilling.
  • in the above embodiments, equation (1) was calculated for each aligned pair of phonemes.
  • in this calculation, the first sequence phoneme and the second sequence phoneme were compared with each of the phonemes known to the system.
  • the aligned phonemes may only be compared with a subset of all the known phonemes, which subset is determined in advance from the training data.
  • the input phonemes to be aligned could be used to address a lookup table which would identify the phonemes which need to be compared with them using equation (1) (or its multi-input sequence equivalent).
  • in the above embodiments, the user input a number of spoken renditions of the new input word together with a typed rendition of the new word.
  • instead of being typed, the new word may be input as a handwritten version which is subsequently converted into text using appropriate handwriting recognition software.
  • new word models were generated for use in a speech recognition system.
  • the new word models are stored together with a text version of the word so that the text can be used in a word processing application.
  • the word models may be used as a control command rather than being for use in generating corresponding text. In this case, rather than storing text corresponding to the new word model, the corresponding control action or command would be input and stored.

Description

  • The present invention relates to the determination of phoneme or phoneme-like models for words or commands which can be added to a word/command dictionary and used in speech processing related applications, such as speech recognition. The invention particularly relates to the generation of canonical and non-canonical phoneme sequences which represent the pronunciation of input words, which sequences can be used in speech processing applications.
  • The use of speech recognition systems is becoming more and more popular due to the increased processing power available to perform the recognition operation. Most speech recognition systems can be classified into small vocabulary systems and large vocabulary systems. In small vocabulary systems the speech recognition engine usually compares the input speech to be recognised with acoustic patterns representative of the words known to the system. In the case of large vocabulary systems, it is not practical to store a word model for each word known to the system. Instead, the reference patterns usually represent phonemes of a given language. In this way, the input speech is compared with the phoneme patterns to generate a sequence of phonemes representative of the input speech. A word decoder is then used to identify words within the sequence of phonemes using a word to phoneme dictionary.
  • A problem with large vocabulary speech recognition systems is that if the user speaks a word which is not in the word dictionary, then a mis-recognition will occur and the speech recognition system will output the word or words which sound most similar to the out of vocabulary word actually spoken. This problem can be overcome by providing a mechanism which allows users to add new word models for out of vocabulary words. To date, this has predominantly been achieved by generating acoustic patterns representative of the out of vocabulary words. However, this requires the speech recognition system to match the input speech with two different types of model - phoneme models and word models, which slows down the recognition process. Other systems allow the user to add a phonetic spelling to the word dictionary in order to cater for out of vocabulary words. However, this requires the user to explicitly provide each phoneme for the new word and this is not practical for users who have a limited knowledge of the system and who do not know the phonemes which make up the word. An alternative technique would be to decode the new word into a sequence of phonemes using a speech recognition system and treat the decoded sequence of phonemes as being correct. However, as even the best systems today have an accuracy of less than 80%, this would introduce a number of errors which would ultimately lead to a lower recognition rate of the system.
  • GB-A-2349260 describes a system for generating new reference models for adding to a speech recognition dictionary from three or more training signals. The system simultaneously compares and aligns the three or more training signals with each other and, from the alignment results, generates a reference model representative of the training signals.
  • One aim of the present invention, as set out in the appended claims, is to provide an alternative technique for generating a phoneme or phoneme-like sequence representative of new words to be added to a word dictionary or command dictionary which may be used in, for example, a speech recognition system.
  • Exemplary embodiments of the present invention will now be described in more detail with reference to the accompanying drawings in which:
    • Figure 1 is a schematic block view of a computer which may be programmed to operate an embodiment of the present invention;
    • Figure 2 is a schematic diagram of an overview of a speech recognition system embodying the present invention;
    • Figure 3 is a schematic block diagram illustrating the main components of the word model generation unit which forms part of the speech recognition system shown in Figure 2;
    • Figure 4 is a schematic diagram which shows a first and second sequence of phonemes representative of two renditions of a new word after being processed by the speech recognition engine shown in Figure 2, and a third sequence of phonemes which best represents the first and second sequence of phonemes, and which illustrates the possibility of there being phoneme insertions and deletions from the first and second sequence of phonemes relative to the third sequence of phonemes;
    • Figure 5 schematically illustrates a search space created by the sequences of phonemes for the two renditions of the new word together with a start null node and an end null node;
    • Figure 6 is a two-dimensional plot with the horizontal axis being provided for the phonemes corresponding to one rendition of the new word and the vertical axis being provided for the phonemes corresponding to the other rendition of the new word, and showing a number of lattice points, each corresponding to a possible match between a phoneme of the first rendition of the word and a phoneme of the second rendition of the word;
    • Figure 7 schematically illustrates the dynamic programming constraints employed by the dynamic programming alignment unit which forms part of the word model generation unit shown in Figure 3;
    • Figure 8 schematically illustrates the deletion and decoding probabilities which are stored for an example phoneme and which are used in the scoring during the dynamic programming alignment process performed by the alignment unit shown in Figure 3;
    • Figure 9 is a flow diagram illustrating the main processing steps performed by the dynamic programming alignment unit shown in Figure 3;
    • Figure 10 is a flow diagram illustrating the main processing steps employed to propagate dynamic programming paths from the null start node to the null end node;
    • Figure 11 is a flow diagram illustrating the processing steps involved in determining a transition score for propagating a path during the dynamic programming matching process;
    • Figure 12 is a flow diagram illustrating the processing steps employed in calculating scores for deletions and decodings of the first and second phoneme sequences corresponding to the word renditions;
    • Figure 13 schematically illustrates a search space created by three sequences of phonemes generated for three renditions of a new word;
    • Figure 14 is a flow diagram illustrating the main processing steps employed to propagate dynamic programming paths from the null start node to the null end node shown in Figure 13;
    • Figure 15 is a flow diagram illustrating the processing steps employed in propagating a path during the dynamic programming process;
    • Figure 16 is a flow diagram illustrating the processing steps involved in determining a transition score for propagating a path during the dynamic programming matching process;
    • Figure 17a is a flow diagram illustrating a first part of the processing steps employed in calculating scores for deletions and decodings of phonemes during the dynamic programming matching process;
    • Figure 17b is a flow diagram illustrating a second part of the processing steps employed in calculating scores for deletions and decodings of phonemes during the dynamic programming matching process;
    • Figure 17c is a flow diagram illustrating a third part of the processing steps employed in calculating scores for deletions and decodings of phonemes during the dynamic programming matching process;
    • Figure 17d is a flow diagram illustrating the remaining steps employed in the processing steps employed in calculating scores for deletions and decodings of phonemes during the dynamic programming matching process;
    • Figure 18 is a schematic block diagram illustrating the main components of an alternative word model generation unit which may be used in the speech recognition system shown in Figure 2;
    • Figure 19 is a plot illustrating the way in which probability scores vary with different pronunciations of input words;
    • Figure 20 schematically illustrates two sequences of phonemes; and
    • Figure 21 schematically illustrates a phoneme lattice formed by combining the two phoneme sequences illustrated in Figure 20.
    FIRST EMBODIMENT
  • Embodiments of the present invention can be implemented using dedicated hardware circuits, but the embodiment that is to be described is implemented in computer software or code, which is run in conjunction with a personal computer. In alternative embodiments, the software may be run in conjunction with a workstation, photocopier, facsimile machine, personal digital assistant (PDA), web browser or the like.
  • Figure 1 shows a personal computer (PC) 1 which is programmed to operate an embodiment of the present invention. A keyboard 3, a pointing device 5, a microphone 7 and a telephone line 9 are connected to the PC 1 via an interface 11. The keyboard 3 and pointing device enable the system to be controlled by a user. The microphone 7 converts the acoustic speech signal of the user into an equivalent electrical signal and supplies this to the PC 1 for processing. An internal modem and speech receiving circuit (not shown) may be connected to the telephone line 9 so that the PC 1 can communicate with, for example, a remote computer or with a remote user.
  • The program instructions which make the PC 1 operate in accordance with the present invention may be supplied for use with the PC 1 on, for example, a storage device such as a magnetic disc 13 or by downloading the software from a remote computer over, for example, the Internet via the internal modem and telephone unit 9.
  • The operation of the speech recognition system 14 implemented in the PC 1 will now be described in more detail with reference to Figure 2. Electrical signals representative of the user's input speech from the microphone 7 are applied to a preprocessor 15 which converts the input speech signal into a sequence of parameter frames, each representing a corresponding time frame of the input speech signal. The sequence of parameter frames output by the preprocessor 15 is then supplied to the speech recognition engine 17 where the speech is recognised by comparing the input sequence of parameter frames with phoneme models 19 to generate a sequence of phonemes representative of the input utterance. During the normal mode of operation of the speech recognition system 14, this sequence of phonemes is input, via a switch 20, to the word decoder 21 which identifies words within the generated phoneme sequence by comparing the phoneme sequence with those stored in a word to phoneme dictionary 23. The words 25 output by the word decoder 21 are then used by the PC 1 either to control the software applications running on the PC 1 or for insertion as text in a word processing program running on the PC 1.
  • In order to be able to add words to the word to phoneme dictionary 23, the speech recognition system 14 also has a training mode of operation. This is activated by the user applying an appropriate command through the user interface 27 using the keyboard 3 or the pointing device 5. This request to enter the training mode is passed to the control unit 29 which causes the switch 20 to connect the output of the speech recognition engine 17 to the input of a word model generation unit 31. At the same time, the control unit 29 outputs a prompt to the user, via the user interface 27, to provide several renditions of the word to be added. Each of these renditions is processed by the pre-processor 15 and the speech recognition engine 17 to generate a plurality of sequences of phonemes representative of a respective rendition of the new word. These sequences of phonemes are input to the word model generation unit 31 which processes them to identify the most probable phoneme sequence which could have been mis-recognised as all the training examples and this sequence is stored together with a typed version of the word input by the user, in the word to phoneme dictionary 23. After the user has finished adding words to the dictionary 23, the control unit 29 returns the speech recognition system 14 to its normal mode of operation by connecting the output of the speech recognition engine back to the word decoder 21 through the switch 20.
  • WORD TRAINING
  • Figure 3 shows in more detail the components of the word model generation unit 31 discussed above. As shown, there is a memory 41 which receives each phoneme sequence output from the speech recognition engine 17 for each of the renditions of the new word input by the user. After the user has finished inputting the training examples (which is determined from an input received from the user through the user interface 27), the phoneme sequences stored in the memory 41 are applied to the dynamic programming alignment unit 43 which, in this embodiment, uses a dynamic programming alignment technique to compare the phoneme sequences and to determine the best alignment between them. In this embodiment, the alignment unit 43 performs the comparison and alignment of all the phoneme sequences at the same time. The identified alignment between the input sequences is then input to a phoneme sequence determination unit 45, which uses this alignment to determine the sequence of phonemes which matches best with the input phoneme sequences.
  • As those skilled in the art will appreciate, each phoneme sequence representative of a rendition of the new word can have insertions and deletions relative to this unknown sequence of phonemes which matches best with all the input sequences of phonemes. This is illustrated in Figure 4, which shows a possible matching between a first phoneme sequence (labelled d^1_i, d^1_{i+1}, d^1_{i+2}, ...) representative of a first rendition of the new word, a second phoneme sequence (labelled d^2_j, d^2_{j+1}, d^2_{j+2}, ...) representative of a second rendition of the new word and a sequence of phonemes (labelled p_n, p_{n+1}, p_{n+2}, ...) which represents a canonical sequence of phonemes of the text which best matches the two input sequences. As shown in Figure 4, the dynamic programming alignment unit 43 must allow for the insertion of phonemes in both the first and second phoneme sequences (represented by the inserted phonemes d^1_{i+3} and d^2_{j+1}) as well as the deletion of phonemes from the first and second phoneme sequences (represented by phonemes d^1_{i+1} and d^2_{j+2}, which are both aligned with two phonemes in the canonical sequence of phonemes), relative to the canonical sequence of phonemes.
  • OVERVIEW OF DP ALIGNMENT
  • As those skilled in the art of speech processing know, dynamic programming is a technique which can be used to find the optimum alignment between sequences of features, which in this embodiment are phonemes. In the simple case where there are two renditions of the new word (and hence only two sequences of phonemes to be aligned), the dynamic programming alignment unit 43 calculates the optimum alignment by simultaneously propagating a plurality of dynamic programming paths, each of which represents a possible alignment between a sequence of phonemes from the first sequence (representing the first rendition) and a sequence of phonemes from the second sequence (representing the second rendition). All paths begin at a start null node which is at the beginning of the two input sequences of phonemes and propagate until they reach an end null node, which is at the end of the two sequences of phonemes.
  • Figures 5 and 6 schematically illustrate the alignment which is performed and this path propagation. In particular, Figure 5 shows a rectangular coordinate plot with the horizontal axis being provided for the first phoneme sequence representative of the first rendition and the vertical axis being provided for the second phoneme sequence representative of the second rendition. The start null node ø_s is provided at the top left hand corner and the end null node ø_e is provided at the bottom right hand corner. As shown in Figure 6, the phonemes of the first sequence are provided along the horizontal axis and the phonemes of the second sequence are provided down the vertical axis. Figure 6 also shows a number of lattice points, each of which represents a possible alignment (or decoding) between a phoneme of the first phoneme sequence and a phoneme of the second phoneme sequence. For example, lattice point 21 represents a possible alignment between first sequence phoneme d^1_3 and second sequence phoneme d^2_1. Figure 6 also shows three dynamic programming paths m_1, m_2 and m_3 which represent three possible alignments between the first and second phoneme sequences and which begin at the start null node ø_s and propagate through the lattice points to the end null node ø_e.
  • In order to determine the best alignment between the first and second phoneme sequences, the dynamic programming alignment unit 43 keeps a score for each of the dynamic programming paths which it propagates, which score is dependent upon the overall similarity of the phonemes which are aligned along the path. Additionally, in order to limit the number of deletions and insertions of phonemes in the sequences being aligned, the dynamic programming process places certain constraints on the way in which each dynamic programming path can propagate.
  • Figure 7 shows the dynamic programming constraints which are used in this embodiment. In particular, if a dynamic programming path ends at lattice point (i,j), representing an alignment between phoneme d^1_i of the first phoneme sequence and phoneme d^2_j of the second phoneme sequence, then that dynamic programming path can propagate to the lattice points (i+1,j), (i+2,j), (i+3,j), (i,j+1), (i+1,j+1), (i+2,j+1), (i,j+2), (i+1,j+2) and (i,j+3). These propagations therefore allow the insertion and deletion of phonemes in the first and second phoneme sequences relative to the unknown canonical sequence of phonemes corresponding to the text of what was actually spoken.
  • As mentioned above, the dynamic programming alignment unit 43 keeps a score for each of the dynamic programming paths, which score is dependent upon the similarity of the phonemes which are aligned along the path. Therefore, when propagating a path ending at point (i,j) to these other points, the dynamic programming process adds the respective "cost" of doing so to the cumulative score for the path ending at point (i,j), which is stored in a store (SCORE(i,j)) associated with that point. In this embodiment, this cost includes insertion probabilities for any inserted phonemes, deletion probabilities for any deletions and decoding probabilities for a new alignment between a phoneme from the first phoneme sequence and a phoneme from the second phoneme sequence. In particular, when there is an insertion, the cumulative score is multiplied by the probability of inserting the given phoneme; when there is a deletion, the cumulative score is multiplied by the probability of deleting the phoneme; and when there is a decoding, the cumulative score is multiplied by the probability of decoding the two phonemes.
  • In order to be able to calculate these probabilities, the system stores a probability for all possible phoneme combinations in memory 47. In this embodiment, the deletion of a phoneme from either the first or second phoneme sequence is treated in a similar manner to a decoding. This is achieved by simply treating a deletion as another phoneme. Therefore, if there are 43 phonemes known to the system, then the system will store one thousand eight hundred and ninety two (1892 = 43 x 44) decoding/deletion probabilities, one for each possible phoneme decoding and deletion. This is illustrated in Figure 8, which shows the possible phoneme decodings which are stored for the phoneme /ax/ and which includes the deletion phoneme (ø) as one of the possibilities. As those skilled in the art will appreciate, all the decoding probabilities for a given phoneme must sum to one, since there are no other possibilities. In addition to these decoding/deletion probabilities, 43 insertion probabilities (PI( )), one for each possible phoneme insertion, are also stored in the memory 47. As will be described later, these probabilities are determined in advance from training data.
  • In this embodiment, to calculate the probability of decoding a phoneme (d2 j) from the second phoneme sequence as a phoneme (d1 i) from the first phoneme sequence, the system sums, over all possible phonemes p, the probability of decoding the phoneme p as the first sequence phoneme d1 i and as the second sequence phoneme d2 j, weighted by the probability of phoneme p occurring unconditionally, i.e.:

    $P(d^1_i \mid d^2_j) = \sum_{r=1}^{N_p} P(d^1_i \mid p_r)\,P(d^2_j \mid p_r)\,P(p_r)$    (1)

    where Np is the total number of phonemes known to the system; P(d1 i|pr) is the probability of decoding phoneme pr as the first sequence phoneme d1 i; P(d2 j|pr) is the probability of decoding phoneme pr as the second sequence phoneme d2 j; and P(pr) is the probability of phoneme pr occurring unconditionally.
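  • A direct transcription of equation (1) into Python may help to clarify the calculation. The table names confusion (holding the decoding probabilities P(d|p)) and p_occ (holding the unconditional occurrence probabilities P(p)) are hypothetical names chosen for this sketch; in the embodiment these values are the trained probabilities stored in memory 47:

```python
def decode_probability(d1_i, d2_j, confusion, p_occ):
    """Probability of decoding second sequence phoneme d2_j as first
    sequence phoneme d1_i, per equation (1): sum over all phonemes p of
    P(d1_i | p) * P(d2_j | p) * P(p)."""
    return sum(confusion[p][d1_i] * confusion[p][d2_j] * p_occ[p]
               for p in p_occ)

# Toy two-phoneme system (illustrative numbers only):
confusion = {'ax': {'ax': 0.9, 'ah': 0.1}, 'ah': {'ax': 0.2, 'ah': 0.8}}
p_occ = {'ax': 0.6, 'ah': 0.4}
print(decode_probability('ax', 'ah', confusion, p_occ))  # 0.118
```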
  • To illustrate the score propagations, an example will now be considered. In particular, when propagating from lattice point (i,j) to lattice point (i+2,j+1), the phoneme d1 i+1 from the first phoneme sequence is inserted relative to the second phoneme sequence and there is a decoding between phoneme d1 i+2 from the first phoneme sequence and phoneme d2 j+1 from the second phoneme sequence. Therefore, the score propagated to point (i+2,j+1) is given by:

    $S(i{+}2,\,j{+}1) = S(i,j)\cdot P_I(d^1_{i+1})\cdot \sum_{r=1}^{N_p} P(d^1_{i+2} \mid p_r)\,P(d^2_{j+1} \mid p_r)\,P(p_r)$
  • As those skilled in the art will appreciate, during this path propagation, several paths will meet at the same lattice point. In order that the best path is propagated, a comparison between the scores is made at each lattice point and the path having the best score is continued whilst the other path(s) is (are) discarded. In order that the best alignment between the two input phoneme sequences can be determined, where paths meet and paths are discarded, a back pointer is stored which points to the lattice point from which the surviving path propagated. In this way, once the dynamic programming alignment unit 43 has propagated the paths through to the null end node and the path having the overall best score has been determined, a backtracking routine can be used to identify the best alignment of the phonemes in the two input phoneme sequences. The phoneme sequence determination unit 45 then uses this alignment to determine the sequence of phonemes which best represents the input phoneme sequences. The way in which this is achieved in this embodiment will be described later.
  • DETAILED DESCRIPTION OF DP ALIGNMENT
  • A more detailed description will now be given of the operation of the dynamic programming alignment unit 43 when two sequences of phonemes (for two renditions of the new word) are aligned. Initially, the scores associated with all the nodes are set to an appropriate initial value. The alignment unit 43 then propagates paths from the null start node (øs) to all possible start points defined by the dynamic programming constraints discussed above. The dynamic programming score for each path that is started is then set equal to the transition score for passing from the null start node to the respective start point. The paths which are started in this way are then propagated through the array of lattice points defined by the first and second phoneme sequences until they reach the null end node øe. To do this, the alignment unit 43 processes the array of lattice points column by column in a raster-like fashion.
  • The control algorithm used to control this raster processing operation is shown in Figure 9. As shown, in step s149, the system initialises a first phoneme sequence loop pointer, i, and a second phoneme loop pointer, j, to zero. Then in step s151, the system compares the first phoneme sequence loop pointer i with the number of phonemes in the first phoneme sequence (Nseq1). Initially the first phoneme sequence loop pointer i is set to zero and the processing therefore proceeds to step s153 where a similar comparison is made for the second phoneme sequence loop pointer j relative to the total number of phonemes in the second phoneme sequence (Nseq2). Initially the loop pointer j is also set to zero and therefore the processing proceeds to step s155 where the system propagates the path ending at lattice point (i,j) using the dynamic programming constraints discussed above. The way in which the system propagates the paths in step s155 will be described in more detail later. After step s155, the loop pointer j is incremented by one in step s157 and the processing returns to step s153. Once this processing has looped through all the phonemes in the second phoneme sequence (thereby processing the current column of lattice points), the processing proceeds to step s159 where the loop pointer j is reset to zero and the loop pointer i is incremented by one. The processing then returns to step s151 where a similar procedure is performed for the next column of lattice points. Once the last column of lattice points has been processed, the processing proceeds to step s161 where the loop pointer i is reset to zero and the processing ends.
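  • As a sketch, this raster control loop reduces to a simple pair of nested loops; the propagate callback stands in for step s155 and the names are illustrative only:

```python
def align_all(seq1, seq2, propagate):
    """Raster (column-by-column) control loop of Figure 9, as a sketch.
    `propagate` is a callback implementing step s155 for the path
    ending at lattice point (i, j)."""
    for i in range(len(seq1)):      # first phoneme sequence loop pointer
        for j in range(len(seq2)):  # second phoneme sequence loop pointer
            propagate(i, j)
```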
  • Propagate
  • In step s155 shown in Figure 9, the system propagates the path ending at lattice point (i,j) using the dynamic programming constraints discussed above. Figure 10 is a flowchart which illustrates the processing steps involved in performing this propagation step. As shown, in step s211, the system sets the values of two variables mxi and mxj and initialises first phoneme sequence loop pointer i2 and second phoneme sequence loop pointer j2. The loop pointers i2 and j2 are provided to loop through all the lattice points to which the path ending at point (i,j) can propagate and the variables mxi and mxj are used to ensure that i2 and j2 can only take the values which are allowed by the dynamic programming constraints. In particular, mxi is set equal to i plus mxhops (which is a constant having a value which is one more than the maximum number of "hops" allowed by the dynamic programming constraints and in this embodiment is set to a value of four, since a path can jump at most to a phoneme that is three phonemes further along the sequence), provided this is less than or equal to the number of phonemes in the first phoneme sequence, otherwise mxi is set equal to the number of phonemes in the first phoneme sequence (Nseq1). Similarly, mxj is set equal to j plus mxhops, provided this is less than or equal to the number of phonemes in the second phoneme sequence, otherwise mxj is set equal to the number of phonemes in the second phoneme sequence (Nseq2). Finally, in step s211, the system initialises the first phoneme sequence loop pointer i2 to be equal to the current value of the first phoneme sequence loop pointer i and the second phoneme sequence loop pointer j2 to be equal to the current value of the second phoneme sequence loop pointer j.
  • The processing then proceeds to step s219 where the system compares the first phoneme sequence loop pointer i2 with the variable mxi. Since loop pointer i2 is set to i and mxi is set equal to i+4 in step s211, the processing will proceed to step s221 where a similar comparison is made for the second phoneme sequence loop pointer j2. The processing then proceeds to step s223 which ensures that the path does not stay at the same lattice point (i,j) since initially, i2 will equal i and j2 will equal j. Therefore, the processing will initially proceed to step s225 where the second phoneme sequence loop pointer j2 is incremented by one.
  • The processing then returns to step s221 where the incremented value of j2 is compared with mxj. If j2 is less than mxj, then the processing returns to step s223 and then proceeds to step s227, which is operable to prevent too large a hop along both phoneme sequences. It does this by ensuring that the path is only propagated if i2 + j2 is less than i + j + mxhops. This ensures that only the triangular set of points shown in Figure 7 are processed. Provided this condition is met, the processing proceeds to step s229 where the system calculates the transition score (TRANSCORE) from lattice point (i,j) to lattice point (i2,j2). In this embodiment, the transition and cumulative scores are probability based and they are combined by multiplying the probabilities together. However, in this embodiment, in order to remove the need to perform multiplications and in order to avoid the use of high floating point precision, the system employs log probabilities for the transition and cumulative scores. Therefore, in step s231, the system adds this transition score to the cumulative score stored for the point (i,j) and copies this to a temporary store, TEMPSCORE.
  • As mentioned above, in this embodiment, if two or more dynamic programming paths meet at the same lattice point, the cumulative scores associated with each of the paths are compared and all but the best path (i.e. the path having the best score) are discarded. Therefore, in step s233, the system compares TEMPSCORE with the cumulative score already stored for point (i2,j2) and the largest score is stored in SCORE (i2,j2) and an appropriate back pointer is stored to identify which path had the larger score. The processing then returns to step s225 where the loop pointer j2 is incremented by one and the processing returns to step s221. Once the second phoneme sequence loop pointer j2 has reached the value of mxj, the processing proceeds to step s235, where the loop pointer j2 is reset to the initial value j and the first phoneme sequence loop pointer i2 is incremented by one. The processing then returns to step s219 where the processing begins again for the next column of points shown in Figure 7. Once the path has been propagated from point (i,j) to all the other points shown in Figure 7, the processing ends.
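  • The propagation step can be sketched in Python as follows. The dictionaries score and backptr stand in for SCORE(i,j) and the stored back pointers, and transition_score stands in for the TRANSCORE calculation of Figure 11; all names are illustrative, not taken from the embodiment:

```python
NEG_INF = float('-inf')

def propagate(i, j, score, backptr, n1, n2, transition_score, mxhops=4):
    """Sketch of the propagation step s155 (Figure 10).  score maps
    lattice points to cumulative log-probability scores; backptr records
    the predecessor of the best path reaching each point."""
    mxi = min(i + mxhops, n1)
    mxj = min(j + mxhops, n2)
    for i2 in range(i, mxi):
        for j2 in range(j, mxj):
            if (i2, j2) == (i, j) or i2 + j2 >= i + j + mxhops:
                continue  # stay-put move and over-large hops are barred
            tempscore = score[(i, j)] + transition_score(i, j, i2, j2)
            # where paths meet, only the best-scoring one survives (s233)
            if tempscore > score.get((i2, j2), NEG_INF):
                score[(i2, j2)] = tempscore
                backptr[(i2, j2)] = (i, j)
```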
  • Transition Score
  • In step s229 the transition score from one point (i,j) to another point (i2,j2) is calculated. This involves calculating the appropriate insertion probabilities, deletion probabilities and decoding probabilities relative to the start point and end point of the transition. The way in which this is achieved in this embodiment, will now be described with reference to Figures 11 and 12.
  • In particular, Figure 11 shows a flow diagram which illustrates the general processing steps involved in calculating the transition score for a path propagating from lattice point (i,j) to lattice point (i2,j2). In step s291, the system calculates, for each first sequence phoneme which is inserted between point (i,j) and point (i2,j2), the score for inserting the inserted phoneme(s) (which is just the log of probability PI( ) discussed above) and adds this to an appropriate store, INSERTSCORE. The processing then proceeds to step s293 where the system performs a similar calculation for each second sequence phoneme which is inserted between point (i,j) and point (i2,j2) and adds this to INSERTSCORE. As mentioned above, the scores which are calculated are log based probabilities, therefore the addition of the scores in INSERTSCORE corresponds to the multiplication of the corresponding insertion probabilities. The processing then proceeds to step s295 where the system calculates (in accordance with equation (1) above) the scores for any deletions and/or any decodings in propagating from point (i,j) to point (i2,j2) and these scores are added and stored in an appropriate store, DELSCORE. The processing then proceeds to step s297 where the system adds INSERTSCORE and DELSCORE and copies the result to TRANSCORE.
  • The processing involved in step s295 to determine the deletion and/or decoding scores in propagating from point (i,j) to point (i2,j2) will now be described in more detail with reference to Figure 12. As shown, initially in step s325, the system determines if the first phoneme sequence loop pointer i2 equals first phoneme sequence loop pointer i. If it does, then the processing proceeds to step s327 where a phoneme loop pointer r is initialised to one. The phoneme pointer r is used to loop through each possible phoneme known to the system during the calculation of equation (1) above. The processing then proceeds to step s329, where the system compares the phoneme pointer r with the number of phonemes known to the system, Nphonemes (which in this embodiment equals 43). Initially r is set to one in step s327, therefore the processing proceeds to step s331 where the system determines the log probability of phoneme pr occurring (i.e. log P(pr)) and copies this to a temporary score TEMPDELSCORE. If first phoneme sequence loop pointer i2 equals loop pointer i, then the system is propagating the path ending at point (i,j) to one of the points (i,j+1), (i,j+2) or (i,j+3). Therefore, there is a phoneme in the second phoneme sequence which is not in the first phoneme sequence. Consequently, in step s333, the system adds the log probability of deleting phoneme pr from the first phoneme sequence (i.e. log P(ø|pr)) to TEMPDELSCORE. The processing then proceeds to step s335, where the system adds the log probability of decoding phoneme pr as second sequence phoneme d2 j2 (i.e. log P(d2 j2|pr)) to TEMPDELSCORE. The processing then proceeds to step s337 where a "log addition" of TEMPDELSCORE and DELSCORE is performed and the result is stored in DELSCORE.
  • In this embodiment, since the calculation of decoding probabilities (in accordance with equation (1) above) involves summations and multiplications of probabilities, and since we are using log probabilities, this "log addition" operation effectively converts TEMPDELSCORE and DELSCORE from log probabilities back to probabilities, adds them and then reconverts them back to log probabilities. This "log addition" is a well known technique in the art of speech processing and is described in, for example, the book entitled "Automatic Speech Recognition: The Development of the SPHINX System" by Kai-Fu Lee, published by Kluwer Academic Publishers, 1989, at pages 28 and 29. After step s337, the processing proceeds to step s339 where the phoneme loop pointer r is incremented by one and then the processing returns to step s329 where a similar processing is performed for the next phoneme known to the system. Once this calculation has been performed for each of the 43 phonemes known to the system, the processing ends.
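  • For reference, this "log addition" operation can be implemented in a numerically stable way as follows (a standard sketch, not specific to this embodiment):

```python
import math

def log_add(log_a, log_b):
    """'Log addition': log(exp(log_a) + exp(log_b)), computed stably
    by factoring out the larger of the two log probabilities."""
    if log_a < log_b:
        log_a, log_b = log_b, log_a          # ensure log_a is the larger
    if log_b == float('-inf'):
        return log_a
    return log_a + math.log1p(math.exp(log_b - log_a))

print(log_add(math.log(0.3), math.log(0.2)))  # == log(0.5)
```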
  • If at step s325, the system determines that i2 is not equal to i, then the processing proceeds to step s341 where the system determines if the second phoneme sequence loop pointer j2 equals second phoneme sequence loop pointer j. If it does, then the processing proceeds to step s343 where the phoneme loop pointer r is initialised to one. The processing then proceeds to step s345 where the phoneme loop pointer r is compared with the total number of phonemes known to the system (Nphonemes). Initially r is set to one in step s343, and therefore, the processing proceeds to step s347 where the log probability of phoneme pr occurring is determined and copied into the temporary store TEMPDELSCORE. The processing then proceeds to step s349 where the system determines the log probability of decoding phoneme pr as first sequence phoneme d1 i2 and adds this to TEMPDELSCORE. If the second phoneme sequence loop pointer j2 equals loop pointer j, then the system is propagating the path ending at point (i,j) to one of the points (i+1,j), (i+2,j) or (i+3,j). Therefore, there is a phoneme in the first phoneme sequence which is not in the second phoneme sequence. Consequently, in step s351, the system determines the log probability of deleting phoneme pr from the second phoneme sequence and adds this to TEMPDELSCORE. The processing then proceeds to step s353 where the system performs the log addition of TEMPDELSCORE with DELSCORE and stores the result in DELSCORE. The phoneme loop pointer r is then incremented by one in step s355 and the processing returns to step s345. Once the processing steps s347 to s353 have been performed for all the phonemes known to the system, the processing ends.
  • If at step s341, the system determines that second phoneme sequence loop pointer j2 is not equal to loop pointer j, then the processing proceeds to step s357 where the phoneme loop pointer r is initialised to one. The processing then proceeds to step s359 where the system compares the phoneme counter r with the number of phonemes known to the system (Nphonemes). Initially r is set to one in step s357, and therefore, the processing proceeds to step s361 where the system determines the log probability of phoneme pr occurring and copies this to the temporary score TEMPDELSCORE. If the loop pointer j2 is not equal to loop pointer j, then the system is propagating the path ending at point (i,j) to one of the points (i+1,j+1), (i+1,j+2) and (i+2,j+1). Therefore, there are no deletions, only insertions and decodings. The processing therefore proceeds to step s363 where the log probability of decoding phoneme pr as first sequence phoneme d1 i2 is added to TEMPDELSCORE. The processing then proceeds to step s365 where the log probability of decoding phoneme pr as second sequence phoneme d2 j2 is determined and added to TEMPDELSCORE. The system then performs, in step s367, the log addition of TEMPDELSCORE with DELSCORE and stores the result in DELSCORE. The phoneme counter r is then incremented by one in step s369 and the processing returns to step s359. Once processing steps s361 to s367 have been performed for all the phonemes known to the system, the processing ends.
  • Backtracking and Phoneme Sequence Generation
  • As mentioned above, after the dynamic programming paths have been propagated to the null end node øe, the path having the best cumulative score is identified and the dynamic programming alignment unit 43 backtracks through the back pointers which were stored in step s233 for the best path, in order to identify the best alignment between the two input sequences of phonemes. In this embodiment, the phoneme sequence determination unit 45 then determines, for each aligned pair of phonemes (d1 m, d2 n) of the best alignment, the unknown phoneme, p, which maximises:

    $P(d^1_m \mid p)\,P(d^2_n \mid p)\,P(p)$

    using the decoding probabilities discussed above which are stored in memory 47. This phoneme, p, is the phoneme which is taken to best represent the aligned pair of phonemes. By identifying phoneme, p, for each aligned pair, the determination unit 45 identifies the sequence of canonical phonemes that best represents the two input phoneme sequences. In this embodiment, this canonical sequence is then output by the determination unit 45 and stored in the word to phoneme dictionary 23 together with the text of the new word typed in by the user.
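  • The backtracking and phoneme determination steps might be sketched as follows, reusing the hypothetical backptr, confusion and p_occ structures introduced in the sketches above:

```python
def backtrack(backptr, end):
    """Follow the stored back pointers from the end node to recover the
    best alignment as a list of lattice points, earliest first."""
    path, node = [], end
    while node in backptr:
        path.append(node)
        node = backptr[node]
    path.append(node)
    return list(reversed(path))

def representative_phoneme(d1_m, d2_n, confusion, p_occ):
    """The phoneme p maximising P(d1_m | p) * P(d2_n | p) * P(p) for one
    aligned pair, as computed by the determination unit."""
    return max(p_occ,
               key=lambda p: confusion[p][d1_m] * confusion[p][d2_n] * p_occ[p])
```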
  • SECOND EMBODIMENT
  • A description has been given above of the way in which the dynamic programming alignment unit 43 aligns two sequences of phonemes and the way in which the phoneme sequence determination unit 45 obtains the sequence of phonemes which best represents the two input sequences given this best alignment. As those skilled in the art will appreciate, when training a new word, the user may input more than two renditions. Therefore, the dynamic programming alignment unit 43 should preferably be able to align any number of input phoneme sequences and the determination unit 45 should be able to derive the phoneme sequence which best represents any number of input phoneme sequences given the best alignment between them. A description will now be given of the way in which the dynamic programming alignment unit 43 aligns three input phoneme sequences together and how the determination unit 45 determines the phoneme sequence which best represents the three input phoneme sequences.
  • Figure 13 shows a three-dimensional coordinate plot with one dimension being provided for each of the three phoneme sequences and illustrates the three-dimensional lattice of points which are processed by the dynamic programming alignment unit 43 in this case. The alignment unit 43 uses the same transition scores and phoneme probabilities and similar dynamic programming constraints in order to propagate and score each of the paths through the three-dimensional network of lattice points in the plot shown in Figure 13.
  • A detailed description will now be given with reference to Figures 14 to 17 of the three-dimensional dynamic programming alignment carried out by the alignment unit 43 in this case. As those skilled in the art will appreciate from a comparison of Figures 14 to 17 with Figures 9 to 12, the three-dimensional dynamic programming process that is performed is essentially the same as the two-dimensional dynamic programming process performed when there were only two input phoneme sequences, except with the addition of a few further control loops in order to take into account the extra phoneme sequence.
  • As in the first case, the scores associated with all the nodes are initialised and then the dynamic programming alignment unit 43 propagates dynamic programming paths from the null start node øs to each of the start points defined by the dynamic programming constraints. It then propagates these paths from these start points to the null end node øe by processing the points in the search space in a raster-like fashion. The control algorithm used to control this raster processing operation is shown in Figure 14. As can be seen from a comparison of Figure 14 with Figure 9, this control algorithm has the same general form as the control algorithm used when there were only two phoneme sequences to be aligned. The only differences are in the more complex propagation step s419 and in the provision of query block s421, block s423 and block s425 which are needed in order to process the additional lattice points caused by the third phoneme sequence. For a better understanding of how the control algorithm illustrated in Figure 14 operates, the reader is referred to the description given above of Figure 9.
  • Figure 15 is a flowchart which illustrates the processing steps involved in the propagation step s419 shown in Figure 14. Figure 10 shows the corresponding flowchart for the two-dimensional case described above. As can be seen from a comparison of Figure 15 with Figure 10, the main differences between the two flowcharts are the additional variables (mxk and k2) and processing blocks (s451, s453, s455 and s457) which are required to process the additional lattice points due to the third phoneme sequence. For a better understanding of the processing steps involved in the flowchart shown in Figure 15, the reader is referred to the description of Figure 10.
  • Figure 16 is a flowchart illustrating the processing steps involved in calculating the transition score when a dynamic programming path is propagated from point (i,j,k) to point (i2, j2, k2) during the processing steps in Figure 15. Figure 11 shows the corresponding flowchart for the two-dimensional case described above. As can be seen from comparing Figure 16 to Figure 11, the main difference between the two flowcharts is the additional process step s461 for calculating the insertion probabilities for inserted phonemes in the third phoneme sequence. Therefore, for a better understanding of the processing steps involved in the flowchart shown in Figure 16, the reader is referred to the description of Figure 11.
  • The processing steps involved in step s463 in Figure 16 to determine the deletion and/or decoding scores in propagating from point (i,j,k) to point (i2,j2,k2) will now be described in more detail with reference to Figure 17. Initially, the system determines (in steps s525 to s537) if there are any phoneme deletions from any of the three phoneme sequences by comparing i2, j2 and k2 with i, j and k respectively. As shown in Figures 17a to 17d, there are eight main branches which operate to determine the appropriate decoding and deletion probabilities for the eight possible situations. Since the processing performed in each situation is very similar, a description will only be given of one of the situations.
  • In particular, if at steps s525, s527 and s531, the system determines that there has been a deletion from the first phoneme sequence (because i2 = i) and that there are no deletions from the other two phoneme sequences (because j2 ≠ j and k2 ≠ k), then the processing proceeds to step s541 where a phoneme loop pointer r is initialised to one. The phoneme loop pointer r is used to loop through each possible phoneme known to the system during the calculation of an equation similar to equation (1) described above. The processing then proceeds to step s543 where the system compares the phoneme pointer r with the number of phonemes known to the system, Nphonemes (which in this embodiment equals 43). Initially, r is set to one in step s541. Therefore the processing proceeds to step s545 where the system determines the log probability of phoneme pr occurring and copies this to a temporary score TEMPDELSCORE. The processing then proceeds to step s547 where the system determines the log probability of deleting phoneme pr in the first phoneme sequence and adds this to TEMPDELSCORE. The processing then proceeds to step s549 where the system determines the log probability of decoding phoneme pr as second sequence phoneme d2 j2 and adds this to TEMPDELSCORE. The processing then proceeds to step s551 where the system determines the log probability of decoding phoneme pr as third sequence phoneme d3 k2 and adds this to TEMPDELSCORE. The processing then proceeds to step s553 where the system performs the log addition of TEMPDELSCORE with DELSCORE and stores the result in DELSCORE. The processing then proceeds to step s555 where the phoneme pointer r is incremented by one. The processing then returns to step s543 where a similar processing is performed for the next phoneme known to the system. Once this calculation has been performed for each of the 43 phonemes known to the system, the processing ends.
  • As can be seen from a comparison of the processing steps performed in Figure 17 and the steps performed in Figure 12, the term calculated within the dynamic programming algorithm for decodings and deletions is similar to equation (1) but has an additional probability term for the third phoneme sequence. In particular, it has the following form:

    $\sum_{r=1}^{N_p} P(d^1_i \mid p_r)\,P(d^2_j \mid p_r)\,P(d^3_k \mid p_r)\,P(p_r)$
  • As with the two-dimensional case, after the dynamic programming paths have been propagated through to the null end node øe, the dynamic programming alignment unit 43 identifies the path having the best score and uses the back pointers which have been stored for this path to identify the aligned phoneme triples (i.e. the aligned phonemes in the three sequences) which lie along this best path. In this embodiment, for each of these aligned phoneme triples (d1 m, d2 n, d3 o), the phoneme sequence determination unit 45 determines the phoneme, p, which maximises:

    $P(d^1_m \mid p)\,P(d^2_n \mid p)\,P(d^3_o \mid p)\,P(p)$

    in order to generate the canonical sequence of phonemes which best represents the three input phoneme sequences.
  • A description has been given above of the way in which the dynamic programming alignment unit 43 aligns two or three sequences of phonemes. As has been demonstrated for the three phoneme sequence case, the addition of a further phoneme sequence simply involves the addition of a number of loops in the control algorithm in order to account for the additional phoneme sequence. As those skilled in the art will appreciate, the alignment unit 43 can therefore identify the best alignment between any number of input phoneme sequences by identifying how many sequences are input and then ensuring that appropriate control variables are provided for each input sequence. The determination unit 45 can then identify the sequence of phonemes which best represents the input phoneme sequences using these alignment results.
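  • This generalisation can be expressed compactly for any number K of input sequences. The following sketch (again with the hypothetical confusion and p_occ table names used above) reduces to equation (1) for K = 2 and to the three-sequence term above for K = 3:

```python
import math

def group_decode_score(aligned_group, confusion, p_occ):
    """Decoding/deletion term for an aligned group of phonemes, one from
    each of K input sequences: sum over p of P(p) * prod_k P(d_k | p).
    Deletions are handled, as in the text, by treating the deletion
    symbol as just another phoneme d_k."""
    return sum(p_occ[p] * math.prod(confusion[p][d] for d in aligned_group)
               for p in p_occ)
```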
  • THIRD EMBODIMENT
  • A system was described above which allows a user to add word models to a word to phoneme dictionary used in a speech recognition system. The new word as typed in is stored in the dictionary together with the sequence of phonemes which matches best with the spoken renditions of the new word. As those skilled in the art will appreciate, many words have different pronunciations. In this case, the user may input the different pronunciations as different word models. Alternatively, a lattice of the phoneme sequences representative of the different pronunciations could be generated and stored for the single word. A third embodiment will now be described which illustrates the way in which phoneme sequences representative of different pronunciations of a new word may be generated and how such a lattice of phoneme sequences may be created. In this third embodiment, the method of word training described earlier is used to initially create a single postulated phoneme sequence for a word taken from a large number of example phonetic decodings of the word. This postulated version is then used to score all the decodings of the word according to their similarity to the postulated form. Versions with similar scores are then clustered. If more than one cluster emerges, then a postulated representation for each cluster is determined and the original decodings are re-scored and re-clustered relative to the new postulated representations. This process is then iterated until a convergence criterion is met. This process will now be explained in more detail with reference to Figures 18 to 21.
  • Figure 18 shows in more detail the main components of the word model generation unit 31 of the third embodiment. As shown, the word model generation unit 31 is similar to that of the first embodiment. In particular, it includes a memory 41 which receives each phoneme sequence (Di) output from the speech recognition engine 17 for each of the renditions of the new word input by the user. After the user has finished inputting the training examples, the phoneme sequences stored in the memory 41 are applied to the dynamic programming alignment unit 43 which determines the best alignment between the phoneme sequences in the manner described above.
  • The phoneme sequence determination unit 45 then determines (also in the manner described above) the sequence of phonemes which matches best with the input phoneme sequences. This best phoneme sequence (Dbest) and the original input phoneme sequences (Di) are then passed to an analysis unit 61 which compares the best phoneme sequence with each of the input sequences to determine how well each of the input sequences corresponds to the best sequence. If the input sequence is the same length as the best sequence, then the analysis unit does this, in this embodiment, by calculating:

    $P(D_i \mid D_{best}) = \prod_{j=1}^{N_i} P(d^i_j \mid d^{best}_j)$

    where di j and dbest j are the corresponding phonemes from the current input sequence and the representative sequence respectively. If, on the other hand, the input sequence is not the same length as the best sequence, then, in this embodiment, the analysis unit compares the two sequences using a dynamic programming technique such as the one described above.
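  • As an illustrative sketch, this similarity measure may be computed as follows for equal-length sequences; the product is accumulated in the log domain for numerical stability (a minor departure from the formula as written), and confusion is the same hypothetical P(d|p) table as above:

```python
import math

def rendition_score(d_i, d_best, confusion):
    """Log-domain version of P(D_i | D_best): the sum over aligned
    positions of log P(d_i[j] | d_best[j])."""
    assert len(d_i) == len(d_best)
    return sum(math.log(confusion[p][d]) for d, p in zip(d_i, d_best))
```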
  • The analysis unit 61 then analyses each of these probabilities using a clustering algorithm to identify if different clusters can be found within these probability scores - which would indicate that the input sequences include different pronunciations for the input word. This is schematically illustrated in the plot shown in Figure 19. In particular, Figure 19 has the probability scores determined in the above manner plotted on the x-axis with the number of training sequences having that score plotted on the y-axis. (As those skilled in the art will appreciate, in practice the plot will be a histogram, since it is unlikely that many scores will be exactly the same). The two peaks 71 and 73 in this plot indicate that there are two different pronunciations of the training word. Once the analysis unit 61 has performed the clustering algorithm, it assigns each of the input phoneme sequences (Di) to one of the different clusters. The analysis unit 61 then outputs the input phoneme sequences of each cluster back to the dynamic programming alignment unit 43 which processes the input phoneme sequences in each cluster separately so that the phoneme sequence determination unit 45 can determine a representative phoneme sequence for each of the clusters. In this embodiment, when the alignment unit 43 processes the phoneme sequences of one cluster, the phoneme sequences of the or each other cluster are stored in the memory 47. Once a representative sequence for each cluster has been determined, the analysis unit 61 then compares each of the input phoneme sequences with all of the cluster representative sequences and then re-clusters the input phoneme sequences. This whole process is then iterated until a suitable convergence criterion is met.
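  • The embodiment does not prescribe a particular clustering algorithm. Purely by way of example, a simple one-dimensional two-means clustering of the per-rendition scores could be used:

```python
def cluster_scores(scores, iters=20):
    """Toy 1-D two-means clustering of the per-rendition scores.
    Returns a cluster index (0 or 1) for each score; this is only one
    simple possibility for the (unspecified) clustering step."""
    centres = [min(scores), max(scores)]
    assign = [0] * len(scores)
    for _ in range(iters):
        assign = [0 if abs(s - centres[0]) <= abs(s - centres[1]) else 1
                  for s in scores]
        for c in (0, 1):
            members = [s for s, a in zip(scores, assign) if a == c]
            if members:
                centres[c] = sum(members) / len(members)
    return assign

print(cluster_scores([-3.1, -3.0, -2.9, -8.2, -8.0]))  # -> [1, 1, 1, 0, 0]
```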
  • The representative sequence for each cluster identified using this process may then be stored in the word to phoneme dictionary 23 together with the typed version of the word. However, as shown in Figure 18, in this embodiment, the cluster representations are input to a phoneme sequence combination unit 63 which combines the representative phoneme sequences to generate a phoneme lattice using a standard forward/backward truncation technique. This is illustrated in Figures 20 and 21. In particular, Figure 20 shows two sequences of phonemes 75 and 77 represented by sequences A-B-C-D and A-E-C-D and Figure 21 shows the resulting phoneme lattice 79 obtained by combining the two sequences shown in Figure 20 using the forward/backward truncation technique. The phoneme lattice 79 output by the phoneme combination unit 63 is then stored in the word to phoneme dictionary 23 together with the typed version of the word.
  • As those skilled in the art will appreciate, storing the different pronunciations in a lattice in this way reduces the amount of storage space required for the phoneme sequences in the word to phoneme dictionary 23.
  • TRAINING
  • In the above embodiments, the dynamic programming alignment unit 43 used 1892 decoding/deletion probabilities and 43 insertion probabilities to score the dynamic programming paths in the phoneme alignment operation. In this embodiment, these probabilities are determined in advance during a training session and are stored in the memory 47. In particular, during this training session, a speech recognition system is used to provide a phoneme decoding of speech in two ways. In the first way, the speech recognition system is provided with both the speech and the actual words which are spoken. The speech recognition system can therefore use this information to generate the canonical phoneme sequence of the spoken words to obtain an ideal decoding of the speech. The speech recognition system is then used to decode the same speech, but this time without knowledge of the actual words spoken (referred to hereinafter as the free decoding). The phoneme sequence generated from the free decoding will differ from the canonical phoneme sequence in the following ways:
    i) the free decoding may make mistakes and insert phonemes into the decoding which are not present in the canonical sequence or, alternatively, omit phonemes in the decoding which are present in the canonical sequence;
    ii) one phoneme may be confused with another; and
    iii) even if the speech recognition system decodes the speech perfectly, the free decoding may nonetheless differ from the canonical decoding due to the differences between conversational pronunciation and canonical pronunciation, e.g., in conversational speech the word "and" (whose canonical forms are /ae/ /n/ /d/ and /ax/ /n/ /d/) is frequently reduced to /ax/ /n/ or even /n/.
  • Therefore, if a large number of utterances are decoded into their canonical forms and their free decoded forms, then a dynamic programming method (similar to the one described above) can be used to align the two. This provides counts of what was decoded, d, when the phoneme should, canonically, have been a p. From these training results, the above decoding, deletion and insertion probabilities can be approximated in the following way.
  • The probability that phoneme, d, is an insertion is given by:

    $P_I(d) = \dfrac{I_d}{n_o^d}$

    where Id is the number of times the automatic speech recognition system inserted phoneme d and $n_o^d$ is the total number of decoded phonemes which are inserted relative to the canonical sequence.
  • The probability of decoding phoneme p as phoneme d is given by:

    $P(d \mid p) = \dfrac{c_{dp}}{n_p}$

    where cdp is the number of times the automatic speech recognition system decoded d when it should have been p and np is the number of times the automatic speech recognition system decoded anything (including a deletion) when it should have been p.
  • The probability of not decoding anything (i.e. there being a deletion) when the phoneme p should have been decoded is given by:

    $P(\emptyset \mid p) = \dfrac{O_p}{n_p}$

    where Op is the number of times the automatic speech recognition system decoded nothing when it should have decoded p and np is the same as above.
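  • By way of illustration, these three maximum likelihood estimates can be computed from alignment counts as follows; the input format (a list of (p, d) pairs with d = None marking a deletion, plus a list of inserted phonemes) is an assumption of this sketch, not a detail of the embodiment:

```python
from collections import Counter

def estimate_probabilities(aligned_pairs, inserted):
    """Maximum likelihood estimates of the insertion, decoding and
    deletion probabilities from canonical/free-decoding alignment counts.

    aligned_pairs: list of (p, d) pairs, where d is None when the
    recogniser decoded nothing (a deletion) for canonical phoneme p;
    inserted: list of phonemes the recogniser inserted.
    """
    i_d = Counter(inserted)
    n_ins = len(inserted)                       # total inserted phonemes
    c = Counter(aligned_pairs)                  # counts c_dp of (p, d)
    n_p = Counter(p for p, _ in aligned_pairs)  # times p should have occurred
    p_insert = {d: i_d[d] / n_ins for d in i_d}              # PI(d)
    p_decode = {(p, d): c[(p, d)] / n_p[p] for (p, d) in c}  # P(d|p), P(0|p)
    return p_insert, p_decode
```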
  • ALTERNATIVE EMBODIMENTS
  • As those skilled in the art will appreciate, whilst the term "phoneme" has been used throughout the above description, the present application is not limited to its linguistic meaning, but includes the different sub-word units that are normally identified and used in standard speech recognition systems. In particular, the term "phoneme" covers any such sub-word unit, such as phones, syllables or katakana (the Japanese syllabary).
  • As those skilled in the art will appreciate, the above description of the dynamic programming alignment of the sequences of phonemes was given by way of example only and various modifications can be made. For example, whilst a raster scanning technique for propagating the paths through the lattice points was employed, other techniques could be employed which progressively propagate the paths through the lattice points. Additionally, as those skilled in the art will appreciate, dynamic programming constraints other than those described above may be used to control the matching process.
  • In the above embodiment, the dynamic programming alignment unit calculated decoding scores for each transition using equation (1) above. Instead of summing over all possible phonemes known to the system in accordance with equation (1), the dynamic programming alignment unit may be arranged, instead, to identify the unknown phoneme, p, which maximises the probability term within the summation and to use this maximum probability term as the probability of decoding the corresponding phonemes in the input sequences. In such an embodiment, the dynamic programming alignment unit would also preferably store an indication of the phoneme which maximised this probability with the appropriate back pointer, so that after the best alignment between the input phoneme sequences has been determined, the sequence of phonemes which best represents the input sequences can simply be determined by the phoneme sequence determination unit from this stored data.
  • In the above embodiment, the insertion, deletion and decoding probabilities were calculated from statistics of the speech recognition system using a maximum likelihood estimate of the probabilities. As those skilled in the art of statistics will appreciate, other techniques, such as maximum entropy techniques, can be used to estimate these probabilities. Details of a suitable maximum entropy technique can be found at pages 45 to 52 of the book entitled "Maximum Entropy and Bayesian Methods", published by Kluwer Academic Publishers and edited by John Skilling.
  • In the above embodiments, a dynamic programming algorithm was used to align the sequences of phonemes output by the speech recognition engine. As those skilled in the art will appreciate, any alignment technique could be used. For example, a naive technique could be used which considers all possible alignments. However, dynamic programming is preferred because of its ease of implementation using standard processing hardware. Additionally, whilst in the above embodiment, the dynamic programming alignment unit determined the "best" alignment between the input sequences of phonemes, this may not, in some applications, be strictly necessary. For example, the second, third or fourth best alignment may be used instead.
  • In the above embodiments, the dynamic programming alignment unit was operable to simultaneously align a plurality of input sequences of phonemes in order to identify the best alignment between them. In an alternative embodiment, the dynamic programming alignment unit may be arranged to compare two sequences of input phonemes at a time. In this case, for example, the third input phoneme sequence would be aligned with the sequence of phonemes which best represents the first two sequences of phonemes etc.
  • In the embodiment described above, during the dynamic programming algorithm, equation (1) was calculated for each aligned pair of phonemes. In the calculation of equation (1), the first sequence phoneme and the second sequence phoneme were compared with each of the phonemes known to the system. As those skilled in the art will appreciate, for a given first sequence phoneme and second sequence phoneme pair, many of the probabilities given in equation (1) will be equal to or very close to zero. Therefore, in an alternative embodiment the aligned phonemes may only be compared with a subset of all the known phonemes, which subset is determined in advance from the training data. To implement such an embodiment, the input phonemes to be aligned could be used to address a lookup table which would identify the phonemes which need to be compared with them using equation (1) (or its multi-input sequence equivalent).
  • In the above embodiment, the user input a number of spoken renditions of the new input word together with a typed rendition of the new word. In an alternative embodiment, rather than typing in a rendition of the new word, it may be input as a handwritten version which is subsequently converted into text using appropriate handwriting recognition software.
  • In the above embodiments, new word models were generated for use in a speech recognition system. In particular, in the examples given above, the new word models are stored together with a text version of the word so that the text can be used in a word processing application. However, as mentioned in the introduction, the word models may be used as a control command rather than being for use in generating corresponding text. In this case, rather than storing text corresponding to the new word model, the corresponding control action or command would be input and stored.
  • A number of embodiments and modifications have been described above. Many other embodiments and modifications will be apparent to those skilled in the art.

Claims (50)

  1. An apparatus for generating a sequence of sub-word units representative of a new word to be added to a dictionary (23) of a speech recognition system (14), the apparatus comprising:
    receiving means (17) for receiving signals representative of first and second spoken renditions of the new word;
    speech recognition means (17) for comparing the received first and second spoken renditions with pre-stored sub-word unit models (19) to generate first and second sequences of sub-word units representative of said first and second spoken renditions of the new word respectively;
    means (43) for aligning sub-word units of the first sequence with sub-word units of the second sequence to form a number of aligned pairs of sub-word units;
    first comparing means (43) for comparing, for each aligned pair, the first sequence sub-word unit in the aligned pair with each of a plurality of sub-word units taken from a set of predetermined sub-word units, to provide a corresponding plurality of comparison scores representative of the similarities between the first sequence sub-word unit and the respective sub-word units of the set;
    second comparing means (43) for comparing, for each aligned pair, the second sequence sub-word unit in the aligned pair with each of said plurality of sub-word units from the set, to provide a further corresponding plurality of comparison scores representative of the similarities between said second sequence sub-word unit and the respective sub-word units of the set; and
    means (45) for determining, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon the comparison scores generated by said first and second comparing means for the aligned pair, to determine a sequence of sub-word units representative of the spoken renditions of the new word.
  2. An apparatus according to claim 1, wherein said determining means (45) is operable to determine said sequence of sub-word units by determining, for each aligned pair of sub-word units, a sub-word unit that is confusingly similar to the first and second sub-word units of the aligned pair.
  3. An apparatus according to claim 1 or 2, further comprising:
    means (43) for combining the comparison scores obtained when comparing the first and second sequence sub-word units in the aligned pair with the same sub-word unit from the set, to generate a plurality of combined comparison scores;
    third comparing means (43) for comparing, for each aligned pair, the combined comparison scores generated by said combining means (43) for the aligned pair; and
    wherein said determining means (45) is operable to determine, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon a comparison result output by said third comparing means for the aligned pair.
  4. An apparatus according to any preceding claim, wherein said first and second comparing means (43) are operable to compare the first sequence sub-word unit and the second sequence sub-word unit respectively with each of the sub-word units in said set of sub-word units.
  5. An apparatus according to claim 3 or 4, wherein said first and second comparing means (43) are operable to provide comparison scores which are indicative of a probability of confusing the corresponding sub-word unit taken from the set of predetermined sub-word units as the sub-word unit in the aligned pair.
  6. An apparatus according to claim 5, wherein said combining means (43) is operable to combine the comparison scores in order to multiply the probabilities of confusing the corresponding sub-word unit taken from the set as the sub-word units in the aligned pair.
  7. An apparatus according to claim 6, wherein each of said sub-word units in said set of predetermined sub-word units has a predetermined probability of occurring within a sequence of sub-word units and wherein said combining means (43) is operable to weight each of said combined comparison scores in dependence upon the respective probability of occurrence for the sub-word unit of the set used to generate the combined comparison score.
  8. An apparatus according to claim 7, wherein said combining means (43) is operable to combine said comparison scores by calculating:

    $P(d^1_i \mid p_r)\,P(d^2_j \mid p_r)\,P(p_r)$

    where d1 i and d2 j are an aligned pair of first and second sequence sub-word units respectively; P(d1 i|pr) is the comparison score output by said first comparing means and is representative of the probability of confusing set sub-word unit pr as first sequence sub-word unit d1 i; P(d2 j|pr) is the comparison score output by said second comparing means and is representative of the probability of confusing set sub-word unit pr as second sequence sub-word unit d2 j; and P(pr) is a weight which represents the probability of set sub-word unit pr occurring in a sequence of sub-word units.
  9. An apparatus according to claim 8, wherein said third comparing means (43) is operable to identify the set sub-word unit which gives the maximum combined comparison score and wherein said determining means (45) is operable to determine said sub-word unit representative of the sub-word units in the aligned pair as being the sub-word unit which provides the maximum combined comparison score.
  10. An apparatus according to any of claims 6 to 9, wherein said comparison scores represent log probabilities and wherein said combining means (43) is operable to multiply said probabilities by adding the respective comparison scores.
  11. An apparatus according to any of claims 3 to 10, wherein each of the sub-word units in said first and second sequences of sub-word units belong to said set of predetermined sub-word units and wherein said first and second comparing means (43) are operable to provide said comparison scores using predetermined data which relate the sub-word units in said set to each other.
  12. An apparatus according to claim 11, wherein said predetermined data comprises, for each sub-word unit in the set of sub-word units, a probability for confusing that sub-word unit with each of the other sub-word units in the set of sub-word units.
  13. An apparatus according to any preceding claim, wherein said aligning means (43) comprises dynamic programming means for aligning said first and second sequences of sub-word units using a dynamic programming technique.
  14. An apparatus according to claim 13, wherein said dynamic programming means is operable to determine an optimum alignment between said first and second sequences of sub-word units.
  15. An apparatus according to any preceding claim, wherein each of said sub-word units represents a phoneme.
  16. An apparatus according to any preceding claim, wherein said receiving means (17) is operable to receive signals representative of a third spoken rendition of the new word, wherein said recognition means (17) is operable to compare the third rendition of the new word with said pre-stored sub-word unit models to generate a third sequence of sub-word units representative of said third rendition of the new word, wherein said aligning means (43) is operable to align simultaneously the sub-word units of the first, second and third sequences of sub-word units to generate a number of aligned groups of sub-word units, each aligned group comprising a sub-word unit from each of the renditions and wherein said determining means (45) is operable to determine said representative sequence of sub-word units in dependence upon the aligned groups of sub-word units.
  17. An apparatus according to any of claims 1 to 15, wherein said receiving means (17) is operable to receive signals representative of a third spoken rendition of the new word, wherein said recognition means (17) is operable to compare the third rendition of the new word with said pre-stored sub-word unit models to generate a third sequence of sub-word units representative of said third rendition of the new word and wherein said aligning means (43) is operable to align two sequences of sub-word units at a time.
  18. An apparatus according to any preceding claim, wherein said receiving means (17) is operable to receive signals representative of a plurality of spoken renditions of the new word, wherein said speech recognition means (17) is operable to compare the received spoken renditions with pre-stored sub-word unit models to generate a sequence of sub-word units for each of the plurality of spoken renditions, wherein said aligning means (43) is operable to align the sub-word units of the plurality of sequences of sub-word units to form a number of aligned groups of sub-word units, each group including a sub-word unit from each sequence; wherein said determining means (45) is operable to determine a sequence of sub-word units representative of the spoken renditions of the new word; and wherein the apparatus further comprises (i) means (45) for comparing each sequence of sub-word units with said representative sequence of sub-word units to determine a score representative of the similarity therebetween; (ii) means (45) for processing the scores output by the comparing means to identify clusters within the scores indicating one or more different pronunciations of the spoken rendition of the new word; and wherein said determining means (45) is operable to determine a sequence of sub-word units representative of the spoken renditions of the new word within each cluster.
  19. An apparatus according to claim 18, wherein said comparing means (45), processing means (45) and determining means (45) operate iteratively until a predetermined convergence criterion is met.
  20. An apparatus according to claim 18 or 19, further comprising means (45) for combining the sequences of sub-word units for each of the clusters into a sub-word unit lattice.
  21. An apparatus according to any preceding claim, wherein said generated sequence of sub-word units is representative of a new command to be added to a command dictionary of said speech recognition system.
  22. An apparatus according to any of claims 1 to 20, wherein said generated sequence of sub-word units is representative of a new word to be added to a word dictionary of a speech recognition system together with a text rendition of the new word.
  23. An apparatus for adding a new word and sub-word representation of the new word to a word dictionary of a speech recognition system, the apparatus comprising:
    means (41) for receiving a first sequence of sub-word units representative of a first spoken rendition of the new word and for receiving a second sequence of sub-word units representative of a second spoken rendition of the new word;
    means (43) for aligning sub-word units of the first sequence with sub-word units of the second sequence to form a number of aligned pairs of sub-word units;
    first comparing means (43) for comparing, for each aligned pair, the first sequence sub-word unit in the aligned pair with each of a plurality of sub-word units taken from a set of predetermined sub-word units, to provide a corresponding plurality of comparison scores representative of the similarities between the first sequence sub-word unit and the respective sub-word units of the set;
    second comparing means (43) for comparing, for each aligned pair, the second sequence sub-word unit in the aligned pair with each of said plurality of sub-word units from the set, to provide a further corresponding plurality of comparison scores representative of the similarities between said second sequence sub-word unit and the respective sub-word units of the set;
    means (45) for determining, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon the comparison scores generated by said first and second comparing means for the aligned pair, to determine a sequence of sub-word units representative of the spoken renditions of the new word; and
    means for adding the new word and the representative sequence of sub-word units to said word dictionary.
  24. A speech recognition system comprising:
    means (15) for receiving speech signals to be recognised;
    means (19) for storing sub-word unit models;
    means (17) for comparing received speech with the sub-word unit models to generate one or more sequences of sub-word units representative of the received speech signals;
    a dictionary (23) relating sequences of sub-word units to words or commands;
    a decoder (21) for processing the one or more sequences of sub-word units output by said comparing means (17) using the dictionary (23) to generate one or more words or commands corresponding to the received speech signals;
    means (31) for adding a new word or command and a sub-word unit representation of the new word or command to the dictionary (23); and
    means (20,29) for controllably connecting the output of said comparing means (17) to either said decoder (21) or to said means (31) for adding the new word or command;
    characterised in that said means (31) for adding the new word or command comprises:
    means (41) for receiving a first sequence of sub-word units representative of a first spoken rendition of the new word or command output by said comparing means (17) and for receiving a second sequence of sub-word units representative of a second spoken rendition of the new word or command output by said comparing means (17);
    means (43) for aligning sub-word units of the first sequence with sub-word units of the second sequence to form a number of aligned pairs of sub-word units;
    first comparing means (43) for comparing, for each aligned pair, the first sequence sub-word unit in the aligned pair with each of a plurality of sub-word units taken from a set of predetermined sub-word units, to provide a corresponding plurality of comparison scores representative of the similarities between the first sequence sub-word unit and the respective sub-word units of the set;
    second comparing means (43) for comparing, for each aligned pair, the second sequence sub-word unit in the aligned pair with each of said plurality of sub-word units from the set, to provide a further corresponding plurality of comparison scores representative of the similarities between said second sequence sub-word unit and the respective sub-word units of the set;
    means (45) for determining, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon the comparison scores generated by said first and second comparing means for the aligned pair, to determine a sequence of sub-word units representative of the spoken renditions of the new word or command;
    means for receiving a text rendition of the new word or a control action associated with the new command; and
    means (31) for adding said text rendition of the new word or the control action associated with the new command and the representative sequence of sub-word units to said dictionary (23).
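    By way of illustration only (not part of the claims), the following Python sketch shows the routing arrangement of claim 24, in which the recogniser output is controllably connected either to the decoder or to the word-adding unit. The class names, the exact-match decoder and the placeholder pronunciation combination are assumptions made for this sketch, not the patent's implementation.

```python
from typing import Dict, List

class ToyRecognizer:
    """Stand-in for the comparing means (17): here the 'audio' is already
    a phone string, so decoding is simply splitting it into sub-word units."""
    def decode(self, audio: str) -> List[str]:
        return list(audio)

class SpeechSystem:
    """Sketch of claim 24's routing: recogniser output goes either to the
    decoder (21) or to the word-adding unit (31), depending on mode."""
    def __init__(self) -> None:
        self.rec = ToyRecognizer()
        self.dictionary: Dict[str, List[str]] = {}  # word/command -> phones
        self.mode = "recognise"                     # or "add"
        self.pending: List[List[str]] = []          # collected renditions

    def process(self, audio: str, text: str = "") -> str:
        phones = self.rec.decode(audio)
        if self.mode == "recognise":
            # Decoder: naive exact match against the dictionary.
            for word, seq in self.dictionary.items():
                if seq == phones:
                    return word
            return "<unknown>"
        # Word-adding unit: collect two renditions, then store a pronunciation.
        # Here we simply keep the first rendition; the patent derives the
        # stored sequence from aligned pairs and confusion scores instead.
        self.pending.append(phones)
        if len(self.pending) == 2:
            self.dictionary[text] = self.pending[0]
            self.pending.clear()
        return ""

system = SpeechSystem()
system.mode = "add"
system.process("tomato", text="tomato")
system.process("tumato", text="tomato")
system.mode = "recognise"
print(system.process("tomato"))  # -> 'tomato'
```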
  25. A method of generating a sequence of sub-word units representative of a new word to be added to a dictionary of a speech recognition system, the method comprising the steps of:
    a first receiving step of receiving signals representative of first and second spoken renditions of the new word;
    a first comparing step of comparing the received first and second spoken renditions with pre-stored sub-word unit models to generate a first sequence of sub-word units representative of said first spoken rendition of the new word and a second sequence of sub-word units representative of said second spoken rendition of the new word;
    aligning sub-word units of the first sequence with sub-word units of the second sequence to form a number of aligned pairs of sub-word units;
    a second comparing step of comparing, for each aligned pair, the first sequence sub-word unit in the aligned pair with each of a plurality of sub-word units taken from a set of predetermined sub-word units, to provide a corresponding plurality of comparison scores representative of the similarities between the first sequence sub-word unit and the respective sub-word units of the set;
    a third comparing step of comparing, for each aligned pair, the second sequence sub-word unit in the aligned pair with each of said plurality of sub-word units from the set, to provide a further corresponding plurality of comparison scores representative of the similarities between said second sequence sub-word unit and the respective sub-word units of the set; and
    determining, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon the comparison scores generated in said second and third comparing steps for the aligned pair, to determine a sequence of sub-word units representative of the spoken renditions of the new word.
  26. A method according to claim 25, wherein said determining step determines said sequence of sub-word units by determining, for each aligned pair of sub-word units, a sub-word unit that is confusingly similar to the first and second sub-word units of the aligned pair.
  27. A method according to claim 25 or 26, further comprising the steps of:
    combining the comparison scores obtained when comparing the first and second sequence sub-word units in the aligned pair with the same sub-word unit from the set, to generate a plurality of combined comparison scores;
    a fourth comparing step of comparing, for each aligned pair, the combined comparison scores generated in said combining step for the aligned pair; and
    wherein said determining step determines, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon a comparison result output by said fourth comparing step for the aligned pair.
  28. A method according to claim 27, wherein said second and third comparing steps compare the first sequence sub-word unit and the second sequence sub-word unit respectively with each of the sub-word units in said set of sub-word units.
  29. A method according to claim 27 or 28, wherein said second and third comparing steps provide comparison scores which are indicative of a probability of confusing the corresponding sub-word unit taken from the set of predetermined sub-word units as the sub-word unit in the aligned pair.
  30. A method according to claim 29, wherein said combining step combines the comparison scores in order to multiply the probabilities of confusing the corresponding sub-word unit taken from the set as the sub-word units in the aligned pair.
  31. A method according to claim 30, wherein each of said sub-word units in said set of predetermined sub-word units has a predetermined probability of occurring within a sequence of sub-word units and wherein said combining step weights each of said combined comparison scores in dependence upon the respective probability of occurrence for the sub-word unit of the set used to generate the combined comparison score.
  32. A method according to claim 31, wherein said combining step combines said comparison scores by calculating:

    P(d_i^1 | p_r) · P(d_j^2 | p_r) · P(p_r)

    where d_i^1 and d_j^2 are an aligned pair of first and second sequence sub-word units respectively; P(d_i^1 | p_r) is the comparison score output by said second comparing step and is representative of the probability of confusing set sub-word unit p_r as first sequence sub-word unit d_i^1; P(d_j^2 | p_r) is the comparison score output by said third comparing step and is representative of the probability of confusing set sub-word unit p_r as second sequence sub-word unit d_j^2; and P(p_r) is a weight which represents the probability of set sub-word unit p_r occurring in a sequence of sub-word units.
  33. A method according to claim 32, wherein said fourth comparing step identifies the set sub-word unit which gives the maximum combined comparison score and wherein said determining step determines said sub-word unit representative of the sub-word units in the aligned pair as being the sub-word unit which provides the maximum combined comparison score.
  34. A method according to any of claims 30 to 33, wherein said comparison scores represent log probabilities and wherein said combining step multiplies said probabilities by adding the respective comparison scores.
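    A minimal Python sketch of the score combination of claims 32 to 34 follows: working in the log domain turns the product P(d_i^1|p_r)·P(d_j^2|p_r)·P(p_r) into a sum, and the set sub-word unit with the maximum combined score is selected (claim 33). The three-phone inventory, confusion probabilities and priors are invented for the example and are not taken from the patent.

```python
import math
from typing import Tuple

# Hypothetical three-phone inventory. CONFUSION[p][d] is P(d | p): the
# probability that the recogniser decodes phone d when the true set phone
# is p. PRIOR[p] is P(p), the chance of p occurring in a sequence.
PHONE_SET = ["a", "e", "i"]
CONFUSION = {
    "a": {"a": 0.80, "e": 0.15, "i": 0.05},
    "e": {"a": 0.20, "e": 0.70, "i": 0.10},
    "i": {"a": 0.10, "e": 0.20, "i": 0.70},
}
PRIOR = {"a": 0.40, "e": 0.35, "i": 0.25}

def best_phone(d1: str, d2: str) -> Tuple[str, float]:
    """For one aligned pair (d1, d2), evaluate
    log P(d1|p) + log P(d2|p) + log P(p) for every set phone p and return
    the maximiser: adding log probabilities multiplies the probabilities."""
    best, best_score = "", -math.inf
    for p in PHONE_SET:
        score = (math.log(CONFUSION[p][d1])
                 + math.log(CONFUSION[p][d2])
                 + math.log(PRIOR[p]))
        if score > best_score:
            best, best_score = p, score
    return best, best_score

# One rendition was decoded as 'a', the other as 'e': which set phone
# best explains both observations?
print(best_phone("a", "e"))
```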
  35. A method according to any of claims 27 to 34, wherein each of the sub-word units in said first and second sequences of sub-word units belong to said set of predetermined sub-word units and wherein said second and third comparing steps provide said comparison scores using predetermined data which relate the sub-word units in said set to each other.
  36. A method according to claim 35, wherein said predetermined data comprises, for each sub-word unit in the set of sub-word units, a probability for confusing that sub-word unit with each of the other sub-word units in the set of sub-word units.
  37. A method according to any of claims 25 to 36, wherein said aligning step uses a dynamic programming technique to align said first and second sequences of sub-word units.
  38. A method according to claim 37, wherein said dynamic programming technique determines an optimum alignment between said first and second sequences of sub-word units.
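    The alignment of claims 37 and 38 can be illustrated with a standard edit-distance style dynamic programme. The unit costs and the '-' gap symbol below are assumptions for the sketch; the claims do not bind the alignment to any particular cost scheme.

```python
from typing import List, Tuple

def align(seq1: List[str], seq2: List[str]) -> List[Tuple[str, str]]:
    """Dynamic-programming alignment of two phone sequences: unit cost for
    substitution, insertion and deletion, then a traceback recovers the
    aligned pairs. Gaps are shown as '-'."""
    n, m = len(seq1), len(seq2)
    # cost[i][j]: best cost of aligning seq1[:i] with seq2[:j]
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i
    for j in range(1, m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if seq1[i - 1] == seq2[j - 1] else 1
            cost[i][j] = min(cost[i - 1][j - 1] + sub,  # substitute / match
                             cost[i - 1][j] + 1,        # delete from seq1
                             cost[i][j - 1] + 1)        # insert into seq1
    # Traceback from the bottom-right corner to recover the optimum path.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (
                0 if seq1[i - 1] == seq2[j - 1] else 1):
            pairs.append((seq1[i - 1], seq2[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            pairs.append((seq1[i - 1], "-"))
            i -= 1
        else:
            pairs.append(("-", seq2[j - 1]))
            j -= 1
    return pairs[::-1]

print(align(list("kaet"), list("kat")))  # -> aligned pairs with one gap
```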
  39. A method according to any of claims 25 to 38, wherein each of said sub-word units represents a phoneme.
  40. A method according to any of claims 25 to 39, wherein said receiving step receives signals representative of a third spoken rendition of the new word, wherein said first comparing step compares the third rendition of the new word with said pre-stored sub-word unit models to generate a third sequence of sub-word units representative of said third rendition of the new word, wherein said aligning step simultaneously aligns the sub-word units of the first, second and third sequences of sub-word units to generate a number of aligned groups of sub-word units, each aligned group comprising a sub-word unit from each of the renditions and wherein said determining step determines said representative sequence of sub-word units in dependence upon the aligned groups of sub-word units.
  41. A method according to any of claims 25 to 39, wherein said receiving step receives signals representative of a third spoken rendition of the new word, wherein said first comparing step compares the third rendition of the new word with said pre-stored sub-word unit models to generate a third sequence of sub-word units representative of said third rendition of the new word and wherein said aligning step aligns two sequences of sub-word units at a time.
  42. A method according to any of claims 25 to 41, wherein said first receiving step receives signals representative of a plurality of spoken renditions of the new word, wherein said first comparing step compares the received spoken renditions with pre-stored sub-word unit models to generate a sequence of sub-word units for each of the plurality of spoken renditions, wherein said aligning step aligns the sub-word units of the plurality of sequences of sub-word units to form a number of aligned groups of sub-word units, each group including a sub-word unit from each sequence; wherein said determining step determines a sequence of sub-word units representative of the spoken renditions of the new word; and wherein the method further comprises: (i) a fourth comparing step of comparing each sequence of sub-word units with said representative sequence of sub-word units to determine a score representative of the similarity therebetween; and (ii) processing the scores output by the fourth comparing step to identify clusters within the scores indicating one or more different pronunciations of the spoken renditions of the new word; and wherein said determining step determines a sequence of sub-word units representative of the spoken renditions of the new word within each cluster.
  43. A method according to claim 42, wherein said fourth comparing step, processing step and determining step operate iteratively until a predetermined convergence criterion is met.
  44. A method according to claim 42 or 43, further comprising the step of combining the sequences of sub-word units for each of the clusters into a sub-word unit lattice.
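    A toy sketch of the clustering idea in claims 42 to 44: each decoded rendition is scored against the representative sequence and the renditions are split into clusters, from which per-cluster pronunciations (and ultimately a lattice, claim 44) could be derived. The position-match score, the threshold and the two-cluster split are simplifying assumptions; the claims use the comparison scores and iterate to convergence (claim 43).

```python
from typing import Dict, List

def score(seq1: List[str], seq2: List[str]) -> float:
    """Toy similarity: fraction of positions that agree. Assumes equal
    length; the real system would score DP-aligned sequences instead."""
    hits = sum(a == b for a, b in zip(seq1, seq2))
    return hits / max(len(seq1), len(seq2))

def cluster(seqs: List[List[str]], rep: List[str],
            thresh: float) -> Dict[str, List[List[str]]]:
    """Score every decoded rendition against the representative sequence
    and split the renditions into an 'in' cluster (close to the
    representative pronunciation) and an 'out' cluster (a possible
    alternative pronunciation)."""
    groups: Dict[str, List[List[str]]] = {"in": [], "out": []}
    for s in seqs:
        groups["in" if score(s, rep) >= thresh else "out"].append(s)
    return groups

renditions = [list("toma"), list("toma"), list("tuma")]
print(cluster(renditions, rep=list("toma"), thresh=0.8))
```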
  45. A method according to any of claims 25 to 44, wherein said generated sequence of sub-word units is representative of a new word to be added to a word dictionary of a speech recognition system.
  46. A method according to any of claims 25 to 44, wherein said generated sequence of sub-word units is representative of a new command to be added to a command dictionary of a speech recognition system.
  47. A method of adding a new word and sub-word representation of the new word to a word dictionary of a speech recognition system, the method comprising the steps of:
    receiving a first sequence of sub-word units representative of a first spoken rendition of the new word and receiving a second sequence of sub-word units representative of a second spoken rendition of the new word;
    aligning sub-word units of the first sequence with sub-word units of the second sequence to form a number of aligned pairs of sub-word units;
    a first comparing step of comparing, for each aligned pair, the first sequence sub-word unit in the aligned pair with each of a plurality of sub-word units taken from a set of predetermined sub-word units, to provide a corresponding plurality of comparison scores representative of the similarities between the first sequence sub-word unit and the respective sub-word units of the set;
    a second comparing step of comparing, for each aligned pair, the second sequence sub-word unit in the aligned pair with each of said plurality of sub-word units from the set, to provide a further corresponding plurality of comparison scores representative of the similarities between said second sequence sub-word unit and the respective sub-word units of the set;
    determining, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon the comparison scores generated in said first and second comparing steps for the aligned pair, to determine a sequence of sub-word units representative of the spoken renditions of the new word; and
    adding the new word and the representative sequence of sub-word units to said word dictionary.
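    Finally, an end-to-end sketch of the method of claim 47, assuming for brevity that the two decoded sequences have equal length so the aligned pairs are simply their position-wise pairs (otherwise a dynamic-programming alignment, as sketched earlier, would be used). The confusion model and the uniform prior are invented for the example.

```python
import math
from typing import Dict, List

PHONE_SET = ["t", "o", "u", "m", "a"]

def p_conf(true: str, decoded: str) -> float:
    """Made-up confusion model P(decoded | true): heavy diagonal, with the
    remaining mass spread evenly over the other phones."""
    return 0.8 if true == decoded else 0.2 / (len(PHONE_SET) - 1)

def add_word(dictionary: Dict[str, List[str]], text: str,
             seq1: List[str], seq2: List[str]) -> None:
    """For each aligned pair, pick the set phone maximising
    P(d1|p) * P(d2|p) * P(p) (computed as a sum of logs), then store the
    resulting sequence in the word dictionary against the word's text."""
    rep = []
    for d1, d2 in zip(seq1, seq2):
        best = max(PHONE_SET,
                   key=lambda p: math.log(p_conf(p, d1))
                               + math.log(p_conf(p, d2))
                               + math.log(1.0 / len(PHONE_SET)))
        rep.append(best)
    dictionary[text] = rep

lexicon: Dict[str, List[str]] = {}
add_word(lexicon, "tomato", list("tomato"), list("tumato"))
print(lexicon)  # one pronunciation reconciled from two renditions
```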
  48. A speech recognition method comprising the steps of:
    receiving speech signals to be recognised;
    storing sub-word unit models;
    comparing received speech signals with the sub-word unit models to generate one or more sequences of sub-word units representative of the received speech signals;
    storing a dictionary relating sequences of sub-word units to words or to commands;
    processing the one or more sequences of sub-word units output by said comparing step using the stored dictionary to generate one or more words or commands corresponding to the received speech signals;
    adding a new word or command and a sub-word representation of the new word or command to the dictionary; and
    controllably feeding the output of said comparing step to either said processing step or said adding step;
    characterised in that said adding step comprises:
    receiving a first sequence of sub-word units representative of a first spoken rendition of the new word or command output by said comparing step;
    receiving a second sequence of sub-word units representative of a second spoken rendition of the new word or command output by said comparing step;
    aligning sub-word units of the first sequence with sub-word units of the second sequence to form a number of aligned pairs of sub-word units;
    a first comparing step of comparing, for each aligned pair, the first sequence sub-word unit in the aligned pair with each of a plurality of sub-word units taken from a set of predetermined sub-word units, to provide a corresponding plurality of comparison scores representative of the similarities between the first sequence sub-word unit and the respective sub-word units of the set;
    a second comparing step of comparing, for each aligned pair, the second sequence sub-word unit in the aligned pair with each of said plurality of sub-word units from the set, to provide a further corresponding plurality of comparison scores representative of the similarities between said second sequence sub-word unit and the respective sub-word units of the set;
    determining, for each aligned pair of sub-word units, a sub-word unit representative of the sub-word units in the aligned pair in dependence upon the comparison scores generated in said first and second comparing steps for the aligned pair, to determine a sequence of sub-word units representative of the spoken renditions of the new word or command;
    receiving a text rendition of the new word or a control action associated with the new command; and
    adding said text rendition of the new word or said control action associated with the new command and the representative sequence of sub-word units to said dictionary.
  49. A storage medium storing processor implementable instructions to control a processor to implement the method of any one of claims 25 to 48.
  50. Processor implementable instructions to control a processor to implement the method of any one of claims 25 to 48.
EP01309137A 2000-11-07 2001-10-29 Pronunciation of new input words for speech processing Expired - Lifetime EP1205908B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0027178 2000-11-07
GBGB0027178.3A GB0027178D0 (en) 2000-11-07 2000-11-07 Speech processing system

Publications (3)

Publication Number Publication Date
EP1205908A2 EP1205908A2 (en) 2002-05-15
EP1205908A3 EP1205908A3 (en) 2003-11-19
EP1205908B1 true EP1205908B1 (en) 2007-02-21

Family ID=9902706

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01309137A Expired - Lifetime EP1205908B1 (en) 2000-11-07 2001-10-29 Pronunciation of new input words for speech processing

Country Status (5)

Country Link
US (1) US7337116B2 (en)
EP (1) EP1205908B1 (en)
JP (1) JP2002156995A (en)
DE (1) DE60126722T2 (en)
GB (1) GB0027178D0 (en)

Families Citing this family (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7310600B1 (en) 1999-10-28 2007-12-18 Canon Kabushiki Kaisha Language recognition using a similarity measure
US7295982B1 (en) * 2001-11-19 2007-11-13 At&T Corp. System and method for automatic verification of the understandability of speech
US20030220788A1 (en) * 2001-12-17 2003-11-27 Xl8 Systems, Inc. System and method for speech recognition and transcription
US6990445B2 (en) * 2001-12-17 2006-01-24 Xl8 Systems, Inc. System and method for speech recognition and transcription
US20030115169A1 (en) * 2001-12-17 2003-06-19 Hongzhuan Ye System and method for management of transcribed documents
US7398209B2 (en) * 2002-06-03 2008-07-08 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7693720B2 (en) * 2002-07-15 2010-04-06 Voicebox Technologies, Inc. Mobile systems and methods for responding to natural language speech utterance
WO2004029931A1 (en) * 2002-09-23 2004-04-08 Infineon Technologies Ag Voice recognition device, control device and method for computer-assisted completion of an electronic dictionary for a voice recognition device
DE10244169A1 (en) * 2002-09-23 2004-04-01 Infineon Technologies Ag Speech recognition device, control device and method for computer-aided supplementing of an electronic dictionary for a speech recognition device
JP4072718B2 (en) * 2002-11-21 2008-04-09 ソニー株式会社 Audio processing apparatus and method, recording medium, and program
WO2005010866A1 (en) * 2003-07-23 2005-02-03 Nexidia Inc. Spoken word spotting queries
US8577681B2 (en) * 2003-09-11 2013-11-05 Nuance Communications, Inc. Pronunciation discovery for spoken words
US8954325B1 (en) * 2004-03-22 2015-02-10 Rockstar Consortium Us Lp Speech recognition in automated information services systems
GB0426347D0 (en) 2004-12-01 2005-01-05 Ibm Methods, apparatus and computer programs for automatic speech recognition
US7640160B2 (en) * 2005-08-05 2009-12-29 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7620549B2 (en) 2005-08-10 2009-11-17 Voicebox Technologies, Inc. System and method of supporting adaptive misrecognition in conversational speech
US7949529B2 (en) 2005-08-29 2011-05-24 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US7634409B2 (en) * 2005-08-31 2009-12-15 Voicebox Technologies, Inc. Dynamic speech sharpening
US7512574B2 (en) * 2005-09-30 2009-03-31 International Business Machines Corporation Consistent histogram maintenance using query feedback
KR100717385B1 (en) * 2006-02-09 2007-05-11 삼성전자주식회사 Recognition confidence measuring by lexical distance between candidates
US7693717B2 (en) * 2006-04-12 2010-04-06 Custom Speech Usa, Inc. Session file modification with annotation using speech recognition or text to speech
US8073681B2 (en) 2006-10-16 2011-12-06 Voicebox Technologies, Inc. System and method for a cooperative conversational voice user interface
US20080126093A1 (en) * 2006-11-28 2008-05-29 Nokia Corporation Method, Apparatus and Computer Program Product for Providing a Language Based Interactive Multimedia System
US7818176B2 (en) 2007-02-06 2010-10-19 Voicebox Technologies, Inc. System and method for selecting and presenting advertisements based on natural language processing of voice-based input
EP2135231A4 (en) * 2007-03-01 2014-10-15 Adapx Inc System and method for dynamic learning
US8024191B2 (en) * 2007-10-31 2011-09-20 At&T Intellectual Property Ii, L.P. System and method of word lattice augmentation using a pre/post vocalic consonant distinction
US8140335B2 (en) 2007-12-11 2012-03-20 Voicebox Technologies, Inc. System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US8019604B2 (en) * 2007-12-21 2011-09-13 Motorola Mobility, Inc. Method and apparatus for uniterm discovery and voice-to-voice search on mobile device
US8015005B2 (en) * 2008-02-15 2011-09-06 Motorola Mobility, Inc. Method and apparatus for voice searching for stored content using uniterm discovery
GB2471811B (en) * 2008-05-09 2012-05-16 Fujitsu Ltd Speech recognition dictionary creating support device,computer readable medium storing processing program, and processing method
US9202460B2 (en) * 2008-05-14 2015-12-01 At&T Intellectual Property I, Lp Methods and apparatus to generate a speech recognition library
US9077933B2 (en) 2008-05-14 2015-07-07 At&T Intellectual Property I, L.P. Methods and apparatus to generate relevance rankings for use by a program selector of a media presentation system
US9305548B2 (en) 2008-05-27 2016-04-05 Voicebox Technologies Corporation System and method for an integrated, multi-modal, multi-device natural language voice services environment
US8589161B2 (en) 2008-05-27 2013-11-19 Voicebox Technologies, Inc. System and method for an integrated, multi-modal, multi-device natural language voice services environment
US8326637B2 (en) * 2009-02-20 2012-12-04 Voicebox Technologies, Inc. System and method for processing multi-modal device interactions in a natural language voice services environment
JP5377430B2 (en) * 2009-07-08 2013-12-25 本田技研工業株式会社 Question answering database expansion device and question answering database expansion method
US9171541B2 (en) * 2009-11-10 2015-10-27 Voicebox Technologies Corporation System and method for hybrid processing in a natural language voice services environment
WO2011059997A1 (en) 2009-11-10 2011-05-19 Voicebox Technologies, Inc. System and method for providing a natural language content dedication service
US9275640B2 (en) * 2009-11-24 2016-03-01 Nexidia Inc. Augmented characterization for speech recognition
JP5633042B2 (en) * 2010-01-28 2014-12-03 本田技研工業株式会社 Speech recognition apparatus, speech recognition method, and speech recognition robot
US9576570B2 (en) * 2010-07-30 2017-02-21 Sri International Method and apparatus for adding new vocabulary to interactive translation and dialogue systems
US8527270B2 (en) 2010-07-30 2013-09-03 Sri International Method and apparatus for conducting an interactive dialogue
US9159319B1 (en) * 2012-12-03 2015-10-13 Amazon Technologies, Inc. Keyword spotting with competitor models
US20150279351A1 (en) * 2012-12-19 2015-10-01 Google Inc. Keyword detection based on acoustic alignment
WO2014176750A1 (en) * 2013-04-28 2014-11-06 Tencent Technology (Shenzhen) Company Limited Reminder setting method, apparatus and system
US20140350933A1 (en) * 2013-05-24 2014-11-27 Samsung Electronics Co., Ltd. Voice recognition apparatus and control method thereof
US9390708B1 (en) * 2013-05-28 2016-07-12 Amazon Technologies, Inc. Low latency and memory efficient keywork spotting
US9837070B2 (en) 2013-12-09 2017-12-05 Google Inc. Verification of mappings between phoneme sequences and words
US9898459B2 (en) 2014-09-16 2018-02-20 Voicebox Technologies Corporation Integration of domain information into state transitions of a finite state transducer for natural language processing
EP3195145A4 (en) 2014-09-16 2018-01-24 VoiceBox Technologies Corporation Voice commerce
CN107003999B (en) 2014-10-15 2020-08-21 声钰科技 System and method for subsequent response to a user's prior natural language input
US10614799B2 (en) 2014-11-26 2020-04-07 Voicebox Technologies Corporation System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance
US10431214B2 (en) 2014-11-26 2019-10-01 Voicebox Technologies Corporation System and method of determining a domain and/or an action related to a natural language input
GB2544070B (en) * 2015-11-04 2021-12-29 The Chancellor Masters And Scholars Of The Univ Of Cambridge Speech processing system and method
US10331784B2 (en) 2016-07-29 2019-06-25 Voicebox Technologies Corporation System and method of disambiguating natural language processing requests
US10586537B2 (en) * 2017-11-30 2020-03-10 International Business Machines Corporation Filtering directive invoking vocal utterances
JP7131518B2 (en) * 2019-09-20 2022-09-06 カシオ計算機株式会社 Electronic device, pronunciation learning method, server device, pronunciation learning processing system and program
CN115101063B (en) * 2022-08-23 2023-01-06 深圳市友杰智新科技有限公司 Low-computation-power voice recognition method, device, equipment and medium

Family Cites Families (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4227176A (en) 1978-04-27 1980-10-07 Dialog Systems, Inc. Continuous speech recognition method
JPS59226400A (en) 1983-06-07 1984-12-19 松下電器産業株式会社 Voice recognition equipment
US5131043A (en) * 1983-09-05 1992-07-14 Matsushita Electric Industrial Co., Ltd. Method of and apparatus for speech recognition wherein decisions are made based on phonemes
FR2554623B1 (en) * 1983-11-08 1986-08-14 Texas Instruments France SPEAKER-INDEPENDENT SPEECH ANALYSIS PROCESS
US4980918A (en) 1985-05-09 1990-12-25 International Business Machines Corporation Speech recognition system with efficient storage and rapid assembly of phonological graphs
US4903305A (en) * 1986-05-12 1990-02-20 Dragon Systems, Inc. Method for representing word models for use in speech recognition
JP2739945B2 (en) 1987-12-24 1998-04-15 株式会社東芝 Voice recognition method
US5075896A (en) 1989-10-25 1991-12-24 Xerox Corporation Character and phoneme recognition based on probability clustering
US6236964B1 (en) 1990-02-01 2001-05-22 Canon Kabushiki Kaisha Speech recognition apparatus and method for matching inputted speech and a word generated from stored referenced phoneme data
US5136655 (en) 1990-03-26 1992-08-04 Hewlett-Packard Company Method and apparatus for indexing and retrieving audio-video data
US5202952A (en) 1990-06-22 1993-04-13 Dragon Systems, Inc. Large-vocabulary continuous speech prefiltering and processing system
US5390278A (en) 1991-10-08 1995-02-14 Bell Canada Phoneme based speech recognition
US5333275A (en) 1992-06-23 1994-07-26 Wheatley Barbara J System and method for time aligning speech
US5625554A (en) 1992-07-20 1997-04-29 Xerox Corporation Finite-state transduction of related word forms for text indexing and retrieval
DE69333422T2 (en) 1992-07-31 2004-12-16 International Business Machines Corp. Finding strings in a database of strings
EP0597798A1 (en) 1992-11-13 1994-05-18 International Business Machines Corporation Method and system for utilizing audible search patterns within a multimedia presentation
AU5803394A (en) 1992-12-17 1994-07-04 Bell Atlantic Network Services, Inc. Mechanized directory assistance
US5467425A (en) 1993-02-26 1995-11-14 International Business Machines Corporation Building scalable N-gram language models using maximum likelihood maximum entropy N-gram models
US5787414A (en) 1993-06-03 1998-07-28 Kabushiki Kaisha Toshiba Data retrieval system using secondary information of primary data to be retrieved as retrieval key
DE69423838T2 (en) 1993-09-23 2000-08-03 Xerox Corp Semantic match event filtering for speech recognition and signal translation applications
JP2986345B2 (en) 1993-10-18 1999-12-06 インターナショナル・ビジネス・マシーンズ・コーポレイション Voice recording indexing apparatus and method
SE513456C2 (en) * 1994-05-10 2000-09-18 Telia Ab Method and device for speech to text conversion
IT1272259B (en) 1994-05-30 1997-06-16 Texas Instruments Italia Spa PROCEDURE AND APPARATUS FOR THE RECOGNITION OF CHARACTERS
JP3260979B2 (en) 1994-07-15 2002-02-25 株式会社リコー Character recognition method
US5799267A (en) 1994-07-22 1998-08-25 Siegel; Steven H. Phonic engine
US5737723A (en) 1994-08-29 1998-04-07 Lucent Technologies Inc. Confusable word detection in speech recognition
US5835667A (en) 1994-10-14 1998-11-10 Carnegie Mellon University Method and apparatus for creating a searchable digital video library and a system and method of using such a library
NZ294659A (en) * 1994-11-01 1999-01-28 British Telecomm Method of and apparatus for generating a vocabulary from an input speech signal
US5680605A (en) 1995-02-07 1997-10-21 Torres; Robert J. Method and apparatus for searching a large volume of data with a pointer-based device in a data processing system
US5999902A (en) 1995-03-07 1999-12-07 British Telecommunications Public Limited Company Speech recognition incorporating a priori probability weighting factors
CA2170669A1 (en) 1995-03-24 1996-09-25 Fernando Carlos Neves Pereira Grapheme-to phoneme conversion with weighted finite-state transducers
US5675706A (en) 1995-03-31 1997-10-07 Lucent Technologies Inc. Vocabulary independent discriminative utterance verification for non-keyword rejection in subword based speech recognition
US5729741A (en) 1995-04-10 1998-03-17 Golden Enterprises, Inc. System for storage and retrieval of diverse types of information obtained from different media sources which includes video, audio, and text transcriptions
DE69607913T2 (en) * 1995-05-03 2000-10-05 Koninkl Philips Electronics Nv METHOD AND DEVICE FOR VOICE RECOGNITION ON THE BASIS OF NEW WORD MODELS
JPH0916598A (en) 1995-07-03 1997-01-17 Fujitsu Ltd System and method for character string correction using error pattern
US5721939A (en) 1995-08-03 1998-02-24 Xerox Corporation Method and apparatus for tokenizing text
US5684925A (en) 1995-09-08 1997-11-04 Matsushita Electric Industrial Co., Ltd. Speech representation by feature-based word prototypes comprising phoneme targets having reliable high similarity
US5737489A (en) 1995-09-15 1998-04-07 Lucent Technologies Inc. Discriminative utterance verification for connected digits recognition
JPH09128396A (en) 1995-11-06 1997-05-16 Hitachi Ltd Preparation method for bilingual dictionary
US6567778B1 (en) 1995-12-21 2003-05-20 Nuance Communications Natural language speech recognition using slot semantic confidence scores related to their word recognition confidence scores
US5960395A (en) * 1996-02-09 1999-09-28 Canon Kabushiki Kaisha Pattern matching method, apparatus and computer readable memory medium for speech recognition using dynamic programming
GB2303955B (en) 1996-09-24 1997-05-14 Allvoice Computing Plc Data processing method and apparatus
US5870740A (en) 1996-09-30 1999-02-09 Apple Computer, Inc. System and method for improving the ranking of information retrieval results for short queries
US5708759A (en) 1996-11-19 1998-01-13 Kemeny; Emanuel S. Speech recognition using phoneme waveform parameters
US6172675B1 (en) 1996-12-05 2001-01-09 Interval Research Corporation Indirect manipulation of data using temporally related data, with particular application to manipulation of audio or audiovisual data
US5852822A (en) 1996-12-09 1998-12-22 Oracle Corporation Index-only tables with nested group keys
EP0849723A3 (en) 1996-12-20 1998-12-30 ATR Interpreting Telecommunications Research Laboratories Speech recognition apparatus equipped with means for removing erroneous candidate of speech recognition
WO1998047084A1 (en) 1997-04-17 1998-10-22 Sharp Kabushiki Kaisha A method and system for object-based video description and linking
WO1999005681A1 (en) 1997-07-23 1999-02-04 Siemens Aktiengesellschaft Process for storing search parameters of an image sequence and access to an image stream in said image sequence
US6487532B1 (en) 1997-09-24 2002-11-26 Scansoft, Inc. Apparatus and method for distinguishing similar-sounding utterances speech recognition
US6026398A (en) 1997-10-16 2000-02-15 Imarket, Incorporated System and methods for searching and matching databases
US6061679A (en) 1997-11-25 2000-05-09 International Business Machines Corporation Creating and searching a data structure ordered by ranges of key masks associated with the data structure
US5983177A (en) * 1997-12-18 1999-11-09 Nortel Networks Corporation Method and apparatus for obtaining transcriptions from multiple training utterances
US6182039B1 (en) * 1998-03-24 2001-01-30 Matsushita Electric Industrial Co., Ltd. Method and apparatus using probabilistic language model based on confusable sets for speech recognition
US6243680B1 (en) * 1998-06-15 2001-06-05 Nortel Networks Limited Method and apparatus for obtaining a transcription of phrases through text and spoken utterances
US6321226B1 (en) 1998-06-30 2001-11-20 Microsoft Corporation Flexible keyboard searching
US6192337B1 (en) 1998-08-14 2001-02-20 International Business Machines Corporation Apparatus and methods for rejecting confusible words during training associated with a speech recognition system
US6490563B2 (en) 1998-08-17 2002-12-03 Microsoft Corporation Proofreading with text to speech feedback
DE19842404A1 (en) 1998-09-16 2000-03-23 Philips Corp Intellectual Pty Procedure for estimating probabilities of occurrence for language vocabulary elements
WO2000031723A1 (en) * 1998-11-25 2000-06-02 Sony Electronics, Inc. Method and apparatus for very large vocabulary isolated word recognition in a parameter sharing speech recognition system
AU777693B2 (en) 1999-03-05 2004-10-28 Canon Kabushiki Kaisha Database annotation and retrieval
GB2349260B (en) * 1999-04-23 2003-05-28 Canon Kk Training apparatus and method
US6662180B1 (en) 1999-05-12 2003-12-09 Matsushita Electric Industrial Co., Ltd. Method for searching in large databases of automatically recognized text
US6567816B1 (en) 2000-03-07 2003-05-20 Paramesh Sampatrai Desai Method, system, and program for extracting data from database records using dynamic code
US6535850B1 (en) 2000-03-09 2003-03-18 Conexant Systems, Inc. Smart training and smart scoring in SD speech recognition system with user defined vocabulary

Also Published As

Publication number Publication date
EP1205908A3 (en) 2003-11-19
EP1205908A2 (en) 2002-05-15
DE60126722D1 (en) 2007-04-05
DE60126722T2 (en) 2007-10-25
US20020120447A1 (en) 2002-08-29
JP2002156995A (en) 2002-05-31
US7337116B2 (en) 2008-02-26
GB0027178D0 (en) 2000-12-27

Similar Documents

Publication Publication Date Title
EP1205908B1 (en) Pronunciation of new input words for speech processing
US6801891B2 (en) Speech processing system
US7054812B2 (en) Database annotation and retrieval
US10210862B1 (en) Lattice decoding and result confirmation using recurrent neural networks
Novak et al. Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework
US5680510A (en) System and method for generating and using context dependent sub-syllable models to recognize a tonal language
Liu et al. Two efficient lattice rescoring methods using recurrent neural network language models
US7212968B1 (en) Pattern matching method and apparatus
US7310600B1 (en) Language recognition using a similarity measure
US6873993B2 (en) Indexing method and apparatus
EP0805434B1 (en) Method and system for speech recognition using continuous density hidden Markov models
EP3948850B1 (en) System and method for end-to-end speech recognition with triggered attention
US8386254B2 (en) Multi-class constrained maximum likelihood linear regression
EP2192575A1 (en) Speech recognition based on a multilingual acoustic model
US20220262352A1 (en) Improving custom keyword spotting system accuracy with text-to-speech-based data augmentation
US5706397A (en) Speech recognition system with multi-level pruning for acoustic matching
US20150179169A1 (en) Speech Recognition By Post Processing Using Phonetic and Semantic Information
EP0953968B1 (en) Speaker and environment adaptation based on eigenvoices including maximum likelihood method
WO1996002051A1 (en) Method and apparatus for creating models of chinese sounds including tones
US5764851A (en) Fast speech recognition method for mandarin words
CN108806691B (en) Voice recognition method and system
Jiang et al. A minimax search algorithm for robust continuous speech recognition
JP3104900B2 (en) Voice recognition method
Amdal et al. Pronunciation variation modeling in automatic speech recognition
JP2731133B2 (en) Continuous speech recognition device

Legal Events

PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase. Free format text: ORIGINAL CODE: 0009012.
AK: Designated contracting states. Kind code of ref document: A2. Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR.
AX: Request for extension of the European patent. Free format text: AL; LT; LV; MK; RO; SI.
PUAL: Search report despatched. Free format text: ORIGINAL CODE: 0009013.
AK: Designated contracting states. Kind code of ref document: A3. Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR.
AX: Request for extension of the European patent. Extension state: AL LT LV MK RO SI.
17P: Request for examination filed. Effective date: 20040407.
AKX: Designation fees paid. Designated state(s): DE FR GB.
17Q: First examination report despatched. Effective date: 20050429.
GRAP: Despatch of communication of intention to grant a patent. Free format text: ORIGINAL CODE: EPIDOSNIGR1.
GRAS: Grant fee paid. Free format text: ORIGINAL CODE: EPIDOSNIGR3.
GRAA: (Expected) grant. Free format text: ORIGINAL CODE: 0009210.
AK: Designated contracting states. Kind code of ref document: B1. Designated state(s): DE FR GB.
REG: Reference to a national code. Ref country code: GB. Ref legal event code: FG4D.
REF: Corresponds to ref document number 60126722. Country of ref document: DE. Date of ref document: 20070405. Kind code of ref document: P.
ET: FR: translation filed.
PLBE: No opposition filed within time limit. Free format text: ORIGINAL CODE: 0009261.
STAA: Information on the status of an EP patent application or granted EP patent. Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT.
26N: No opposition filed. Effective date: 20071122.
PGFP: Annual fee paid to national office [announced via postgrant information from national office to EPO]. Ref country code: FR. Payment date: 20121127. Year of fee payment: 12.
REG: Reference to a national code. Ref country code: FR. Ref legal event code: ST. Effective date: 20140630.
PG25: Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Ref country code: FR. Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES. Effective date: 20131031.
PGFP: Annual fee paid to national office [announced via postgrant information from national office to EPO]. Ref country code: GB. Payment date: 20151026. Year of fee payment: 15. Ref country code: DE. Payment date: 20151031. Year of fee payment: 15.
REG: Reference to a national code. Ref country code: DE. Ref legal event code: R119. Ref document number: 60126722. Country of ref document: DE.
GBPC: GB: European patent ceased through non-payment of renewal fee. Effective date: 20161029.
PG25: Lapsed in a contracting state [announced via postgrant information from national office to EPO]. Ref country code: DE. Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES. Effective date: 20170503. Ref country code: GB. Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES. Effective date: 20161029.