EP0562138A1 - Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary - Google Patents

Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary

Info

Publication number
EP0562138A1
EP0562138A1 (application EP92105090A)
Authority
EP
European Patent Office
Prior art keywords
word
phonetic
probability
language
transcription
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP92105090A
Other languages
German (de)
French (fr)
Inventor
Marco Ferretti
Anna Maria Mazza
Stefano Scarci
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IBM Semea SpA
International Business Machines Corp
Original Assignee
IBM Semea SpA
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by IBM Semea SpA and International Business Machines Corp
Priority to EP92105090A
Publication of EP0562138A1
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/18 - Speech classification or search using natural language modelling
    • G10L15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/187 - Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/14 - Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 - Hidden Markov Models [HMMs]
    • G10L15/144 - Training of HMMs
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 - Training
    • G10L2015/0631 - Creating reference templates; Clustering


Abstract

A method and apparatus are described for generating a word model to be used in a speech recognizer, based on the spelling of the word and one utterance of the word by the user. In a first step, a list of all the possible phonetic transcriptions for the new word is produced using phonotactical knowledge of the language. In a second step, a score is computed for each transcription of the list by combining the score obtained by matching the utterance against the Hidden Markov Model derived from that transcription with the a priori probability of the phonetic transcription. The phonetic transcription with the highest score is selected as the correct one.

Description

  • The present invention relates to a method and an apparatus for automatically generating Markov word models of new words to be added to a predefined vocabulary. The Markov word models are primarily for use in speech recognition applications; they may, however, be employed in phonetic applications where a description of a word's pronunciation is sought.
  • In many approaches to speech recognition, Hidden Markov Models (HMMs) are used as models for each word in a predefined speech recognizer vocabulary. The HMM is a very well-known technique for representing acoustic word models for speech recognition.
  • This technique is described in various articles such as: "Continuous Speech Recognition by Statistical Methods" by F. Jelinek, Proceedings of the IEEE, vol. 64, No. 4, 1976, pages 532-556, and "A Maximum Likelihood Approach to Continuous Speech Recognition" by L.R. Bahl, F. Jelinek and R.L. Mercer, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, March 1983, pages 179-190.
  • In a speech recognition system, a matching process is performed to determine which word or words in the vocabulary are the most likely to have produced the string of labels generated by the acoustic processor of the system.
  • The acoustic processor is a device able to transform the speech waveform input into a string of fenemes, also called labels. The labels are selected from an alphabet typically containing 200 different labels. The generation of such labels has been discussed in articles of the prior art and, in particular, in European Patent No. 0 179 280 entitled "Nonlinear Signal Processing in a Speech Recognition System". The matching procedure is described in European Patent No. 0 238 689 entitled "Method for performing Acoustic Matching in a Speech Recognition System".
  • Acoustic matching is performed by characterizing each word in a vocabulary by a sequence of Markov model phone machines and determining the respective likelihood of each word representing a sequence of phone machines. Each sequence of phone machines representing a word is called a word model, sometimes referred to as a word baseform or simply a baseform.
  • In a speech recognition system, two different types of word baseforms are normally used: phonetic baseforms and fenemic baseforms. Either phonetic baseforms or fenemic baseforms may be used in acoustic matching or for other speech recognition purposes.
  • Phonetic baseforms are built concatenating phonetic Markov models. Typically such Markov models have a one-to-one correspondence with phonetic elements. The Markov models corresponding to the sequence of phonetic elements of a word can be concatenated to form a phonetic Markov word baseform for the word. The generation of phonetic baseforms is described in European Patent No. 0 238 695 entitled "Automatic generation of simple Markov model stunted baseforms for words in a vocabulary".
  • Fenemic baseforms are constructed concatenating fenemic Markov word models. These models are described in European Patent No. 0 238 693 entitled "Speech Recognition System and Method Using Statistical Models for Words". For each of the 200 fenemes in a fenemic alphabet, a Markov model is provided which indicates the probability of a particular feneme producing zero, one, or more fenemes (as generated by the acoustic processor) when spoken. With the fenemic baseforms the number of phone machines in each word baseform is approximately equal to the number of fenemes per word.
  • In the probabilistic approach to speech recognition, the vocabulary of the recognizer contains a predefined set of words, usually several thousand. Generally, the set of words in the vocabulary is chosen according to the application of the speech recognition system, in order to minimize the number of words that are uttered by the user during normal use of the system but are not included in the vocabulary. The percentage of words uttered by the user and included in the vocabulary is called the vocabulary coverage.
  • The vocabulary coverage is strongly dependent on the vocabulary size and type of application. For vocabularies containing from 10,000 to 20,000 words, the coverage typically ranges from 80% to 99% according to the type of lexicon involved. Due to real-time constraints, the vocabulary of the speech recognizer for the Italian language is generally limited to about 20,000 words as described in the article "A 20,000-Word Speech Recognizer of Italian" by M. Brandetti, M. Ferretti, A. Fusi, G. Maltese, S. Scarci, G. Vitillaro, Recent Issues in Pattern Analysis and Recognition, Lecture Notes in Computer Science, Springer-Verlag, 1989.
  • A vocabulary containing 20,000 words allows high coverage (usually about 96-97%); nevertheless, users of a speech recognition system still need the system to recognize words that are not included in the vocabulary but are typical of their activity and environment (last names, street names, jargon and so on). Moreover, it is not certain that even a larger vocabulary would include such words. In many applications this drawback can be a severe constraint on the practical usability of the speech recognizer. It is therefore necessary to provide a feature allowing a user to add new words to the predefined vocabulary of the recognizer. The new words to be added are usually called "add-words".
  • As described before, each word in the vocabulary is represented by statistical models that must be created a priori and are part of the recognizer. There are two models for each word: a phonetic word baseform and a fenemic word baseform. To add a new word to the vocabulary of the speech recognizer, either the phonetic or the fenemic word models must be supplied. Starting from the correct phonetic word model, a technique exists to automatically construct the corresponding fenemic word model. This technique is described in the article "Automatic Construction of Fenemic Markov Word Models for Speech Recognition" by M. Ferretti, S. Scarci, IBM Technical Disclosure Bulletin, vol. 33, No. 6B, November 1990, pages 233-237.
  • The problem of producing the acoustic models needed to add a new word to the vocabulary can thus be reduced to the problem of producing the correct phonetic transcription for the word. Several solutions to this problem have been proposed.
  • The easiest solution is Dictionary Look-Up: when a new word has to be added, the correct phonetic transcription is retrieved from a background dictionary. This solution has several limitations, one of which is that it is not conceivable to have a background dictionary containing all the possible add-words (e.g. proper names).
  • Another solution is to use a rule-based system that, starting from the spelling of the word, determines how the word is pronounced. Several systems based on this idea have been built, but unfortunately the accuracy of the phonetic transcription produced is too low to make this technique feasible for speech recognition systems. This solution has an inherent limitation: it is very difficult to build a set of rules able to solve all the ambiguities in the grapheme-to-phoneme translation, since for many words the pronunciation depends on the linguistic meaning, and this kind of ambiguity cannot be solved by a rule-based system.
  • More sophisticated and reliable techniques have been developed by researchers of the IBM T.J. Watson Research Center. Such techniques are based on the idea of finding a statistical set of Spelling-to-Sound Rules, as described in the article "Automatic Determination of the Pronunciation of Words from their Spelling" by L.R. Bahl, P.F. Brown, P.V. De Souza, R.L. Mercer, IBM Technical Disclosure Bulletin, No. 10B, March 1990, pages 19-23. The rules are used as a language model to decode a spoken utterance as the add-word, using a technique similar to the stack search described in the above-mentioned article "A Maximum Likelihood Approach to Continuous Speech Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, and in the article "The Development of an Experimental Discrete Dictation Recognizer" by F. Jelinek, Proceedings of the IEEE, Vol. 73, No. 11, Nov. 1985, pages 1616-1624.
  • In this approach, it is assumed that no a priori constraint on the set of possible phoneme sequences representing the add-word pronunciation can be provided. A search strategy is used to find an optimal phonetic transcription among all the possible ones. The drawback of this technique is the computational time required: even employing special hardware to perform the acoustic match, the technique can take several seconds to produce the result.
  • Another solution is described in European Patent Application No. 91116859.9 entitled "Method and Apparatus for Generating Models of Spoken Words Based on a Small Number of Utterances". The drawback of this solution is that more than one utterance of each new word is required to produce the new phonetic transcription.
  • It is, therefore, an object of this invention to provide a technique able to take advantage of characteristics of some natural languages to make the add-word process simpler. If the grapheme-to-phoneme translation has low ambiguity in a language (e.g. Italian), it is possible, given the spelling of the add-word, to limit the set of admissible pronunciations, thereby eliminating the very time-consuming search process.
  • It is an object of the invention to automatically build a phonetic Markov word model for a new word starting from its spelling and from one utterance spoken by the user. The invention, as defined in the claims, is able to produce the correct phonetic transcription for any new word in a few seconds without the need to use special-purpose hardware.
  • The invention is described here with reference to the Italian language, using Italian words as examples of words for which the correct phonetic transcription is sought. However, the same concepts can be applied to other languages having low ambiguity in grapheme-to-phoneme translation.
  • Fig. 1 is a block diagram of an apparatus for generating the correct phonetic transcription of a new word according to the present invention.
  • Fig. 2 is a block diagram for obtaining a weighted combination for the best-of-set score calculation.
  • According to the method of the present invention, when the user of the system wishes to add a new word to the vocabulary, he/she must provide the word spelling and then one utterance. The standard way to produce a reliable phonetic transcription is to use human knowledge to decide, on the basis of linguistic considerations, on the meaning of the word.
  • Some words in a language may have more than one pronunciation according to their meaning. In these cases it is not possible to solve the ambiguity using standard techniques to automatically produce the correct word phonetic transcription. Only with the intervention of a phonetician is it possible to associate the word spelling with the sequence of phonetic symbols describing its correct pronunciation.
  • The basic set of sounds used to utter Italian words, as described by classical phonetics, includes 30 phonemes, each represented by its respective symbol. However, since the set of 30 phonemes cannot give an adequate account of many relevant contextual variations in pronunciation, the set of phones used by the Italian speech recognizer of the present invention is an extension of such phoneme set.
  • To describe the present invention, a set of 56 phonemes is used to take into account the acoustical phenomena necessary to achieve a high recognition rate. However, an adequate set of phonemes can be selected for each language on the basis of the desired recognition rate.
  • The basic idea of the invention is to use phonotactical knowledge and linguistic knowledge separately. The process of producing the correct phonetic transcription for the new word is divided into two steps. In the first step, phonotactical knowledge is used to produce a list of all the possible phonetic transcriptions for the word. In the second step, a score is computed for each phonetic transcription by combining the score obtained by matching the user's utterance of the word against the HMM derived from that transcription with the a priori probability of the transcription. The phonetic transcription with the highest score is selected as the correct one.
  • Phonotactical knowledge makes it possible to find all the ways in which the letters of the word spelling can correspond to sounds so as to form a word utterance. It is possible to describe the phonotactical knowledge of each language by an appropriate set of rules. For example, for the Italian language, phonotactical knowledge can be described using only 78 rules, as described in the paper "Automatic Transcription Based on Phonotactical Rules" by S. Scarci, S. Taraglio, Proceedings of Speech 1988 Seventh FASE Symposium, Edinburgh, 1988. Each rule determines the way a letter in the word spelling can be pronounced (e.g. a vowel can be stressed or not). Generally the format of a rule is based on the following pattern (a sketch of applying such rules is given below):
       LL CL RL --> LP
    where CL is the current letter in the word spelling, LL is the letter on the left of the current letter, RL is the letter on the right of the current letter and LP is the list of possible phonetic units for the current letter CL in the context LL-RL.
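  • As an illustration of how such rules can be applied, the following sketch enumerates candidate transcriptions for the spelling "alato" by taking the Cartesian product of the per-letter alternatives. The rule table, the word-boundary marker '#' and the fallback behaviour are assumptions made for the example and are not the patent's actual 78-rule set, although the phonetic symbols are borrowed from the examples given later in the description.

```python
# Minimal sketch of phonotactical rule application (hypothetical rule table).
from itertools import product

# Each rule maps (left letter, current letter, right letter) to the list LP of
# possible phonetic units for the current letter in that context.
# '#' marks a word boundary; None in a context position means "any letter".
RULES = [
    (None, 'a', None, ['AA', 'AS']),   # 'a' may be unstressed (AA) or stressed (AS)
    (None, 'l', None, ['L1']),
    (None, 't', None, ['TH']),
    (None, 'o', '#',  ['OA', 'OS']),   # word-final 'o', unstressed or stressed
]

def phones_for(left, letter, right):
    """Collect every phonetic unit allowed for `letter` in the context left-right."""
    options = []
    for ll, cl, rl, lp in RULES:
        if cl == letter and ll in (None, left) and rl in (None, right):
            options.extend(lp)
    return options or ['?' + letter]       # fall back if no rule matches

def candidate_transcriptions(spelling):
    """Cartesian product of the per-letter options = all possible transcriptions."""
    padded = '#' + spelling + '#'
    per_letter = [phones_for(padded[i - 1], padded[i], padded[i + 1])
                  for i in range(1, len(padded) - 1)]
    return [' '.join(seq) for seq in product(*per_letter)]

print(candidate_transcriptions('alato'))
# e.g. ['AA L1 AA TH OA', 'AA L1 AA TH OS', 'AA L1 AS TH OA', ...]
```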
  • In the present description, a two-stage translation is performed using two different phonetic alphabets.
  • A first alphabet has been derived directly from the standard set of Italian phonemes, here referred to as phonetic alphabet P1.
  • A second alphabet, P2, contains phonetic units not considered by standard phonetics and introduced to improve the recognition rate.
  • While alphabet P1 can be considered invariable, alphabet P2 may vary in the future according to new knowledge acquired on the pronunciation behaviour of speakers. For this reason two phonetic transcriptions are produced: in the first stage from graphemes to symbols of P1, and in the second stage from symbols of P1 to symbols of P2.
  • The P1 and P2 phonetic alphabets are shown in Tables 1 and 2, respectively. Table 3 shows the set of rules used to produce all the possible phonetic transcriptions of a word using the P1 alphabet. Table 4 shows the set of rules used to perform the translation from alphabet P1 into phonetic alphabet P2.
  • Before performing the translation from alphabet P1 into alphabet P2, a set of global rules is used to prune all impossible phonetic transcriptions (e.g. all the phonetic transcriptions having more than one stressed syllable).
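  • The global pruning step can be sketched in the same style; the convention that units ending in 'S' mark stressed vowels is an assumption of the toy alphabet above, not the notation of the P1/P2 alphabets.

```python
# Sketch of a global pruning rule: discard candidates with more than one stressed
# vowel. The 'S'-final convention for stressed vowels is an assumption of the toy
# alphabet used in the previous sketch, not the patent's P1/P2 notation.
def at_most_one_stress(transcription):
    return sum(1 for unit in transcription.split() if unit.endswith('S')) <= 1

candidates = ['AA L1 AA TH OA', 'AS L1 AS TH OS', 'AA L1 AS TH OA', 'AS L1 AA TH OA']
pruned = [t for t in candidates if at_most_one_stress(t)]
# 'AS L1 AS TH OS' (three stresses) is discarded; the other candidates survive.
```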
  • It was observed that, at the end of the process, the correct phonetic transcription is always included in the list of candidate transcriptions. The average number of possible phonetic transcriptions for an Italian word is 5.1. It is possible to easily create a device able to apply these rules to an input spelling and to produce as output all the possible phonetic strings. For example a computer program could be used to implement such a device.
  • When the list of candidate phonetic transcriptions has been completed, the utterance of the new word and probabilistic decision trees are used to select the correct one. For this purpose, the utterance of the word is processed by the acoustic feature processor of the recognition system, which transforms the speech waveform input into a string of acoustic labels. Given the string of acoustic labels, the phonetic transcription having the highest probability of representing the utterance of the word is selected as the correct phonetic transcription.
  • The highest probability is obtained by computing the maximum of P(T|U) where T is a phonetic transcription in the list produced at the end of the phonetic transcription process and U is the word string of labels corresponding to the utterance.
  • Applying the Bayes theorem, P(T|U) can be expressed as:
    P(T|U) = P(U|T) * P(T) / P(U)   (1)

    where:
    P(U|T) is the probability that the speaker, pronouncing the sequence of phones T, will utter sounds described by U.
  • P(T) is the probability of the phonetic baseform.
  • P(U) is the a priori probability of utterance U.
  • To maximize expression (1) with respect to all possible transcriptions T, the denominator of the fraction can be ignored, since P(U) is independent of T.
  • P(U|T) is computed by means of the standard forward-pass algorithm using the candidate phonetic baseforms as phonetic Hidden Markov word models as described in the cited European Patent No. 0 238 689.
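  • As a sketch of such a forward pass, the following function computes log P(U|T) for a generic discrete-observation HMM, with per-step rescaling to avoid numerical underflow; the patent's actual phone machines and 200-label alphabet are not reproduced, so the matrices here are placeholders.

```python
import numpy as np

# Minimal scaled forward pass for a discrete-observation HMM.
# A: state-transition matrix, B: emission matrix over acoustic labels,
# pi: initial state distribution, labels: sequence of observed label indices.
def log_forward(A, B, pi, labels):
    alpha = pi * B[:, labels[0]]
    scale = alpha.sum()
    log_prob = np.log(scale)
    alpha = alpha / scale
    for obs in labels[1:]:
        alpha = (alpha @ A) * B[:, obs]
        scale = alpha.sum()             # rescale at each step to avoid underflow
        log_prob += np.log(scale)
        alpha = alpha / scale
    return log_prob                     # log P(U|T)
```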
  • P(T) is computed for each candidate transcription by means of binary decision trees. Binary decision trees are described in the publication "Classification and Regression Trees" by L. Breiman, J. Friedman, R. Olshen, C. Stone, Wadsworth & Brooks/Cole Advanced Book & Software, 1984.
  • A binary decision tree is a computational technique that makes it possible to compute the probability of a target variable, given its context. Given a target variable and an observed context, the decision tree is traversed to compute the target probability. At each node of the tree a predefined question is asked about the right or left context of the variable to be predicted. According to the answer (which can be Yes or No), the left or right child node is selected as the next node. When a leaf is reached, a probability distribution is found that assigns a probability to all the possible values of the variable. The tree can be built a priori using well-known training techniques.
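  • A minimal sketch of such a tree lookup is given below; the node layout, the questions and the leaf distributions are hypothetical stand-ins, not the patent's trained trees.

```python
# Sketch of a probability lookup in a probabilistic binary decision tree
# (hypothetical node layout; questions and leaf distributions are placeholders).
class Node:
    def __init__(self, question=None, yes=None, no=None, distribution=None):
        self.question = question          # callable(context) -> bool; None at a leaf
        self.yes = yes                    # child followed on a Yes answer
        self.no = no                      # child followed on a No answer
        self.distribution = distribution  # at a leaf: dict mapping phone -> probability

def leaf_probability(root, phone, context):
    """Walk the tree, answering the question at each node with the observed
    context, and return the probability assigned to `phone` at the leaf."""
    node = root
    while node.question is not None:
        node = node.yes if node.question(context) else node.no
    return node.distribution.get(phone, 0.0)
```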
  • To perform the tree training, the availability of a considerable amount of correct phonetic transcriptions is assumed; 25,000 correct phonetic transcriptions are considered a sufficient amount of data to produce well-trained binary decision trees. For the Italian language, 5 binary decision trees are built, one for each Italian vowel. In fact, for the Italian language, most of the pronunciation ambiguities are related to the pronunciation of the vowels.
    P(T) is computed as the product of the probability of each phone in the transcription T, given its context. Therefore P(T) is equal to P(T|S), where S is the spelling of the word, and can be written as:

    P(T|S) = P(t₁, ..., tₙ | s₁, ..., sₙ)   (2)

    where t₁, t₂, ..., tₙ₋₁, tₙ are the phones of the transcription T
    and s₁, s₂, ..., sₙ are the letters of the word spelling.
  • Expression (2) can be computed as:

    P(tₙ | s₁, ..., sₙ) * P(tₙ₋₁ | tₙ, s₁, ..., sₙ) * ... * P(t₁ | t₂, ..., tₙ, s₁, ..., sₙ)

    The last expression is approximated by using the following context:
    5 letters to the left of the current vowel
    5 letters to the right of the current vowel
    5 phones to the right of the current phone.
  • A probability equal to 1 is assigned to each non-vowel phone.
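  • Under the letter-alignment assumption of expression (2), the computation of log P(T) can be sketched as follows; `tree_probability` stands for a lookup in the trained decision tree of the corresponding vowel (for instance the leaf_probability sketch above) and is a hypothetical callable, not part of the patent.

```python
import math

# Sketch of assembling log P(T): sum of log P(phone | context) over the vowels of
# the (letter-aligned) transcription; non-vowel phones contribute probability 1
# and therefore add nothing to the log-probability.
VOWELS = set('aeiou')

def log_p_t(phones, spelling, tree_probability):
    log_p = 0.0
    for i, (phone, letter) in enumerate(zip(phones, spelling.lower())):
        if letter in VOWELS:
            context = {
                'left_letters':  spelling[max(0, i - 5):i],   # 5 letters to the left
                'right_letters': spelling[i + 1:i + 6],       # 5 letters to the right
                'right_phones':  phones[i + 1:i + 6],         # 5 phones to the right
            }
            log_p += math.log(tree_probability(letter, phone, context))
    return log_p
```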
  • The final score for each candidate transcription in the list is computed in the following way:

    S = w₁log P(U|T) + w₂log P(T)


    where the optimal weights w₁ and w₂ are found by an iterative process that modifies the weights to minimize the number of incorrect phonetic transcriptions.
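  • A sketch of the final selection step, using the weight values quoted in the examples below (w₁ = 0.2, w₂ = 1), is the following:

```python
# Selecting the most likely transcription: combine the two log-probabilities with
# the weights and keep the candidate with the highest score.
def best_transcription(candidates, log_p_u_given_t, log_p_t_scores, w1=0.2, w2=1.0):
    """candidates: list of transcriptions; the two score arguments are dicts of
    precomputed log P(U|T) and log P(T), keyed by transcription."""
    return max(candidates,
               key=lambda t: w1 * log_p_u_given_t[t] + w2 * log_p_t_scores[t])
```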
  • After describing the method, a brief description of an apparatus implementing it is now given.
  • Fig.1 shows an example of an apparatus for generating the most likely phonetic transcription of a new word according to the present invention. With reference to Fig.1, when the user of the speech recognition system wishes to add a new word to the vocabulary of the system, he/she keys the word on keyboard 10 and its spelling is stored in store 11. Using the set of rules stored in store 12, the apparatus determines the phonetic transcriptions of the word that are acceptable on the basis of the phonetic rules of the language. A set of global rules, stored in store 13, is used to prune these phonetic transcriptions, eliminating for example all the phonetic transcriptions with more than one accent. At the end of this step, store 14 contains all the possible phonetic transcriptions of the word, among them the correct one.
  • Then the user of the speech recognition system pronounces the new word, emitting an utterance U. An acoustic transducer 16, e.g. a microphone, converts the utterance U into an analog electrical signal and an acoustic feature processor 17 outputs a string of acoustic labels, each label representing an acoustic feature prototype vector.
  • P(T) calculator 19 determines the probability of each phonetic transcription stored in store 14, considering the left and the right context of each letter of the spelling S, stored in store 11 and utilizing a probabilistic binary decision tree computation, based on the information stored in store 15.
  • P(U|T) calculator 18, using the string of labels derived from the utterance U of the new word and each phonetic transcription stored in store 14, determines the probability that the user, pronouncing the string of phones contained in each transcription will utter the sounds described by the string of labels of the utterance U.
  • The best-of-set score calculator 20 receives input from P(U|T) calculator 18 and P(T) calculator 19 and computes, for each transcription, the product P(U|T)*P(T). Once all the phonetic transcriptions of the new word contained in store 14 have been considered, score calculator 20 identifies the one with the maximum product and emits a command on line 21 for marking that phonetic transcription in store 14 as the correct one.
  • Fig.2 shows an embodiment for improving the determination of the phonetic transcription with the best-of-set score. In this embodiment, the output of P(U|T) calculator 18' is sent to block 22, which calculates the log of P(U|T) multiplied by weight w₁. Similarly, the output of P(T) calculator 19' is sent to block 23, which determines the log of P(T) multiplied by weight w₂. Calculator 20' selects the most likely phonetic transcription on the basis of the best-of-set score, as in Fig.1.
  • Having obtained the correct phonetic transcription of a new word, it is possible to obtain the fenemic word baseform using the technique shown in the aforecited article "Automatic Construction of Fenemic Markov Word Models for Speech Recognition". Having thus determined either the phonetic or the fenemic word model, the new word can be included in the vocabulary of the speech recognition system.
  • A test of the method of the invention was performed using 2,000 words uttered by one speaker and not included in the set of words used for the training of the decision trees. The average accuracy of the phonetic transcriptions obtained by the method was found to be equal to that obtained when the transcriptions are made by a phonetician.
  • As examples of the method described here, it is assumed that the Italian words "alato" and "mostrarvi" are two words for which it is desired to determine the phonetic transcription. Using the iterative process, w₁ was set equal to 0.2 and w₂ was set equal to 1. P(U|T) is expressed in a normalized format to avoid too small numerical values.
  • Example 1: "alato"
  • The list of candidate baseforms for "alato" in symbols of alphabet P2 is given hereunder:
    Baseform 1: AS L1 AA TH OA SP
    Baseform 2: AA L1 AS TH OA SP
    For baseform 1: log P(U|T) = 12.34   log P(T) = -1.60
    For baseform 2: log P(U|T) = 28.86   log P(T) = -0.13
    The final scores are:
    For baseform 1: S₁ = 0.2 * 12.34 + 1 * (-1.60) = 0.868
    For baseform 2: S₂ = 0.2 * 28.86 + 1 * (-0.13) = 5.642
    Therefore the most likely baseform for the word "alato" is:
    AA L1 AS TH OA SP
    The selected baseform is the correct one.
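    As a check, the scores of Example 1 can be recomputed directly from the quoted values with a short sketch:

```python
# Recomputing the Example 1 scores S = w1*log P(U|T) + w2*log P(T)
# (log-probability values taken from the text; w1 = 0.2, w2 = 1).
w1, w2 = 0.2, 1.0
logs = {                      # baseform: (log P(U|T), log P(T))
    'AS L1 AA TH OA SP': (12.34, -1.60),
    'AA L1 AS TH OA SP': (28.86, -0.13),
}
scores = {b: w1 * pu + w2 * pt for b, (pu, pt) in logs.items()}
print(max(scores, key=scores.get), scores)
# -> 'AA L1 AS TH OA SP' with S = 5.642, against S = 0.868 for the other candidate
```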
  • Example 2: "mostrarvi"
  • The list of candidate baseforms for "mostrarvi" in symbols of alphabet P2 is the following:
    Baseform 1: MH OO SH TH RR AS RH VH IO SP
    Baseform 2: MH OS SH TH RR AA RH VH IO SP
    Baseform 3: MH OC SH TH RR AA RH VH IO SP
    For baseform 1: log P(U|T) = 19.68   log P(T) = -0.013
    For baseform 2: log P(U|T) = 2.34   log P(T) = -2.58
    For baseform 3: log P(U|T) = 1.27   log P(T) = -2.45
    The final scores are:
    For baseform 1: S₁ = 3.92
    For baseform 2: S₂ = -2.11
    For baseform 3: S₃ = -2.19
    Therefore the most likely baseform for the word "mostrarvi" is baseform 1:
    MH OO SH TH RR AS RH VH IO SP
    The selected baseform is the correct one.
    [Tables 1 to 4, giving the P1 and P2 phonetic alphabets and the corresponding rule sets, appear as images in the original publication.]

Claims (7)

  1. A method for automatically producing the correct phonetic transcription for a word in a language, starting from the word spelling (S) and one utterance (U) of the word in said language by a speaker, characterized by the following steps:
    a) using phonotactical knowledge of said language for producing a list of all the possible phonetic transcriptions (T) of the word;
    b) calculating, for each possible phonetic transcription (T) of the list obtained in step a), a probability score of generating the utterance (U) of the word by the speaker, and
    c) selecting as most likely phonetic transcription (T) the one having the highest probability among all the probability scores calculated in step b).
  2. A method according to claim 1, wherein the phonotactical knowledge of step a) is implemented through two sets of rules, typical of said language, the first set determining the ways a letter of the word spelling (S) can be uttered, given its context, and the second set of rules pruning all the impossible phonetic transcriptions.
  3. A method according to claims 1 and 2, wherein the calculation of the probability for each possible phonetic transcription (T) is obtained by combining the probability P(U|T) that the speaker, pronouncing the phonetic transcription (T), produces the utterance U and the a priori probability P(T) of the phonetic transcription (T).
  4. A method according to claim 3, wherein the probability P(T) is calculated as the product of the probability of each phone in the phonetic transcription (T), given its context.
  5. A method according to claim 3, wherein the highest probability score of step c) of claim 1 is computed as a linear combination of log P(U|T) and log P(T) by two optimal weights w₁ and w₂ found through an iterative process.
  6. An apparatus for automatically producing the correct phonetic transcription for a word in a language, starting from the word spelling (S) and one utterance (U) of the word in said language, including:
    - means (11) for storing the word spelling (S)
    - means (12) for storing the spelling-to-sound rules for said language
    - means (13) for storing global rules for said language in order to prune the impossible phonetic transcriptions
    - means (14) for obtaining all the phonetic transcriptions (T) of the word according to the rules stored in said means (12) and said means (13)
    - means (15) for storing probabilistic decision trees characterized by
    - calculator means (18) for determining the probability P(U|T) that the speaker, pronouncing the sequence of phones contained in each of the possible phonetic transcriptions T stored in said means (14), utters sounds described by U
    - calculator means (19) for determining the probability P(T) of each of the possible phonetic transcriptions stored in said means (14), considering the left and right context of each letter of the word spelling (S), contained in said means (11), and using the probabilistic decision trees contained in said means (15) and
    - calculator means (20) for determining the most probable phonetic transcription among those stored in said means (14), selecting the one having the highest value of P(U|T)*P(T).
  7. An apparatus according to claim 6, wherein said calculator means (20) determines the most probable phonetic transcription, computing the highest probability score for the phonetic transcriptions stored in said means (14) through a linear combination of log P(U|T) and log P(T) by two optimal weights w₁ and w₂.
EP92105090A 1992-03-25 1992-03-25 Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary Withdrawn EP0562138A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP92105090A EP0562138A1 (en) 1992-03-25 1992-03-25 Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP92105090A EP0562138A1 (en) 1992-03-25 1992-03-25 Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary

Publications (1)

Publication Number Publication Date
EP0562138A1 true EP0562138A1 (en) 1993-09-29

Family

ID=8209465

Family Applications (1)

Application Number Title Priority Date Filing Date
EP92105090A Withdrawn EP0562138A1 (en) 1992-03-25 1992-03-25 Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary

Country Status (1)

Country Link
EP (1) EP0562138A1 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0685835A1 (en) * 1994-05-30 1995-12-06 Tecnomen Oy Speech recognition based on HMMs
WO1996035207A1 (en) * 1995-05-03 1996-11-07 Philips Electronics N.V. Speech recognition methods and apparatus on the basis of the modelling of new words
EP0852374A2 (en) * 1997-01-02 1998-07-08 Texas Instruments Incorporated Method and system for speaker-independent recognition of user-defined phrases
EP0867858A2 (en) * 1997-03-28 1998-09-30 Dragon Systems Inc. Pronunciation generation in speech recognition
EP0874353A2 (en) * 1997-03-28 1998-10-28 Dragon Systems Inc. Pronunciation generation in speech recognition
EP0949606A2 (en) * 1998-04-07 1999-10-13 Lucent Technologies Inc. Method and system for speech recognition based on phonetic transcriptions
EP0953970A2 (en) * 1998-04-29 1999-11-03 Matsushita Electric Industrial Co., Ltd. Method and apparatus using decision trees to generate and score multiple pronunciations for a spelled word
EP0953967A2 (en) * 1998-04-30 1999-11-03 Matsushita Electric Industrial Co., Ltd. An automated hotel attendant using speech recognition
EP0984430A2 (en) * 1998-09-04 2000-03-08 Matsushita Electric Industrial Co., Ltd. Small footprint language and vocabulary independent word recognizer using registration by word spelling
EP0984428A2 (en) * 1998-09-04 2000-03-08 Matsushita Electric Industrial Co., Ltd. Method and system for automatically determining phonetic transciptions associated with spelled words
US6151575A (en) * 1996-10-28 2000-11-21 Dragon Systems, Inc. Rapid adaptation of speech models
DE19942869A1 (en) * 1999-09-08 2001-03-15 Volkswagen Ag Operating method for speech-controlled device for motor vehicle involves ad hoc generation and allocation of new speech patterns using adaptive transcription
US6363342B2 (en) * 1998-12-18 2002-03-26 Matsushita Electric Industrial Co., Ltd. System for developing word-pronunciation pairs
EP1684264A1 (en) 2005-01-19 2006-07-26 Obstfelder, Sigrid Cellular telephone and method for voice input of text therein
US7856351B2 (en) 2007-01-19 2010-12-21 Microsoft Corporation Integrated speech recognition and semantic classification

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
IBM TECHNICAL DISCLOSURE BULLETIN, vol. 32, no. 10B, March 1990, pages 15-17, Armonk, NY, US: "Automatic correction of viterbi misalignments" *
IBM TECHNICAL DISCLOSURE BULLETIN, vol. 32, no. 10B, March 1990, pages 9-10, Armonk, NY, US; "Automatic determination of phonetic Markov word models" *
ICASSP'91 (1991 INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, Toronto, CA, 14th - 17th May 1991), vol. 1, pages 169-172, IEEE, New York, US; T. YAMADA et al.: "Phonetic typewriter based on phoneme source modeling" *
ICASSP'91 (1991 INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, Toronto, CA, 14th - 17th May 1991), vol. 1, pages 305-308, IEEE, New York, US; A. ASADI et al.: "Automatic modeling for adding new words to a large-vocabulary continuous speech recognition system" *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0685835A1 (en) * 1994-05-30 1995-12-06 Tecnomen Oy Speech recognition based on HMMs
WO1996035207A1 (en) * 1995-05-03 1996-11-07 Philips Electronics N.V. Speech recognition methods and apparatus on the basis of the modelling of new words
US6151575A (en) * 1996-10-28 2000-11-21 Dragon Systems, Inc. Rapid adaptation of speech models
EP0852374A2 (en) * 1997-01-02 1998-07-08 Texas Instruments Incorporated Method and system for speaker-independent recognition of user-defined phrases
US6058363A (en) * 1997-01-02 2000-05-02 Texas Instruments Incorporated Method and system for speaker-independent recognition of user-defined phrases
EP0852374A3 (en) * 1997-01-02 1998-11-18 Texas Instruments Incorporated Method and system for speaker-independent recognition of user-defined phrases
EP0867858A3 (en) * 1997-03-28 1999-09-22 Dragon Systems Inc. Pronunciation generation in speech recognition
EP0874353A3 (en) * 1997-03-28 1999-09-22 Dragon Systems Inc. Pronunciation generation in speech recognition
EP0867858A2 (en) * 1997-03-28 1998-09-30 Dragon Systems Inc. Pronunciation generation in speech recognition
US6092044A (en) * 1997-03-28 2000-07-18 Dragon Systems, Inc. Pronunciation generation in speech recognition
EP0874353A2 (en) * 1997-03-28 1998-10-28 Dragon Systems Inc. Pronunciation generation in speech recognition
EP0949606A2 (en) * 1998-04-07 1999-10-13 Lucent Technologies Inc. Method and system for speech recognition based on phonetic transcriptions
EP0949606A3 (en) * 1998-04-07 2000-10-11 Lucent Technologies Inc. Method and system for speech recognition based on phonetic transcriptions
EP0953970A2 (en) * 1998-04-29 1999-11-03 Matsushita Electric Industrial Co., Ltd. Method and apparatus using decision trees to generate and score multiple pronunciations for a spelled word
EP0953970A3 (en) * 1998-04-29 2000-01-19 Matsushita Electric Industrial Co., Ltd. Method and apparatus using decision trees to generate and score multiple pronunciations for a spelled word
EP0953967A3 (en) * 1998-04-30 2000-06-28 Matsushita Electric Industrial Co., Ltd. An automated hotel attendant using speech recognition
EP0953967A2 (en) * 1998-04-30 1999-11-03 Matsushita Electric Industrial Co., Ltd. An automated hotel attendant using speech recognition
US6314165B1 (en) 1998-04-30 2001-11-06 Matsushita Electric Industrial Co., Ltd. Automated hotel attendant using speech recognition
EP0984428A2 (en) * 1998-09-04 2000-03-08 Matsushita Electric Industrial Co., Ltd. Method and system for automatically determining phonetic transciptions associated with spelled words
EP0984430A2 (en) * 1998-09-04 2000-03-08 Matsushita Electric Industrial Co., Ltd. Small footprint language and vocabulary independent word recognizer using registration by word spelling
EP0984428A3 (en) * 1998-09-04 2001-01-24 Matsushita Electric Industrial Co., Ltd. Method and system for automatically determining phonetic transciptions associated with spelled words
EP0984430A3 (en) * 1998-09-04 2003-12-10 Matsushita Electric Industrial Co., Ltd. Small footprint language and vocabulary independent word recognizer using registration by word spelling
US6363342B2 (en) * 1998-12-18 2002-03-26 Matsushita Electric Industrial Co., Ltd. System for developing word-pronunciation pairs
DE19942869A1 (en) * 1999-09-08 2001-03-15 Volkswagen Ag Operating method for speech-controlled device for motor vehicle involves ad hoc generation and allocation of new speech patterns using adaptive transcription
EP1684264A1 (en) 2005-01-19 2006-07-26 Obstfelder, Sigrid Cellular telephone and method for voice input of text therein
US7856351B2 (en) 2007-01-19 2010-12-21 Microsoft Corporation Integrated speech recognition and semantic classification

Similar Documents

Publication Publication Date Title
US6243680B1 (en) Method and apparatus for obtaining a transcription of phrases through text and spoken utterances
Wang et al. Complete recognition of continuous Mandarin speech for Chinese language with very large vocabulary using limited training data
US5293584A (en) Speech recognition system for natural language translation
Zissman et al. Automatic language identification
US5787230A (en) System and method of intelligent Mandarin speech input for Chinese computers
US6067520A (en) System and method of recognizing continuous mandarin speech utilizing chinese hidden markou models
EP0984428B1 (en) Method and system for automatically determining phonetic transcriptions associated with spelled words
US5502791A (en) Speech recognition by concatenating fenonic allophone hidden Markov models in parallel among subwords
US6910012B2 (en) Method and system for speech recognition using phonetically similar word alternatives
US5865626A (en) Multi-dialect speech recognition method and apparatus
US6694296B1 (en) Method and apparatus for the recognition of spelled spoken words
US6912499B1 (en) Method and apparatus for training a multilingual speech model set
JP2012137776A (en) Speech recognition system
JPH0581918B2 (en)
Riley et al. Automatic generation of detailed pronunciation lexicons
EP0562138A1 (en) Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary
WO2004047075A1 (en) Voice processing device and method, recording medium, and program
US5764851A (en) Fast speech recognition method for mandarin words
Haraty et al. CASRA+: A colloquial Arabic speech recognition application
Mérialdo Multilevel decoding for very-large-size-dictionary speech recognition
Sharma et al. ASR—A real-time speech recognition on portable devices
Liu et al. State-dependent phonetic tied mixtures with pronunciation modeling for spontaneous speech recognition
US6408271B1 (en) Method and apparatus for generating phrasal transcriptions
KR100848148B1 (en) Apparatus and method for syllabled speech recognition and inputting characters using syllabled speech recognition and recording medium thereof
Kita et al. Processing unknown words in continuous speech recognition

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE ES FR GB IT

17P Request for examination filed

Effective date: 19931227

17Q First examination report despatched

Effective date: 19960821

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19971001