CA2085842C - Neural network-based speech token recognition system and method - Google Patents

Neural network-based speech token recognition system and method

Info

Publication number
CA2085842C
Authority
CA
Canada
Prior art keywords
utterance
token
speech
phoneme
plural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002085842A
Other languages
French (fr)
Other versions
CA2085842A1 (en)
Inventor
Ronald A. Cole
Mark A. Fanty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oregon Graduate Institute of Science and Technology
Original Assignee
Oregon Graduate Institute of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oregon Graduate Institute of Science and Technology filed Critical Oregon Graduate Institute of Science and Technology
Publication of CA2085842A1 publication Critical patent/CA2085842A1/en
Application granted granted Critical
Publication of CA2085842C publication Critical patent/CA2085842C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/16 Speech classification or search using artificial neural networks

Abstract

Improved system and method for speaker-independent speech token recognition are described. The system is neural network-based and involves processing a sequence of spoken utterances, e.g.
separately articulated letters of a name, to identify the same based upon a highest probability match of each utterance with learned speech tokens, e.g. the letters of the English language alphabet, and based upon a highest probability match of the uttered sequence with a defined utterance library, e.g. a list of names.
First, the spoken utterance is digitized or captured and processed into a spectral representation. Second, discrete time frames of the spectral representation are classified phonetically using the spectral coefficients. Third, the time-frame outputs are used by a modified Viterbi search to locate segment boundaries, near which lies the information needed to discriminate letters. Fourth, the segmented, or bounded, representation is reclassified using such information into individual hypothesized letters. Fifth, successive hypothesized letter scores are analyzed to obtain a high-probability match with a spelled word within the utterance library. The system and method comprehend finer distinctions near points of interest used to discriminate difficult-to-recognize letter-pair differences such as M/N, B/D, etc. The system is described in the context of phone-line reception of names spelled by remote users.

Description

NEURAL NETWORK-BASED SPEECH TOKEN RECOGNITION SYSTEM
AND METHOD
Background and Summary of the Invention
This invention relates generally to speech token recognition systems. More particularly, the invention concerns such a system that is capable of recognizing speaker-independent, separated spoken utterances, e.g. separately vocalized letters uttered by various speakers whose idiosyncratic speech patterns the system has never been specifically trained on, that are within a library developed by neural-based learning techniques.
Speech recognition systems proliferate. Conventional speech recognition system development has centered on speaker-dependent systems and has focused upon individual user adaptation, i.e. such systems have been designed to recognize, with increasing accuracy, words and phrases spoken by a particular individual, in accommodation of that individual's vocalization idiosyncrasies.
More recent developments in neural modeling enable higher speed and increasingly fine adjustment of speech recognition algorithms, with modestly improved separated speech token recognition accuracy and greatly improved versatility that result in part from the learning capabilities of neural model- or network-based systems. Some such developments have been reported by us in "Spoken Letter Recognition", Proceedings of the Third DARPA Speech and Natural Language Workshop, Somerset, PA, June 1990.

The English alphabet is a challenging vocabulary for computer speech recognition because of the acoustic similarity of many letter pairs (e.g. B/D, B/V, P/T, T/G and M/N). Research has led to systems that perform accurate speaker-independent recognition of spoken letters using high-quality or telephone speech, as described more recently in Mark Fanty and Ron Cole,
a "Speaker-Tn~pPn~ent English Alphabet Recognition: Experiments with the E-Setn, ~Lo~eo~ings of the International Conference on Spok~n Language Processing, Kobe, Japan, November 1990 and Ronald Cole, Krist Roginski and Mark Fanty, "English Alphabet Recognition with Telephone Speech", Procee~ings of 2nd European Conference on Speech Communication and Technology, Genova, Italy, September 1991.
There is yet a felt need for further improvement in the ability of speech-recognition systems to become speaker independent. By speaker independence is meant the capability of recognizing the speech of a large universe of speakers having idiosyncratic speech patterns, with no retraining or adaptive techniques, instead accurately interpreting words or phrases spoken by an individual whose speech has never before been heard by the system.
There also is need to further improve techniques used in such systems for discerning subtle differences among utterances--especially in the nature of separated, spoken letters--in order to increase the accuracy with which such utterances are classified.
Finally, there is need to further develop neural-network-based systems that are more readily adaptable or trainable to a different set of recognized vocabulary entries characterized by a different set of sonorant or phonetic components, e.g. a foreign language having different vocalization patterns; and to a different set of spectral components, e.g. utterances received over a voice phone line.
Briefly, the invention is a neural-network-based system having five basic processing components: (1) data capture and signal representation utilizing spectral analysis; (2) phonetic classification of discrete time frames; (3) location of speech segments in hypothesized tokens or letters; (4) reclassification of hypothesized tokens or letters; and (5) recognition of spelled words from the classified letter scores. Importantly, phoneme classification involves a phoneme set that represents a substantial number of the tokens in, for example, the English language alphabet, and the reclassification makes fine phonetic distinctions between difficult-to-discriminate tokens or letters in such vocabulary. The system is described in an application in which names spelled over phone lines are recognized with a high degree of accuracy as being within a defined library.
Brief Description of the Drawings
Fig. 1 is a schematic representation of the sampling strategy used to input spectral features to the frame-based classifier, in accordance with the preferred method of the invention.
Fig. 2 is a schematic diagram of a neural network-based classifier and associated, displayed input waveform, spectrogram and phoneme output representations illustrating various processing steps of the preferred method.
Fig. 3 is a schematic frame number representation of the M/N letter model illustrating the alignment of an utterance with a vocabulary entry.
Fig. 4 is an input waveform, spectrogram, segmentation and letter classification display window of the invented system.
Fig. 5 is a system architecture/control flow diagram of the preferred system and method of the invention.
Detailed Description of the Preferred Embodiment
Generally, the speech token recognition system utilizes a neural model (i.e. a neural simulator), or what will be referred to herein as a neural network, for developing a vocabulary of recognizable spoken utterances such as spelled names that appear as telephone directory entries in an automatic directory assistance application. It will be understood that such an application is only one of a myriad of applications for such a speech token recognition system, and that all suitable applications are within the spirit of the invention.
The system in its preferred embodiment consists of a succession of neural network processors configured generally as described in our above-referenced "Spoken Letter Recognition"
report. As will be seen, the preferred system and method of the invention represent a significant improvement over even the most advanced of the previously described systems and methods.
Preferably, there are five processing components, or processing stages, implemented by the invented system: (1) data capture and signal representation, (2) phonetic classification of discrete time frames, (3) location of speech segments in hypothesized letters, (4) reclassification of hypothesized letters, and (5) recognition of spelled words from the classified letter scores. These components or stages will be described in the context of a system that recognizes spelled (English-language alphabet) names over phone lines.
The system answers the phone and asks the caller to spell a word. The speech is sampled at 8 kHz at 14-bit resolution. The system detects when the caller has stopped speaking. A perceptual linear predictive (PLP) analysis is performed every three milliseconds (msec) on 10-msec speech segments. While PLP analysis has been found to achieve the advantages of the invention, it will be appreciated that the invented system and method do not depend thereon, and that alternative spectral analysis techniques, e.g. the discrete Fourier transform (DFT), may be used. (Figs. 2 and 4 show displays of the DFT spectrogram, rather than the PLP spectrogram, because the former are easier to read.) The result of this processing stage is to transform the continuously varying speech waveform into a sequence of discrete time frames which contains information about the spectral energy of the signal.

The system also computes the zero-crossing rate and peak-to-peak (ptp) amplitude over 10-msec windows. The zero-crossing rate measures the number of times the signal changes sign. The ptp amplitude is the absolute difference between the largest positive and the largest negative peaks in a given window.
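By way of illustration only, a minimal sketch of these two measurements follows, assuming a NumPy array of samples and non-overlapping 10-msec windows; the window placement and overlap actually used are not specified here, so those choices are assumptions of the example.

```python
import numpy as np

def zero_crossing_rate(window: np.ndarray) -> int:
    """Count the number of times the signal changes sign within the window."""
    signs = np.sign(window)
    signs[signs == 0] = 1            # treat exact zeros as positive (assumption)
    return int(np.count_nonzero(np.diff(signs)))

def ptp_amplitude(window: np.ndarray) -> float:
    """Absolute difference between the largest positive and largest negative peaks."""
    return float(window.max() - window.min())

def frame_measurements(signal: np.ndarray, fs: int = 8000, win_ms: int = 10):
    """Compute zero-crossing rate and ptp amplitude over successive 10-msec windows."""
    win = int(fs * win_ms / 1000)
    results = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        results.append((zero_crossing_rate(w), ptp_amplitude(w)))
    return results
```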
The next stage of analysis uses the PLP coefficients to assign twenty-two phonetic category scores to each 3-msec time frame. Classification is performed by a fully-connected, three-layer, feed-forward neural network. The twenty-two labels provide an intermediate level of description, in which preferably only a few phonetic categories are combined (see Table 1). More refined phonetic distinctions are performed during letter classification, described below.
TABLE 1: PHONETIC SEGMENTATION CATEGORIES
name     description
cl       stop closure and silence
Wcl      closure between syllables in W
q        glottal stop
b/d      [b] or [d]
p/t/k    [p], [t] or [k]
jh       [jh] in J
X/H      [ks] in X or [ch] in H
C        [s] in C
f/s      [f] in F or [s] in S
v/z      [v] or [z]
m/n      [m] or [n]
r        [r]
y        [y] beginning U
ow/l     [ow] in O or [l]
ey       [ey] in A
iy       [iy] in E
eh       [eh] beginning S
aa       [aa] beginning R
uw       [uw] ending U
ah       [ah] in first syllable of W
w        [w] beginning Y
ay       [ay] in I
The neural network was trained on hand-labeled speech data from two hundred speakers who spelled names and recited the English alphabet. The input to the neural-network classifier consists of features representing PLP coefficients in a 432-msec window surrounding the (center) frame to be classified. The manner in which PLP coefficients are averaged across 3-msec time frames is represented in Fig. 1.
It will be understood from the discussion immediately above that Fig. 1 illustrates the averaging of selected (solid-outlined) 3-msec time frames over the entire interval to determine the output of a single, centered frame, and that such averaging produces a score for the center frame that comprehends frames prior to (to the left of, in Fig. 1) and subsequent to (to the right of, in Fig. 1) the center frame. Those of skill in the art will appreciate that such averaging is performed (over such an interval in which is centered the frame being scored) multiple times, with the interval taking into account time-shifted versions of the utterance. The neural network outputs at each centered time frame represent the twenty-two phonetic category scores corresponding to Table 1, with each network output, or output node, representing a higher or lower score, or probability, that the analyzed interval of the utterance represents the particular phoneme category (listed in Table 1) assigned to that network output's node.
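The exact interval layout of Fig. 1 is not reproduced here; the following is a hedged sketch of the general strategy only: PLP coefficient vectors are averaged over selected intervals of 3-msec frames surrounding the center frame and the averages are concatenated into one classifier input vector. The `example_intervals` offsets are illustrative placeholders, not the sampling pattern of Fig. 1.

```python
import numpy as np

def context_features(plp_frames: np.ndarray, center: int, intervals) -> np.ndarray:
    """Average PLP coefficient vectors over frame-offset intervals around `center`
    and concatenate the averages into a single classifier input vector.

    plp_frames: array of shape (n_frames, n_coeffs), one PLP vector per 3-msec frame.
    intervals:  list of (start_offset, end_offset) pairs relative to `center`;
                the offsets used below are illustrative, not the patent's layout.
    """
    pieces = []
    n = len(plp_frames)
    for lo, hi in intervals:
        a, b = max(0, center + lo), min(n, center + hi)
        pieces.append(plp_frames[a:b].mean(axis=0) if b > a
                      else np.zeros(plp_frames.shape[1]))
    return np.concatenate(pieces)

# Example: a symmetric set of averaging intervals spanning roughly +/-216 msec
# (72 frames of 3 msec) around the center frame -- placeholder values only.
example_intervals = [(-72, -48), (-48, -24), (-24, -8), (-8, 8),
                     (8, 24), (24, 48), (48, 72)]
```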

Accurate frame-based phonetic classification depends on the correct choice of features and the correct proportion of sampled speech frames near segment boundaries and within segments when training the classifier. The selection and sampling of features was determined empirically. (Refer, for example, to our above-referenced "Speaker-Independent English Alphabet Recognition ..." and "English Alphabet Recognition with Telephone Speech" reports.)
Fig. 2 shows schematically how the frame-by-frame inputs to the neural network are translated into the set of twenty-two phonetic category scores (listed in Table 1) at each time frame.
Fig. 2 also shows graphically the time-aligned sample input waveform, PLP (represented instead by DFT) energy spectra and selected phoneme outputs, in various display windows, as produced by phonetic classification in accordance with the preferred system and method of the invention. The features derived from PLP are input to the neural network at each time frame. These features are scaled to have values between predefined lower and upper limits. These input values are then multiplied by the learned weights on the connections from the input layer to the middle layer of neurons in the network. The resulting products are summed and passed through a sigmoid function to produce the output value for each neuron in the middle layer. This process is repeated to produce values between 0 and 1 at the output layer.
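As an illustration of the forward pass just described, a minimal sketch of a fully connected, three-layer, feed-forward network with sigmoid units follows. The per-vector min-max scaling scheme and the scaling range are assumptions made for the example; only scaling between predefined limits is specified above, and the weight matrices are taken as given (learned elsewhere).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scale_inputs(x, lower=-1.0, upper=1.0):
    """Scale raw feature values into a predefined [lower, upper] range (assumed scheme)."""
    xmin, xmax = x.min(), x.max()
    if xmax == xmin:
        return np.full_like(x, (lower + upper) / 2.0)
    return lower + (x - xmin) * (upper - lower) / (xmax - xmin)

def classify_frame(features, W_hidden, b_hidden, W_out, b_out):
    """One forward pass: scaled inputs -> weighted sums -> sigmoid middle layer
    -> weighted sums -> sigmoid output layer (twenty-two phonetic category scores)."""
    x = scale_inputs(features)
    hidden = sigmoid(W_hidden @ x + b_hidden)
    return sigmoid(W_out @ hidden + b_out)   # output values between 0 and 1
```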
The time-aligned phoneme outputs, or output values, for three phonetic categories are shown below the speech signal's input waveform depicted in Fig. 2.
The next stage of classification uses the twenty-two category scores at each time frame to find the best-scoring sequence of phonetic segments corresponding to a sequence of letters. The purpose of this stage is to locate accurately the segment boundaries of the hypothesized letters. Note that some letter pairs, such as M/N and B/D, are not distinguished at this stage. The frame-by-frame outputs of the phonetic classifier are converted to a sequence of phonetic segments corresponding to hypothesized letters by combining the phonetic category scores with duration and phoneme sequence constraints provided by the letter models shown in Table 2.
For example, the letter model for M/N consists of optional glottalization ([q]), followed by the vowel [eh], followed by the nasal murmur [m/n]. Each of these segments has duration probabilities computed from the training data. For example, the sequence [eh m/n] shown in Fig. 3 is given a score equal to the product of the [eh] outputs in frames 136 through 165, the [m/n] outputs in frames 166 through 185, the probability that [eh] in M/N has duration 30, and the probability that [m/n] in M/N has duration 20.
Besides the twenty-one regular letter models (listed in Table 2), there are models for SILENCE and NOISE. The SILENCE
model matches the [cl] output of the frame classifier, which is trained on between-letter pauses, beginning and ending silence, and on the closure in X and H. Because background noise is often classified as [f/s] or [m/n], the NOISE model was added, which consists of either of these phonemes in isolation (no [eh]). It will be appreciated that, in Table 2, token representations consist of a succession of one or more of the target phonemes listed in Table 1, separated by spaces. The spaces between phonemes imply the sequential nature of the phonemes. They do not imply vocalization or silence between successive phonemes.
Right virgules (/) that appear between adjacent letters or phonemes in Table 2 represent plural letter or phoneme possibilities ('candidates') not resolved, if at all, until a later stage of processing. For example, p/t/k, as it appears as the second state of the letter K model, refers to the fact that the p, t or k neural network node (a singular node, not three different nodes) was energized at a high probability (or high score). As may be seen from Table 2, it is enough in identifying the letter K that a p/t/k phoneme was followed by an ey phoneme, thus distinguishing it from the letter pair P/T, which is characterized instead as ending with an iy phoneme. Those skilled in the art will appreciate that the individual letters within the more difficult-to-discriminate letter pairs, namely B/D, F/S, M/N, P/T and V/Z, are discriminated from one another by a separate (neural-network-based) classifier that examines selected features of the utterance, to be described by reference to Fig. 5.
TABLE 2: LETTER SEGMENTATION MODELS
letter    states (parenthetical elements are optional)
A         (q) ey
B/D       cl b/d iy
C         C iy
E         (q) iy
F/S       (q) eh f/s
G         cl jh iy
H         (q) ey cl X/H
I         (q) ay
J         cl jh ey
K         cl p/t/k ey
L         (q) eh ow/l
M/N       (q) eh m/n
O         (q) ow/l
P/T       cl p/t/k iy
Q         cl p/t/k y uw
R         (q) aa r
U         (q) y uw
V/Z       v/z iy
W         cl b/d ah Wcl (b/d) (ow/l) y uw
X         (q) eh cl X/H
Y         (q) w ay
SILENCE   cl
NOISE     (f/s) (m/n)

A modified Viterbi search is used to find the sequence of complete letter models with the highest probability or score. Any number of SILENCE and NOISE segments are allowed between letters.
The search proceeds left to right. At each time step, there is a single active cell for each state of each letter model representing the best score of all paths which reach that state at that time.
A back-trace list provides the history of that best path. The scores for time t depend on the scores for time t - 1 and the phonetic outputs for time t. An assumption underlying this Viterbi search is that, of all the paths which converge on a state at a given time, only the one with the highest score need be kept. If the future behavior is completely determined by the current state, this assumption is correct.
Unfortunately, the addition of duration probabilities invalidates the assumption. Two paths which enter a state at different times will have different duration penalties applied sometime in the future. Strictly speaking, the best-scoring path for each entry time should be maintained. This is quite expensive, however.
As a compromise, certain times which are more likely to be segment boundaries are marked. The best path entering at each of these special times is kept, as well as the best overall path (for any entry time). Likely boundary times are determined by summing the differences in phonetic classification scores over all categories, and thresholding, by which is meant a determination of the extent of such differences relative to predetermined, empirically derived threshold differences. If a large number of phonetic categories have a large output (score, or probability) change around frame 14 (e.g. [b] went from high in frames 12 and 13 to low in frames 15 and 16), then frame 14 is a likely boundary.
If the scores before and after frame 14 are about the same, then it is not.
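A minimal sketch of this boundary-marking heuristic follows; the one-frame comparison window on each side and the threshold value are illustrative assumptions, since only the sum-and-threshold idea is described above.

```python
import numpy as np

def likely_boundaries(category_scores: np.ndarray, threshold: float = 2.0):
    """category_scores: array of shape (n_frames, 22) of phonetic category outputs.
    Returns frame indices where the total score change summed over all categories
    exceeds the (empirically chosen) threshold, marking likely segment boundaries."""
    boundaries = []
    for t in range(1, len(category_scores) - 1):
        # total absolute change between the frames just before and just after t
        change = np.abs(category_scores[t + 1] - category_scores[t - 1]).sum()
        if change > threshold:
            boundaries.append(t)
    return boundaries
```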
The first (top) display window in Fig. 4 shows the digitized, spoken utterance waveform; the second window shows the PLP (represented instead by DFT) spectral energy graph; the third window shows the alignment for the letters in K-E-L-L-Y; and the fourth (bottom) window shows the correct letter classification of the spoken utterance "K-E-L-L-Y." Note that letter classification in accordance with the invented system and method corrected an E/C
error in the second letter segment. Note also that, for the sake of clarity in the illustration, "NOISE" segments have been removed.
The goal of letter classification is to classify correctly the hypothesized letter. A new set of feature measurements is computed using the segment boundaries found by the previous module. The rationale for this stage is that information about phonetic categories is found in certain regions of the signal, often near segment boundaries. For example, information needed to discriminate B from D is found just after consonant release, and in the first 15-msec of the vowel (phoneme iy).
Similarly, the information needed to discriminate M from N is found mainly in the vowel interval (phoneme eh) preceding the onset of nasalization. By first locating the phonetic segments, it is possible to measure, or to analyze, features that optimize discrimination of the tokens in a given vocabulary. For example, feature measurements advantageously may be used in letter classification of sample speech frames just following vowel onset and just before a vowel-nasal boundary.
To perform letter classification, a set of features is computed for each hypothesized letter and used by a fully connected, feed-forward network with one hidden layer to reclassify the letter. Feature measurements are based on the phonetic boundaries provided by the segmentation. At present, the features (several of which are sketched in the code example following this list) consist of 1) duration of the initial consonant;
2) duration of the sonorant;
3) PLP coefficients averaged over thirds of the initial consonant;
4) PLP coefficients averaged over sevenths of the sonorant;
5) PLP coefficients averaged over the first three 70-msec intervals after the first sonorant;
6) PLP coefficients 6-msec and 15-msec into the sonorant;
7) PLP coefficients 30-msec and 6-msec before any sonorant-internal boundaries (e.g. [eh] and [m]), or mid-sonorant if there are no boundaries;
8) average zero crossing in seven intervals from 180-msec before the sonorant, from fourths of the sonorant, and in seven intervals from 180-msec after the sonorant; and
9) average ptp amplitude in seven intervals from 180-msec before the sonorant, from fourths of the sonorant, and in seven intervals from 180-msec after the sonorant.
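By way of illustration, a few of the measurements listed above (the two durations and the PLP averages over equal parts of a segment) might be assembled as sketched below; the 3-msec frame length and the handling of segment edges are assumptions made for the example, and the remaining listed features are omitted for brevity.

```python
import numpy as np

def averaged_sections(plp_frames: np.ndarray, n_sections: int) -> np.ndarray:
    """Average PLP coefficient vectors over n_sections equal parts of a segment
    (thirds of the initial consonant, sevenths of the sonorant, etc.)."""
    edges = np.linspace(0, len(plp_frames), n_sections + 1).astype(int)
    parts = []
    for a, b in zip(edges[:-1], edges[1:]):
        parts.append(plp_frames[a:b].mean(axis=0) if b > a
                     else np.zeros(plp_frames.shape[1]))
    return np.concatenate(parts)

def letter_features(plp, consonant, sonorant, frame_ms=3.0):
    """Assemble a few of the listed measurements; `consonant` and `sonorant`
    are (start_frame, end_frame) pairs from the segmentation (end exclusive)."""
    c = plp[consonant[0]:consonant[1]]
    s = plp[sonorant[0]:sonorant[1]]
    feats = [np.array([len(c) * frame_ms]),      # 1) consonant duration (msec)
             np.array([len(s) * frame_ms]),      # 2) sonorant duration (msec)
             averaged_sections(c, 3),            # 3) PLP over thirds of consonant
             averaged_sections(s, 7)]            # 4) PLP over sevenths of sonorant
    return np.concatenate(feats)
```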
The outputs of the classifier are the twenty-six letters plus the category "NOT A LETTER" ("SILENCE" or "NOISE").
Training data is generated by using the segmenter in a forced alignment scheme. A training utterance is first segmented with the Viterbi search constrained to use the known letter sequence, resulting in accurate alignment. The letter boundaries are fixed and the utterance is re-segmented without knowledge of the correct letter sequence. These phoneme boundaries--similar to those which will be encountered during use of the system--are used to generate training data for each letter. If additional letters are found during this 'free' segmentation, they are assumed to be noise and are used to train the "NOT A LETTER" category.
The output of the letter classifier is a score between 0.0 and 1.0 for each letter. These scores are treated as probabilities and the most likely name is retrieved (e.g. by multiplying letter probabilities) from the database of, for example, 50,000 last names. Preferably, the database is stored in an efficient tree structure so that common prefixes are shared.
Letter deletions and insertions are allowed with a penalty. The score for a name is the product of the letter scores for each letter in the name and any insertion or deletion penalties.
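A hedged sketch of this retrieval step follows: names are stored in a tree (trie) so that common prefixes are shared, letter scores are treated as probabilities and multiplied along each path, and insertions or deletions are charged a fixed penalty. The penalty values and the particular pruned depth-first search used here are illustrative choices, not details given in the description.

```python
def build_trie(names):
    """Store names in a tree so that common prefixes are shared."""
    root = {}
    for name in names:
        node = root
        for ch in name:
            node = node.setdefault(ch, {})
        node["$"] = name                      # end-of-name marker
    return root

def best_name(trie, letter_scores, ins_pen=0.05, del_pen=0.05):
    """letter_scores: list of dicts, one per hypothesized letter, mapping
    'A'..'Z' -> classifier score (treated as a probability).
    ins_pen / del_pen: illustrative penalties for an inserted or deleted letter."""
    best = [None, 0.0]

    def search(node, i, score):
        if score <= best[1]:
            return                            # prune paths that cannot win
        if "$" in node and i == len(letter_scores):
            best[0], best[1] = node["$"], score
        if i < len(letter_scores):
            for ch, child in node.items():
                if ch != "$":
                    search(child, i + 1, score * letter_scores[i].get(ch, 0.0))
            # hypothesized letter was noise (an insertion): skip it with a penalty
            search(node, i + 1, score * ins_pen)
        # a letter of the name was missed (a deletion): advance in the trie only
        for ch, child in node.items():
            if ch != "$":
                search(child, i, score * del_pen)

    search(trie, 0, 1.0)
    return best[0], best[1]
```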
Referring now to Fig. 5, the invention in its preferred embodiment and by its preferred method may be understood.
Indicated at 10 in Fig. 5 is the preferred system architecture/process control flow. Speech or speech token recognition system 10 is represented as an organization of processors, or process steps, including generally speech-recording means 12 for producing a representation of a spoken utterance, neural-network-based token- or phoneme-classifying means 14 coupled thereto, and neural-network-based token- or phoneme-reclassifying means 16 responsive to token-classifying means 14 and connected to a plural token library 18. Speech-recording means 12 may be seen to be responsive to spoken utterances received over a communication channel such as a telephone line 20, and token reclassifying means 16 may be seen to effect a high-probability indication that the utterance recorded by speech-recording means 12 has been recognized.
It will be appreciated that reclassified tokens produced by reclassifying means 16 may be further processed by utterance recognizing means 22 connected to an utterance library 24 to determine whether the token sequence produced by token reclassifying means 16 represents a recognizable utterance. Such an indication would be application specific and, for example, might be displayed on a monitor 26 and/or might result in some form of acknowledgement via telephone line 20 to the person whose spoken utterance has been recognized. The acknowledgement might take the form of a spoken or played-back, recorded response regarding the phone number of a person whose name was spelled by the speaker. It will be appreciated that phone lines 20 physically may be one or more conventional, dedicated or switched, simplex or duplex lines.
Token classifying means 14 is for classifying successive, equal-length, time frame-based segments of such a recorded utterance representation produced by speech-recording means 12 over a first predetermined time interval into a plurality of phoneme-specific partial probabilities. It will be understood that the first predetermined time interval over which such token classification occurs, and the successive segments that are classified by token classifying means 14, preferably are as illustrated in Fig. 1, spanning several hundred milliseconds. In Fig. 1, the solid boxes indicate intervals over which PLP
coefficients are averaged, while the dashed boxes indicate intervals that are skipped.
The plurality of phoneme-specific partial probabilities are so called because they represent incomplete token recognition of a spoken utterance. For example, the p/t/k phoneme triplet recognition indicated in Table 1 may be seen to represent only a partial probability that the spoken token was a P, since the spoken token might just as well have been a T or a K. Similarly, the P/T letter pair recognition indicated in Table 2 may be seen to represent only a partial probability that the spoken token was a P, since the spoken token might just as well have been a T. Thus, phoneme-specific partial probabilities are used herein to describe the output of token classifying means 14, e.g. the hypothesized phonemes in Table 1, as well as the intermediate letter-specific partial probabilities produced in token reclassifying means 16, e.g. the hypothesized letters in Table 2. In the latter case, as well as in the former case, further, feature-specific processing is performed in accordance with the invention in order to discriminate each letter of the alphabet from every other to a high degree of certainty.
Reclassifying means 16 is for reclassifying the successive segments classified by token classifying means 14 over a second predetermined time interval into one or more recognizable tokens, or letters. The importance of the fact that the reclassifying time interval is different from the first is that fine phonetic distinctions, e.g. between difficult-to-discriminate letters of letter pairs such as B/D, F/S, M/N, P/T and V/Z, have I(o been found more accurately to be made when they are based upon features that are found immediately around phoneme segment boundaries. For example, feature measurements in a few millisecond interval immediately following consonant release has been found greatly to improve the ability of system 10 to discriminate B from D. Such interval is much shorter than the several hundred millisecond interval over which phoneme classifying occurs within phoneme classifying means 14 to produce the intermediate indication that the spoken utterance was tb] or [d] Isee Table 1).
Token reclassifying means 16 produces a vector containing one or more token scores and may be seen operatively to effect a high-probability indication, e.g. in excess of 96% over relatively low-noise communication channels such as a directly connected microphone (and only slightly lower, e.g. -90%, over relatively high-noise communication channels such as phone lines), indication of which one or more recognizable tokens in token library 18 is represented by the recorded utterance. Thus, token classifying means 14 may be thought of as performing the process steps of 1) segmenting a vocalized speech component to produce a representation of the component characterized by plural time-based segments, e.g.
by scoring the sequential time-based frames as to the various probabilities that they represent one of the phonemes in token library 18 and performing a modified Viterbi search to produce an optimally segmented representation of a high-probability phoneme occurrence, and 2) classifying each such segment by mapping the same using a plural target-token vocabulary-based phonetic library to produce a high probability phoneme representation of the component.
Mapping each segment using a plural target-token vocabulary-based phonetic library produces a higher probability phoneme representation of a speech component than prior classifying 2Q~58~2 1~
schemes because of the unique formulation of the target-token voc~hlllary-baæed phonetic library. The phonetic library may be seen to represent a substantial number of the target token phonetic library's tokens, i.e. the number of phoneme representations in Table 1, namely twenty-two, represents a large percentage of the phonemes in the letters in the English-language alphabet, namely twenty-six. Some prior art systems simply distinguished stops, fricatives, etc., at a first level of analysis and token identification. It is believed that empirically proven higher-probability speech recognition is a result, at least in part, of the number, as well as of the formulation, of phoneme classifications performed by token classifying means 14 of the preferred embodiment of the invention.
Those skilled in the arts will appreciate that token- or phoneme-classifying means 14 comprehends a given language alphabet-specific set of plural differentiated phonemes, e.g. the twenty-two phonemes listed in Table 1 as partially representing the twenty-six letters of the English-language alphabet. Phoneme-classifying means 14, whether singularly or jointly with phoneme-reclassifying means 16 (and based also upon inputs from token library 18 as illustrated in Fig. 5), may be thought of as operating to produce a hypothesized representation of the spoken utterance received by speech-recording means 12. For example, the spoken utterance "M/N"
preferably is represented phonetically as having plural phoneme segments (q) eh m/n (where the glottal stop may or may not be present).
Thus the hypothesized representation of this particular spoken utterance has plural phoneme segments that span a succession of first determined time intervals, e.g. 3-ms time intervals, as may be seen from Fig. 3.
That feature-extracting, or feature-determining, part described above of token- or phoneme-reclassifying means 16 may be thought of as means 16a, connected to what may be thought of as token, or language alphabet model, library 18, for analyzing the hypothesized phoneme representation over a second succession of determined time intervals that are different from the first determined time intervals to classify selected ones of the phoneme representations as being one of the plural letters in the alphabet.
Such organization of phoneme-classifying means and analyzing means effects a high-probability indication of the succession of letters represented by a spoken utterance processed thereby, especially in discriminating the usually difficult-to-discriminate ones of the sound-alike letter pairs.
System 10 in its preferred embodiment further includes what will be referred to herein as alignment means including duration-scoring means for effectively constraining the token reclassifying means in time to produce a higher probability indication whether the recorded utterance represents the one or more tokens, as determined by token classifying means 14 and token reclassifying means 16. Alignment means and duration-scoring means are best illustrated herein by reference to Fig. 3, where it may be seen, for example, that reclassifying means 16 comprehends alignment of the letter pair M/N, wherein [eh] spans frames 136 to 165 and wherein [m/n] spans frames 166 to 185. The token score preferably is equal to the product of the [eh] outputs in frames 136 through 165, the [m/n] outputs in frames 166 through 185, the probability that [eh] in M/N has a duration of 30 frames and the probability that [m/n] in M/N has a duration of 20 frames. The result of such alignment is a score that comprehends absolute timing of the [eh] and [m/n] components of the letter M/N.
Finally, utterance recognizing means 22 effectively multiplies the various probabilities, or scores, corresponding to the succession of high-probability letters classified by token reclassifying means 16 to determine the highest-probability letter sequence thought to have been uttered, resulting, for example, in retrieval of a name from utterance library 24 that best resembles the letter sequence spelled by a remote user of system 10.
The preferred method of the invention now readily may be understood by reference to the preferred system described in detail above. One way of describing the method of the invention is to see it as representing a computer-implemented method for recognizing and identifying a vocalized component of speech by 1) capturing such a component, e.g. via speech-recording means 12; 2) dividing a time-domain representation of the captured component into plural time frames embracing sub-components of the captured component, e.g. as illustrated and described in connection with Fig. 2; 3) processing each such sub-component by classifying it into a set of phonetic "label" scores using a neural network classifier presented with selected features in a temporal window surrounding the time frame, e.g. via token-classifying means 14 (wherein preferably processing includes first examining the extent of change in token probabilities as between successive ones of such sub-components to produce segmentation criteria by which said comparing is performed, i.e. performing the above-described, modified Viterbi search); 4) assembling a string of verified ones of such sub-components, e.g.
also via token-classifying means 14 as described in connection with Tables 1 and 2; 5) following the processing-by-comparing and assembling steps 3 and 4, and based upon an analysis of an assembled string of time-connected, verified sub-components, segmenting, or assigning time-based speech-pattern segmentation to, said captured component utilizing a machine-learned (i.e. neural-network-based) capability to relate such time-connected string of verified sub-components to a recognizable speech component, e.g.
via token-reclassifying means 16; and 6) analyzing, or assessing, `~ ao such assigned segmentation to identify the captured component, e.g.
via analyzing means 16a.
An alternative way of describing the method of the invention is to characterize it as a method for identifying a vocalized speech component. The method preferably includes segmenting the speech component to produce a representation of the component characterized by plural, time-based segments, as illustrated in Fig. l; classifying each segment by mapping the same, using a plural target-token vocabulary-based phonetic library to produce a high probability phoneme representation of the component, e.g. by the mapping represented by Tables 1 and 2; and reclassifying the phoneme representation by analyzing, or identifying specific features within, the component based upon the mapping to produce a higher probability identification of the component as corresponding with one token of such target-token library, e.g. via analyzing means 16a.
Those skilled in the arts will appreciate that the target-token library need not be the English-language alphabet, as illustrated herein, but instead may be any desired target library containing tokens sufficiently well defined, e.g. by mapping techn;ques similar to that illustrated in Tables 1 and 2, that the neural-network system and method of the invention can be trained and thereafter used in the recognition of speech the utterances of which are represented in such target-token library. Such target-token libraries may contain foreign language alphabets, as well as non-alphabet tokens such as words or phrases.
Another important step that may augment the above-described steps is the recognition that the probability, or score, produced by neural-network-based system 10 for a given speech component may be higher for silence or noise than it is for a recognizable phoneme or letter, or may indicate that the spoken utterance probably is not a letter at all. This is illustrated in Table 2 as the "SILENCE" and/or "NOISE" output of the letter classifier, and is described in connection therewith as being identified as falling within the "NOT A LETTER" category. This safeguard minimizes the probability of false-positive token identification.
producing plural utterance scores for the ambiguous utterances "K-E-L-L-Y" and "K-E-O-L-Y" (a lower one for "K-E-O-L-Y" and a higher one for "K-E-L-L-Y").
It now may be appreciated that the system and method of the invention produce superior speaker-independent speech recognition results than previously was possible. It also will be aa 20858~2 appreciated that such system and method as are described herein are readily trained to new token libraries, utterance libraries and communication media-based spectral energy characteristics, thus l~nAing them to adaptation in numerous applications that are within the spirit of the invention.
Accordingly, while a preferred system and method of the invention have been described herein, it is appreciated that further modifications are possible that come within the scope of the invention.

Claims (10)

1. For use with speech-recording means for producing a representation of a spoken utterance, a token recognition system comprising:
neural-network-based classifying means coupled with such speech-recording means for classifying successive segments of such utterance representation over a first predetermined time interval into a plurality of phoneme-specific partial probabilities, said neural-network-based classifying means including means for producing one or more phoneme-specific probability indices for said successive segments, and neural-network-based means connected to a plural token library and responsive to said classifying means for reclassifying said successive segments over a second predetermined time interval into one or more recognizable tokens, said reclassifying means operating to effect a high-probability indication of said one or more tokens represented by the recorded utterance.
2. The system of claim 1 which further comprises alignment means for hypothesizing token boundaries, said alignment means including duration-adjusting means for effectively compressing or expanding such utterance representation in time to produce a higher probability indication whether the recorded utterance represents said one or more tokens.
3. A computer-implemented method for recognizing and identifying a vocalized component of speech comprising:
capturing such a component;
dividing a time-domain representation of said captured component into plural time frames embracing sub-components of said captured component;

processing each such sub-component by classifying it into a set of phonetic label scores using a neural network classifier presented with selected features in a temporal window surrounding the time frame;
assembling a string of verified ones of such sub-components;
following said processing and said assembling, and based upon an analysis of an assembled string of time-connected, verified sub-components, assigning time-based speech-pattern segmentation to said captured component utilizing a machine-learned capability to relate such time-connected string of verified sub-components to a recognizable speech component; and assessing such assigned segmentation to identify said captured component.
4. A method for identifying a vocalized speech component comprising:
segmenting the speech component to produce a representation of the component characterized by plural time-based segments;
classifying each segment by mapping the same using a plural target-token vocabulary-based phonetic library to produce a high probability phoneme representation of the component; and reclassifying said phoneme representation by analyzing the component based upon said mapping to produce a higher probability identification of the component as corresponding with one token of such target-token vocabulary.
5. The method of claim 4, wherein said reclassifying includes identifying specific features within one or more determined intervals that are different from said time-based segments to produce token identification, thereby further to discriminate between two or more tokens in such vocabulary.
6. The method of claim 5, wherein plural ones of said token identifications are produced, said token identifications having associated therewith plural distinct probabilities that corresponding plural ones of said token identifications correspond with plural tokens within such predefined token library.
7. The method of claim 5 which further comprises repeating said segmenting, classifying and reclassifying steps for a succession of such speech components to produce a high probability utterance identification of a speech utterance as corresponding with one utterance within a predefined utterance library.
8. The method of claim 7, wherein plural ones of said utterance identifications are produced, said utterance identifications having associated therewith plural distinct probabilities that corresponding plural ones of said utterance identifications correspond with plural utterances within such predefined utterance library.
9. A speech token recognition system comprising:
speech-recording means for producing a representation of a spoken utterance;
neural-network-based phoneme-classifying means coupled with such speech-recording means, said phoneme-classifying means comprehending a given language alphabet-specific set of plural differentiated phonemes, said phoneme set representing a substantial subset of the letters in said alphabet, said phoneme-classifying means producing an hypothesized representation of such spoken utterance having plural phoneme segments spanning a succession of first determined time intervals;
neural-network-based means connected to a language alphabet model library and responsive to said phoneme-classifying means for analyzing said phoneme representation over a succession of second determined time intervals that are different from said first determined time intervals to classify selected ones of said phoneme representations as one of said plural letters in said alphabet to effect a high-probability indication of the succession of said letters represented by such spoken utterance.
10. The method of claim 3, wherein said processing includes first examining the extent of change in token probabilities as between successive ones of such sub-components to produce segmentation criteria by which said comparing is performed.
CA002085842A 1991-12-20 1992-12-18 Neural network-based speech token recognition system and method Expired - Fee Related CA2085842C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/811,819 US5621857A (en) 1991-12-20 1991-12-20 Method and system for identifying and recognizing speech
US07/811,819 1991-12-20

Publications (2)

Publication Number Publication Date
CA2085842A1 CA2085842A1 (en) 1993-06-21
CA2085842C true CA2085842C (en) 1996-05-21

Family

ID=25207682

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002085842A Expired - Fee Related CA2085842C (en) 1991-12-20 1992-12-18 Neural network-based speech token recognition system and method

Country Status (4)

Country Link
US (1) US5621857A (en)
EP (1) EP0549265A2 (en)
AU (1) AU3024892A (en)
CA (1) CA2085842C (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07210190A (en) * 1993-12-30 1995-08-11 Internatl Business Mach Corp <Ibm> Method and system for voice recognition
AU3734395A (en) * 1994-10-03 1996-04-26 Helfgott & Karas, P.C. A database accessing system
US6446038B1 (en) * 1996-04-01 2002-09-03 Qwest Communications International, Inc. Method and system for objectively evaluating speech
US5924066A (en) * 1997-09-26 1999-07-13 U S West, Inc. System and method for classifying a speech signal
US6438523B1 (en) 1998-05-20 2002-08-20 John A. Oberteuffer Processing handwritten and hand-drawn input and speech input
US6269335B1 (en) 1998-08-14 2001-07-31 International Business Machines Corporation Apparatus and methods for identifying homophones among words in a speech recognition system
US6185530B1 (en) 1998-08-14 2001-02-06 International Business Machines Corporation Apparatus and methods for identifying potential acoustic confusibility among words in a speech recognition system
US6192337B1 (en) * 1998-08-14 2001-02-20 International Business Machines Corporation Apparatus and methods for rejecting confusible words during training associated with a speech recognition system
US6304844B1 (en) * 2000-03-30 2001-10-16 Verbaltek, Inc. Spelling speech recognition apparatus and method for communications
WO2001090921A2 (en) * 2000-05-25 2001-11-29 Kanisa, Inc. System and method for automatically classifying text
US6625600B2 (en) 2001-04-12 2003-09-23 Telelogue, Inc. Method and apparatus for automatically processing a user's communication
US7610205B2 (en) * 2002-02-12 2009-10-27 Dolby Laboratories Licensing Corporation High quality time-scaling and pitch-scaling of audio signals
US7461002B2 (en) * 2001-04-13 2008-12-02 Dolby Laboratories Licensing Corporation Method for time aligning audio signals using characterizations based on auditory events
US7711123B2 (en) 2001-04-13 2010-05-04 Dolby Laboratories Licensing Corporation Segmenting audio signals into auditory events
US7283954B2 (en) * 2001-04-13 2007-10-16 Dolby Laboratories Licensing Corporation Comparing audio using characterizations based on auditory events
JP4290997B2 (en) * 2001-05-10 2009-07-08 ドルビー・ラボラトリーズ・ライセンシング・コーポレーション Improving transient efficiency in low bit rate audio coding by reducing pre-noise
US20020172349A1 (en) * 2001-05-17 2002-11-21 Shea Phillip N. Neural net-call progress tone detector
US20030149566A1 (en) * 2002-01-02 2003-08-07 Esther Levin System and method for a spoken language interface to a large database of changing records
KR100462989B1 (en) * 2002-10-07 2004-12-23 엘지전자 주식회사 Method of Managing Voice Token in the Voice Prompt Board
KR100486735B1 (en) * 2003-02-28 2005-05-03 삼성전자주식회사 Method of establishing optimum-partitioned classifed neural network and apparatus and method and apparatus for automatic labeling using optimum-partitioned classifed neural network
US20070171061A1 (en) * 2006-01-13 2007-07-26 Alpha Security Products, Inc. Theft deterrent device with dual sensor assembly
US8255216B2 (en) 2006-10-30 2012-08-28 Nuance Communications, Inc. Speech recognition of character sequences
WO2010003068A1 (en) * 2008-07-03 2010-01-07 The Board Of Trustees Of The University Of Illinois Systems and methods for identifying speech sound features
US20100057452A1 (en) * 2008-08-28 2010-03-04 Microsoft Corporation Speech interfaces
US20130090926A1 (en) * 2011-09-16 2013-04-11 Qualcomm Incorporated Mobile device context information using speech detection
US10438581B2 (en) 2013-07-31 2019-10-08 Google Llc Speech recognition using neural networks
US10733979B2 (en) * 2015-10-09 2020-08-04 Google Llc Latency constraints for acoustic modeling
US10320780B2 (en) * 2016-01-22 2019-06-11 Knowles Electronics, Llc Shared secret voice authentication
CN107123420A (en) * 2016-11-10 2017-09-01 厦门创材健康科技有限公司 Voice recognition system and interaction method thereof
US11100932B2 (en) * 2017-02-10 2021-08-24 Synaptics Incorporated Robust start-end point detection algorithm using neural network
US11853884B2 (en) 2017-02-10 2023-12-26 Synaptics Incorporated Many or one detection classification systems and methods
CA3058433C (en) 2017-03-29 2024-02-20 Google Llc End-to-end text-to-speech conversion
US11081106B2 (en) * 2017-08-25 2021-08-03 Microsoft Technology Licensing, Llc Contextual spoken language understanding in a spoken dialogue system

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4040215A (en) * 1975-05-26 1977-08-09 Totsuka Komuten Co., Ltd. Decay-resisting construction of lower structure for wooden buildings
US4908865A (en) * 1984-12-27 1990-03-13 Texas Instruments Incorporated Speaker independent speech recognition method and system
US4977599A (en) * 1985-05-29 1990-12-11 International Business Machines Corporation Speech recognition employing a set of Markov models that includes Markov models representing transitions to and from silence
JPH0638199B2 (en) * 1985-09-02 1994-05-18 日本電気株式会社 Voice recognizer
JPS63501603A (en) * 1985-10-30 1988-06-16 セントラル インステイチユ−ト フオ ザ デフ Speech processing device and method
US4856067A (en) * 1986-08-21 1989-08-08 Oki Electric Industry Co., Ltd. Speech recognition system wherein the consonantal characteristics of input utterances are extracted
US4852170A (en) * 1986-12-18 1989-07-25 R & D Associates Real time computer speech recognition system
DE3855972T2 (en) * 1987-01-16 1998-03-05 Sharp Kk Speech recorder with compression of speech pauses
US4752179A (en) * 1987-01-27 1988-06-21 Cascade Corporation Push-pull load handler for forklift truck
US4905285A (en) * 1987-04-03 1990-02-27 American Telephone And Telegraph Company, At&T Bell Laboratories Analysis arrangement based on a model of human neural responses
US4937872A (en) * 1987-04-03 1990-06-26 American Telephone And Telegraph Company Neural computation by time concentration
US5121428A (en) * 1988-01-20 1992-06-09 Ricoh Company, Ltd. Speaker verification system
JP2739950B2 (en) * 1988-03-31 1998-04-15 株式会社東芝 Pattern recognition device
US5278911A (en) * 1989-05-18 1994-01-11 Smiths Industries Public Limited Company Speech recognition using a neural net
US5212730A (en) * 1991-07-01 1993-05-18 Texas Instruments Incorporated Voice recognition of proper names using text-derived recognition models
US5263097A (en) * 1991-07-24 1993-11-16 Texas Instruments Incorporated Parameter normalized features for classification procedures, systems and methods

Also Published As

Publication number Publication date
US5621857A (en) 1997-04-15
CA2085842A1 (en) 1993-06-21
EP0549265A3 (en) 1994-01-26
AU3024892A (en) 1993-06-24
EP0549265A2 (en) 1993-06-30

Similar Documents

Publication Publication Date Title
CA2085842C (en) Neural network-based speech token recognition system and method
Anusuya et al. Speech recognition by machine, a review
US8566088B2 (en) System and method for automatic speech to text conversion
Schultz Speaker characteristics
Samantaray et al. A novel approach of speech emotion recognition with prosody, quality and derived features using SVM classifier for a class of North-Eastern Languages
CN114550706A (en) Smart campus voice recognition method based on deep learning
Agrawal et al. Speech emotion recognition of Hindi speech using statistical and machine learning techniques
Kamble et al. Emotion recognition for instantaneous Marathi spoken words
Mengistu Automatic text independent amharic language speaker recognition in noisy environment using hybrid approaches of LPCC, MFCC and GFCC
Fanty et al. English alphabet recognition with telephone speech
Rao et al. Language identification using excitation source features
Manjunath et al. Automatic phonetic transcription for read, extempore and conversation speech for an Indian language: Bengali
Barnard et al. Real-world speech recognition with neural networks
Safavi Speaker characterization using adult and children’s speech
Alqadheeb et al. Correct pronunciation detection for classical Arabic phonemes using deep learning
Cole et al. A prototype voice-response questionnaire for the us census.
Tverdokhleb et al. Implementation of accent recognition methods subsystem for eLearning systems
Yousfi et al. Isolated Iqlab checking rules based on speech recognition system
Barczewska et al. Detection of disfluencies in speech signal
Benıtez et al. Different confidence measures for word verification in speech recognition
US7454337B1 (en) Method of modeling single data class from multi-class data
Lleida et al. Speaker and language recognition and characterization: introduction to the CSL special issue
Tsiaras et al. Video and audio based detection of filled hesitation pauses in classroom lectures
Hoseini Persian speech emotion recognition approach based on multilayer perceptron
Chang Improving wordspotting performance with limited training data

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed