CA2105034C - Speaker verification with cohort normalized scoring - Google Patents

Speaker verification with cohort normalized scoring

Info

Publication number
CA2105034C
CA2105034C CA002105034A CA2105034A
Authority
CA
Canada
Prior art keywords
subscriber
likelihood
signals
utterance
call
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002105034A
Other languages
French (fr)
Other versions
CA2105034A1 (en)
Inventor
Biing-Hwang Juang
Chin-Hui Lee
Aaron Edward Rosenberg
Frank Kao-Ping Soong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc
Publication of CA2105034A1
Application granted
Publication of CA2105034C
Anticipated expiration
Expired - Fee Related (current)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification
    • G10L17/06 Decision making techniques; Pattern matching strategies
    • G10L17/12 Score normalisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/38 Graded-service arrangements, i.e. some subscribers prevented from establishing certain connections
    • H04M3/382 Graded-service arrangements, i.e. some subscribers prevented from establishing certain connections using authorisation codes or passwords
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42204 Arrangements at the exchange for service or number selection by voice
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 Hidden Markov Models [HMMs]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M15/00 Arrangements for metering, time-control or time indication; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition

Abstract

A facility is provided for allowing a caller to place a telephone call by merely uttering a label identifying a desired called destination and to charge the telephone call to a particular billing account by merely uttering a label identifying that account. Alternatively, the caller may place the call by dialing or uttering the telephone number of the called destination or by entering a speed dial code associated with that telephone number. The facility includes a speaker verification system which employs cohort normalized scoring. Cohort normalized scoring provides a dynamic threshold for the verification process, making the process more robust to variation in training and verification utterances. Such variation may be caused by, e.g., changes in communication channel characteristics or speaker loudness level.

Description

SPEAKER VERIFICATION WITH COHORT NORMALIZED SCORING

Field of the Invention

The invention relates generally to speech processing, and more specifically to the field of speaker verification.

Background of the Invention

Telephone credit or calling cards, although convenient, are susceptible to being compromised by potential unauthorized users. Indeed, fraudulent use of such cards has become a serious problem. To combat such fraudulent use, a telecommunications system may employ an automatic speaker verification system.
A speaker verification system recognizes an individual by verifying a claim of identity provided by the individual through an analysis of spoken utterances. In the context of a telecommunications system, speaker verification may be employed to verify the identity of a caller who is charging a call to a credit or calling card.
Generally, a speaker verification system operates by comparing extracted features of an utterance received from an individual who claims a certain identity to one or more prototypes of speech based on (or "trained" by) utterances provided by the identified person.
A problem frequently encountered in speaker verification in the telecommunications context is that a person who has trained a verification system does not always "sound the same" when undertaking a verification trial. Changes in a person's "sound" over time may be caused by, e.g., changes in the characteristics of the telecommunications channel carrying the person's voice signals. These changes may be caused by no more than the use of different telephones for the training process and the verification trial. Naturally, such changes degrade verification system performance. Because of sensitivity to changing channel characteristics, or even a speaker's loudness level, verification system performance may degrade to unacceptable levels.

Summary of the Invention

Automatic speech processing can be used to provide a subscriber of a telecommunications system with a number of enhanced functionalities, including a speaker verification functionality. An embodiment of the present invention provides a telecommunications system platform for, inter alia, the verification of a subscriber's identity based on analysis of subscriber speech utterances. The platform allows a caller to claim a subscriber's identity via, e.g., an associated telephone station keypad. The platform then verifies the identity of the caller to be that of the identified subscriber.
The illustrative platform accomplishes speaker verification with the use of cohort normalized scoring employing hidden Markov models. A cohort is a set of subscribers whose hidden Markov models are, e.g., similar to the hidden Markov models of the subscriber whose identity is claimed. In accordance with the invention, an utterance supplied by a caller purported to be spoken by a claimed subscriber is scored against hidden Markov models trained by that subscriber and models trained by each member of the cohort. A statistic of the cohort scores is formed. The speaker's score is normalized by the formation of a ratio of the score for the purported speaker and the statistic of the cohort scores (or by the formation of a difference of their logarithms). This normalization provides a dynamic threshold which makes verification scoring more robust to variation in training and verification utterances caused by, e.g., changes in communication channel characteristics or speaker loudness level.
In accordance with one aspect of the invention there is provided a method of verifying a claim of identity made by an individual based on a signal representing an utterance provided by the individual, the method comprising the steps of: a. analysing the signal representing the utterance to form a plurality of feature signals characterizing the utterance; b. forming a first likelihood signal, based on the plurality of feature signals and one or more hidden Markov models trained using utterances spoken by a person whose identity is claimed, the first likelihood signal reflecting a probability that the individual's utterance was spoken by the person whose identity is claimed; c. forming one or more other likelihood signals based on the plurality of feature signals and one or more hidden Markov models trained using utterances spoken by a set of one or more other speakers who are acoustically similar to the person whose identity is claimed, said one or more other speakers who are acoustically similar to the person whose identity is claimed having been selected from a universe of speakers based on an acoustic similarity criterion, the criterion based on the one or more hidden Markov models of the person whose identity is claimed, the universe of speakers including one or more speakers who are not acoustically similar to the person whose identity is claimed based on the acoustic similarity criterion, the other likelihood signals reflecting probabilities that the utterance was spoken by said one or more other speakers who are acoustically similar to the person whose identity is claimed; and d. forming a verification signal, based on the first likelihood signal and one or more of the other likelihood signals, and not based on hidden Markov models trained using utterances spoken by at least one of said one or more speakers who are not acoustically similar to the person whose identity is claimed, said verification signal indicating whether the individual is the person whose identity is claimed.

Brief Description of the Drawings

In the FIGs:
FIG. 1 is a block diagram of a communications system in which the principles of the invention may be practised;
FIG. 2 is an illustrative example of linking particular subscriber records to one another;
FIG. 3 is a flow chart of the operation of a subscription identification unit and a caller identification unit as it concerns the analysis of utterances for purposes of speaker verification;
FIGs. 4a-d present a flow chart of the operation of a speech verification system as it concerns training of hidden Markov models and the selection of a cohort for a speaker;
FIG. 5 is a flow chart of the operation of the speech verification system as it concerns speaker verification using cohorts;
FIGs. 6-8 present a flow chart of a program which implements the invention in the system of FIG. 1;
FIG. 9 shows the manner in which FIGs. 6-8 should be arranged with respect to one another;


FIG. 10 is a layout of a subscriber record that is stored in the reference database of FIG. 1;
FIG. 11 shows the layout of the customer profile database of FIG. 1;
FIGs. 12 and 13 show a block diagram of an alternative embodiment of the communications system of FIG. 1; and FIG. 14 shows the manner in which FIGs. 12 and 13 should be arranged with respect to one another;
FIG. 15 shows the manner in which FIGs. 4a-d should be arranged with respect to one another.

Detailed Description

Voice Directed Communications System (VDCS) 100 shown in FIG. 1 includes a number of functionalities which operate in concert with one another to, inter alia, recognize a caller from the caller's speech signals received via communications path 11 or 12. Such recognition is based on comparing features of received speech signals with a model for such signals that was originally constructed at the time the caller subscribed to the functionalities (or services) of VDCS 100.
In particular, a telephone user, e.g., the user associated with station S1, may subscribe to the services offered by VDCS 100 by dialing a predetermined subscription telephone number, e.g., 1-800-826-5555. When the user dials the last digit forming that number, Central Office (CO) 225 associates the dialed number with public communications network 200 and extends the call thereto via path 226. In doing so, CO 225 sends the dialed (called) telephone number as well as the calling telephone number to network 200. Network 200, which may be, for example, the AT&T public switched network, extends the call connection, in a conventional manner, to a network 200 destination switch (not shown) that connects to VDCS 100. The network 200 destination switch, in turn, extends the call via a selected one of trunks 11 and 12 connecting to switch 10 of VDCS 100. The destination switch then supplies the calling and called telephone numbers to switch 10 via the selected trunk.
Switch 10, in response to the incoming call, sends a message containing, inter alia, the calling and called telephone numbers and identity of the selected trunk to host processor 5 via bus 6. In an illustrative embodiment of the invention, host processor 5 and switch 10 may be, for example, the StarServer FT processor available from AT&T and the model SDS 1000 switch available from Summa Four, respectively.

Host processor 5, responsive to the switch 10 message containing the associated subscription telephone number, directs switch 10 via bus 6 to establish a connection between the incoming call connection and one of a plurality of attendant (operator) positions 15, one of which is shown in the FIG. At that point, the attendant 15 may communicate with the station S1 caller to acquire information from the caller relating to the billing and processing of calls that the caller or "new subscriber" will subsequently place via system 100. Such information may include, for example, the caller's name, address, billing account, etc. As a result of such communication, the caller is assigned an account code comprising a predetermined number of digits, some of which, e.g., the first seven digits, may be selected by the caller. The remaining digits of the account code are selected by system 100 and are used as so-called "check" digits.
The new subscriber's account code may be a code that has already been assigned to another subscriber. If that is the case, then host 5 causes that fact to be displayed on terminal 15. That is, terminal 15 displays on its display the service profile, or record, associated with a particular one of the other subscribers that will be sharing the same account code with the new subscriber. At that point, the attendant may change the latter service record so that it (a) indicates that the associated account code is being shared with the new subscriber, (b) points to the new subscriber's service record and (c) contains the new subscriber's identity. The new subscriber's record is similarly arranged so that it points to the latter service record. An illustrative example of such linking is shown in FIG. 2.
Briefly, the service records in a chain are linked to one another by storing in each such record the address of the next record in the chain and the address of the previous record in the chain. Each of the records 61-2 through 61-N thus contains the address, e.g., pointer 62-2, of the next record in the chain and, except for head record 61-1, the address, e.g., pointer 63-2, of the previous record in the chain. The way in which system 100 identifies a caller that is sharing an account code with one or more other subscribers is discussed below.
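By way of illustration only, the record chaining of FIG. 2 can be sketched as a doubly linked list. The Python fragment below is not part of the patented embodiment; the class and field names (e.g., next_record, prev_record) are assumptions standing in for the pointers 62-x and 63-x:

    class ServiceRecord:
        """One subscriber's service profile in a shared-account chain."""
        def __init__(self, subscriber_identity):
            self.subscriber_identity = subscriber_identity
            self.next_record = None  # analogous to pointer 62-x (next record)
            self.prev_record = None  # analogous to pointer 63-x (previous record)

    def append_to_chain(head_record, new_record):
        # Walk from the head record 61-1 to the tail of the chain, then link
        # the new subscriber's record in both directions.
        tail = head_record
        while tail.next_record is not None:
            tail = tail.next_record
        tail.next_record = new_record
        new_record.prev_record = tail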
As a further result of such communication, the new subscriber may define a number of voice-identified calling labels and associate the labels with respective telephone numbers. For example, the subscriber may associate the label (a) "call home" with the subscriber's home telephone number, (b) "call office" with the subscriber's work or office telephone number, (c) "call Dad" with the subscriber's father's telephone number, etc. Thereafter, and as will be explained below, when the subscriber places a call to system 100 for the purpose of placing a call to a particular location, e.g., "home", then all that the subscriber needs to do in response to a particular system 100 request is to say "call home". System 100, in response thereto, associates the spoken identifier "call home" with the subscriber's home telephone number and then places an outgoing telephone call thereto via switch 10 and network 200. System 100 then causes switch 10 to interconnect the outgoing call with the subscriber's incoming call.
The new subscriber may also associate particular telephone numbers with respective billing accounts. For example, the new subscriber may specify that all telephone calls that the new subscriber places to his/her office via system 100 are to be billed to a particular billing account, for example, a credit card account. As another example, the subscriber may specify that all telephone calls that the new subscriber places to a business associate via system 100 are to be billed to another account, e.g., a business telephone number. The new subscriber may also specify a default billing account for all other calls that the subscriber places via system 100, in which case the default billing account may be the new subscriber's system 100 service number or home telephone number.
The new subscriber may also specify voice-identified billing labels which may or may not be tied to a particular telephone number(s), but which the subscriber may use to override default or predefined billing. Specifically, such a billing label may be, for example, the name of a credit card service, such as VISA; a calling card service, such as AT&T; or a particular telephone number. For example, assume that the subscriber has tied billing of office calls to an AT&T calling card number and has specified VISA as a billing label. Thereafter, the subscriber may place a telephone call to his/her office and, if desired, override the predefined AT&T
calling card billing for the call by saying "bill VISA" following the entering of the subscriber's office telephone number. More particularly, if the subscriber has specified a calling label for the office telephone number, then the subscriber may initiate a call to his office telephone by saying "call office". The subscriber then pauses for a predetermined duration, e.g., at least one second, to disassociate the calling label from the billing label. At the end of one second, the subscriber may then say "bill VISA" to override the predefined billing for the office call.
System 100, in response thereto, (a) translates the identifier "call office"
into the subscriber's office telephone number, (b) places an outbound call to that number via switch 10 and network 200, and (c) connects the subscriber's call to the outbound call. Similarly, as a result of the pause, system 100 associates the identifier "bill VISA" with a billing function and overrides the predefined, or default, billing priorly specified for the call. Accordingly, system 100 bills the call to the subscriber's VISA account.
The aforementioned pause may be eliminated in all but a few cases by employing speaker independent "word spotting" for the words "call" (or "dial") and "bill" (or "charge"). Accordingly, when system 100 spots the spoken word "call"
("dial") it cl~sifies that word and the following speech signals as a calling label. If system 100 then spots the spoken word "bill" ("charge") it cl~ifies that word and the following speech signals as an overriding billing label, as will be discussed below.
The new subscriber may associate a particular speed dialing code with a telephone number. Illustratively, a speed dialing code is terminated by a predetermined suffix, e.g., the pound sign (#). For example, the new subscriber may specify 1# and 2# as the speed dialing codes for respective telephone numbers, e.g., 1-800-555-1212 and 1-908-555-1212, respectively.
When the attendant has collected and entered the new subscriber's subscription information, including the aforementioned labels (if any), in terminal 15 (FIG. 1), the attendant then supplies the entered information to controller 25 for delivery to controller 55 via Local Area Network (LAN) 30. In the illustrative embodiment of the invention, LAN 30 may be, for example, the well-known Ethernet network.
Controller 25, more particularly, forms the subscription information into a message addressed to controller 55 and transmits the message over LAN 30.
Controller 55, in turn, removes the message from LAN 30 and forms the contents thereof into a subscription profile (or record) associated with the new subscriber and stores the profile in customer profile database 60 in the manner discussed below. (If the new subscriber's account code is a shared code, then controller 55 effectively "links" the new subscriber's profile record to the profile record associated with the subscriber that is sharing the account code with the new subscriber.) At this point in the subscription process, the new subscriber's account code, various labels, associated telephone numbers, speed dial codes, etc., are stored in database 60 as ASCII text.
As a last step in the subscription process, the attendant instructs the new subscriber how to register particular speech utterances characterizing the new subscriber's account code, as well as the aforementioned labels, so that the registration may be used thereafter to verify the new subscriber's identity and verbal requests. To that end, the attendant sends to host 5, via terminal 15 and LAN 30, a request to invoke a voice registration session, in which the request contains the subscriber's account code and labels. Host 5, in response thereto, causes switch 10 to bridge Subscription Identification Unit (SIU) 21 onto the switch 10 connection between the subscriber's incoming call connection and attendant terminal 15 to perform a registration process for the new subscriber. In addition, host 5 supplies via LAN 30 the new subscriber's account code and labels to SIU 21.
SIU 21, in particular, includes, inter alia, a number of digital signal processors, such as the AT&T DSP 32, which operate to perform a number of different voice processing functions including, inter alia, automatic speech recognition, and text-to-speech processing for generating voice prompts and verbal facsimiles of the new subscriber's labels. The automatic speech recognition function, more particularly, performs connected digit recognition and an analysis of the subscriber's utterances providing feature vectors of autocorrelation coefficients using techniques that are well-known in the speech recognition art.
Regarding the analysis of utterances by a new subscriber, SIU 21 operates according to the flow diagram presented in FIG. 3. As shown at step 600 of FIG. 3, SIU 21 prompts the new subscriber to say one of a predetermined number of different strings of digits (e.g., eleven strings announced one string at a time), each of which comprises a predetermined number of digits, e.g., five digits. For example, SIU 21 prompts the new subscriber to say the digit string 0,1,0,1,2. When the new subscriber responds by saying the completed string, SIU 21 collects the new subscriber's utterances for further processing.
As shown at step 603 of FIG. 3, SIU 21 performs a format conversion of the new subscriber's string utterance from 8-bit µ-law pulse-code modulated (PCM) digital samples (the digital format of a subscriber's utterance as provided from the public communications network 200) to a signal of 16-bit linear PCM digital samples. SIU 21 then preemphasizes the digital signal, as shown at step 605, by implementing a first-order difference filter well known in the art.
Feature vectors characterizing time-slices of the preemphasized signal are then formed by SIU 21, as shown at step 610. The time-slices are provided by a 45 millisecond (ms) Hamming window which is shifted every 15 ms (thus, a given 45 ms time-slice overlaps adjacent time-slices by 30 ms). Each time-slice is the basis of a vector of 10th order autocorrelation coefficients representing the time slice.
This vector is referred to as a feature vector. The feature analysis performed by SIU 21 is well-known in the art and is described in further detail by C.-H. Lee, et al., Acoustic Modeling for Large Vocabulary Speech Recognition, 4 Computer Speech and Language 127-65 (1990), which is hereby incorporated by reference as if set forth fully herein.
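By way of illustration, steps 605 and 610 can be sketched as follows. The sketch assumes an 8 kHz sampling rate and a preemphasis coefficient of 0.95; the patent specifies only a first-order difference filter, so both constants are assumptions:

    import numpy as np

    def autocorrelation_features(samples, fs=8000, win_ms=45, shift_ms=15, order=10):
        # samples: 1-D numpy array of 16-bit linear PCM values, as floats.
        # Step 605: first-order difference (preemphasis) filter.
        s = np.append(samples[0], samples[1:] - 0.95 * samples[:-1])
        # Step 610: 45 ms Hamming windows shifted every 15 ms (30 ms overlap),
        # each reduced to a vector of 10th order autocorrelation coefficients.
        win, shift = int(fs * win_ms / 1000), int(fs * shift_ms / 1000)
        window = np.hamming(win)
        vectors = []
        for start in range(0, len(s) - win + 1, shift):
            frame = s[start:start + win] * window
            acf = np.correlate(frame, frame, mode="full")
            vectors.append(acf[win - 1:win + order])  # lags 0 .. order
        return np.array(vectors)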
SIU 21 then segments the feature vectors into sets of vectors representing the respective digits of the utterance, as shown at step 615.
Segmentation of the subscriber's utterances into digits is performed using speaker-independent recognition of digit and non-speech signals well-known in the art.
SIU 21 passes the resulting segmented autocorrelation coefficient feature vectors, the ASCII representation of the corresponding digit string, and the subscriber's account code to controller 45 for further signal processing as will be explained below (see step 620). As shown at step 625, the above process is repeated for each of the balance of the string utterances prompted at step 600.
In addition to providing a basis to accomplish speaker verification, the registration procedure also involves an analysis of utterances to provide a basis for the recognition of subscriber labels. For example, assume that the new subscriber specified (a) "call home" and "call office" as calling labels and (b) "bill VISA" as an overriding billing label. SIU 21 registers the new subscriber's voice representation of those labels by passing the ASCII equivalent (text version) of the first calling label ("call home") through a text-to-speech processor and transmitting the result to the subscriber along with a request to verbally repeat the label. Responsive to the subscriber's utterance of that label, SIU 21 segments the subscriber's speech signals into a series of subword unit phonemes characterizing the internal label and associates each such subword unit phoneme with a particular index value, thereby forming a series of indices, or numbers. Thus, a particular utterance of a label is modeled as a series of indices and stored in memory. Thereafter, the particular utterance may be interpreted by generating such a series of subword indices for the utterance, and comparing the generated series with each priorly stored series of indices characterizing respective labels. The stored series that compares with the generated series will then point to the telephone number, or billing account, identified by the utterance. SIU 21 then passes the resulting ASCII indices, the corresponding ASCII representation of the voice label and subscriber's account code to controller 45 via LAN 30. SIU 21 then repeats the foregoing process for each of the subscriber's other labels.
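The patent does not specify the metric used to compare a generated series of subword indices with the stored series; the sketch below assumes a simple edit-distance comparison, which is one plausible choice:

    def edit_distance(a, b):
        # Dynamic-programming edit distance between two subword-index series.
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            cur = [i]
            for j, y in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
            prev = cur
        return prev[-1]

    def interpret_label(generated_indices, stored_labels):
        # stored_labels maps a label's text (e.g., "call home") to its stored
        # series of subword indices; the closest stored series wins.
        return min(stored_labels,
                   key=lambda lbl: edit_distance(generated_indices, stored_labels[lbl]))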
At this point, SIU 21 has essentially completed its part in the registration process. However, before returning control of the subscriber's telephone call to attendant 15, SIU 21 waits for a confirmation message from controller 45 indicating that voice verification models and feature vectors of the subscriber's speech utterances have been stored in database 50.
Controller 45, more particularly, upon receipt of the ASCII representation of the last string of digits and resulting feature vectors of autocorrelation coefficients, passes those feature vectors and their respective digit strings to SVS (Speech Verification System) 40 for the determination of hidden Markov models and the selection of a "cohort" for the new subscriber. SVS 40, which may be, for example, the DSP 3 system available from AT&T, includes Real Time Host (RTH) controller 41 and a plurality of DSPs (Digital Signal Processors) 42-1 through 42-P. Illustratively, P=128. RTH 41 serves as an interface between DSPs 42-1 through 42-P and an external processor, such as controller 45, such that, upon receipt of a speech processing request, RTH 41 determines which one of the DSPs 42-1 through 42-P is available (idle) and passes the request thereto along with the accompanying data. Assuming that DSP 42-1 is idle, then RTH 41 passes the eleven digit strings and corresponding feature vectors of autocorrelation coefficients to DSP 42-1 for processing.
A flow diagram of illustrative processes by which DSP 42-1 generates HMMs and selects a cohort is presented in FIGs. 4a-d.
As shown in FIG. 4a at step 501, DSP 42-1 converts feature vectors of autocorrelation coefficients into feature vectors of 12 cepstral and 12 delta-cepstral coefficients. The 12 delta-cepstral coefficients are calculated by fitting a regression line to a sequence of five cepstral coefficients centered around each current cepstral coefficient. As before with the feature vectors of autocorrelation coefficients, each feature vector of cepstral coefficients characterizes a 15 ms time-slice of a subscriber's utterance. The above-described conversion of autocorrelation coefficients to cepstral coefficients is well-known in the art and is described in further detail in the above incorporated reference by C.-H. Lee, et al., Acoustic Modeling for Large Vocabulary Speech Recognition, 4 Computer Speech and Language 127-65 (1990).
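A minimal sketch of the delta-cepstral computation, assuming the five-vector regression window spans two frames on each side of the current frame and that edge frames are handled by repetition (an assumption the text does not address):

    import numpy as np

    def delta_cepstra(cepstra, width=2):
        # cepstra: T x 12 array of cepstral vectors, one row per 15 ms frame.
        # Fit a regression line to five consecutive cepstral vectors centered
        # on each frame; the slope of that line is the delta-cepstral vector.
        padded = np.pad(cepstra, ((width, width), (0, 0)), mode="edge")
        lags = np.arange(-width, width + 1, dtype=float)  # [-2, -1, 0, 1, 2]
        denom = np.sum(lags ** 2)                         # least-squares normalizer
        deltas = np.empty_like(cepstra, dtype=float)
        for t in range(cepstra.shape[0]):
            deltas[t] = lags @ padded[t:t + 2 * width + 1] / denom
        return deltas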
Next, as shown at step 505, DSP 42-1 generates a so-called Hidden Markov Model (HMM) characterizing the new subscriber's utterance of the digit zero, in which the HMM is generated based on the associated cepstral coefficient feature vector characterizing the digit zero for each occurrence of that digit in the eleven strings of digits. Similarly, DSP 42-1 generates an HMM characterizing each of the other digits, i.e., digits 1 through 9 and the digit zero as "oh", if present.

In the illustrative embodiment of the present invention, each digit is represented by an eight state HMM. The nominal number of mixture components per state, M, is six, but the actual number can be less depending on the number of feature vectors segmented in each state.
In addition to the HMMs mentioned above, HMMs representing two kinds of non-speech segments are provided by the embodiment. These are a one state "silence" model, trained from background segments in the registration procedure, and a three state artifact model, trained from speaker generated non-speech sounds, such as "puff" and "click" sounds.
Each HMM generated by DSP 42-1 is a left-to-right continuous density HMM of the type described in United States Patent No. 4,783,804, commonly assigned herewith and incorporated by reference as if fully set forth herein. The spectral observation probability for each state of an HMM is a continuous density probability function specified as a mixture of M Gaussian densities. The parameters of the mth mixture component for the jth state of an HMM are the mean vector, μ_jm, the covariance matrix, U_jm, and the mixture component weight, c_jm. Matrix U_jm is a fixed diagonal covariance matrix. The state-transition probabilities, a_ij, of an HMM are fixed such that the probabilities for remaining in the same state and advancing to the next state are equal.
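By way of illustration, the per-state observation likelihood can be sketched as below, computed in the log domain for numerical stability (the log-domain evaluation and the function and argument names are assumptions, not part of the patent):

    import numpy as np

    def state_log_likelihood(o, means, diag_covs, weights):
        # log b_j(o) for one HMM state: a weighted sum of M Gaussian densities
        # with fixed diagonal covariances. means, diag_covs: M x D; weights: M.
        d = o.shape[0]
        log_norm = -0.5 * (d * np.log(2.0 * np.pi) + np.sum(np.log(diag_covs), axis=1))
        exponent = -0.5 * np.sum((o - means) ** 2 / diag_covs, axis=1)
        component_logs = np.log(weights) + log_norm + exponent
        peak = component_logs.max()                    # stable log-sum-exp
        return peak + np.log(np.sum(np.exp(component_logs - peak)))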
DSP 42-1 estimates HMM parameters (at step 505) by a conventional segmental K-means training procedure, such as that described by Rabiner et al., A Segmental K-Means Training Procedure for Connected Word Recognition, Vol. 65, No. 3 AT&T Technical Journal 21-31 (May-June 1986), which is hereby incorporated by reference as if fully set forth herein. In addition to the model parameters, it is preferred that word duration means and variances be calculated from the subscriber's registration utterances for use in word duration penalties applied to verification scores.
These means and variances may be determined for each word contained in the training utterances. The determination of such means and variances is done in the conventional fashion, such as that described by Rabiner, Wilpon, and Juang, A Model Based Connected-Digit Recognition System Using Either Hidden Markov Models or Templates, 1 Computer Speech and Language 167-97 (1986), which is hereby incorporated by reference as if set forth fully herein. These mean and variance statistics are stored as part of the training data.


Once HMMs for the new subscriber have been determined, a "cohort" for that subscriber is determined by DSP 42-1. A cohort is a set of other subscribers whose HMMs are used in the verification process for the given subscriber. These other subscribers are selected according to a cohort selection criterion. As will be discussed below, HMMs associated with a given subscriber's cohort are used to provide a speaker verification process which is more robust to changes in, e.g., the characteristics of the channel through which registration and verification utterances are communicated or overall vocal effort (or "loudness").
For example, in registering with VDCS 100, a new subscriber may use a home telephone station, S1, having a "carbon button" microphone. Such microphones have a frequency response characteristic which acts as a filter of the speech utterance communicated to the central office 225 (and eventually VDCS 100). However, when providing utterances for purposes of speaker verification as part of VDCS 100 use, the subscriber may use another station having a different microphone, e.g., an electret. The frequency response of an electret microphone differs significantly from that of a carbon button microphone, therefore providing a different filtering effect to the speech utterance of the subscriber. Under such circumstances, the characteristics of the channel -- which include the microphone characteristics -- through which the subscriber's utterances are communicated may change significantly. The accuracy of a speaker verification system in terms of both true speaker rejection rate (the so-called "Type I" error) and imposter acceptance rate (the so-called "Type II" error) degrades when registration (i.e., training) and verification utterances are exposed to disparate channel characteristics. The use of cohorts in the verification process helps alleviate the accuracy problems caused by such disparate channel characteristics.
According to an illustrative cohort selection criterion, a cohort is a set of K other subscribers whose HMMs (previously determined) are closest to or "most competitive" with those of the subscriber in question. A cohort for a new subscriber may be determined by DSP 42-1 through pair-wise comparisons of the subscriber's registration utterances to the HMMs of each of a plurality of (e.g., all) other subscribers, and vice-versa, using conventional Viterbi scoring.
The cohort for the new subscriber is determined as shown at steps 507-575 of FIGs. 4a-d. At step 507, a counter for keeping track of previously registered subscribers (i.e., so that their utterances and HMMs may be addressed) is initialized. The counter therefore points to the first of a plurality of previously registered subscribers to be considered for membership in a new subscriber's cohort.

Next, at step 510, the HMMs and training utterances for the first previously registered subscriber to be considered are retrieved from database 50. The retrieved HMMs are for use in comparison to training utterances of the new subscriber. The retrieved training utterances are for use in comparison to the HMMs of the new subscriber.
The comparison of HMMs to training utterances of the new subscriber begins by the initialization of a counter at step 512 in FIG. 4b. This counter points to the first new subscriber utterance to be compared with the HMMs of the previously registered subscriber. Conventional Viterbi scoring is used for this comparison at step 515. The scoring measures the likelihood of the utterance given the HMMs of the previously registered subscriber. The previously registered subscriber's HMMs are selected for use in the scoring process based on an ASCII representation of the new subscriber utterance as provided by SIU 21. At step 520, the score for a given utterance produced at step 515 is divided by the number of feature vectors which make up the utterance. This division produces a first normalized score. The first normalized score is saved in an accumulator of DSP 42-1 at step 525 for later use.
Steps 515, 520, and 525 are repeated for each training utterance of the new subscriber until the last such utterance has been scored, as determined by decision step 530. When all the training utterances have been scored, the total accumulated first normalized score determined at step 525 is divided by the total number of scored utterances at step 535 to form a first average score.
Process steps 538-560 are next performed as shown in FIG. 4c. These steps are similar to steps 512-535 discussed above. Steps 538-560 determine a second average score based on comparisons of the training utterances of the previously registered subscriber in question and HMMs of the new subscriber.
ASCII representations of previously registered subscriber utterances from database 50 are used to select HMMs of the new subscriber for the comparison.
After steps 538-560 are performed, a total average score for the previously registered subscriber is determined at step 565 of FIG. 4d based on an average of the first and second average scores (determined at steps 535 and 560, respectively).
The whole process described above at steps 510-565 is repeated for each of the plurality of previously registered subscribers under the control of decision step 570. Once a total average score has been determined for each of the previously registered subscribers, a cohort may be selected for the new subscriber. The cohort is selected at step 575 as the K previously registered subscribers having the highest total average scores. Illustratively, K=5.
Note that in the embodiment described above and presented in FIGs. 4a-d, the number of utterances of each digit is equal, such that the selection of a cohort is not biased in favor of certain digit utterances.
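The selection at step 575 reduces to a top-K choice over the total average scores formed at step 565. A minimal sketch (the dictionary layout is an assumption):

    def select_cohort(total_average_scores, k=5):
        # total_average_scores: {subscriber_id: total average score, step 565}.
        # The cohort is the K previously registered subscribers whose models
        # are "most competitive", i.e., have the highest total average scores.
        ranked = sorted(total_average_scores, key=total_average_scores.get, reverse=True)
        return ranked[:k]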
It should be understood that the above described cohort selection process is merely illustrative. Other techniques for forming cohorts of a speaker are possible, including techniques which operate on a word-by-word, rather than a speaker-by-speaker, basis. One illustrative word-by-word cohort selection technique is similar to that described above, except that averages are formed for each word rather than on an average speaker utterance basis. Naturally, word-by-word cohort techniques may require more storage space as there is a distinct cohort for each word of a speaker, rather than for each speaker. Another cohort selection technique involves designating a cohort based on a random selection of previously registered subscribers.
An alternative to the above-described cohort selection techniques is one which employs a direct comparison of the HMMs of the speaker in question, e.g., the new subscriber, with the HMMs of potential cohort members. This direct comparison of HMMs may be performed on a word-by-word or speaker-by-speaker basis. Because no speaker utterances are involved in the determination of the cohort, this technique is less computationally intensive and has lower storage requirements (since utterance data need not be stored) than the techniques discussed above.
As discussed in U.S. Patent No. 4,783,804 incorporated by reference above, the observation likelihood for a state in a continuous density HMM of a new subscriber may be characterized as a weighted sum of normal Gaussian densities:

    b_j^new(O_t) = Σ_{m=1}^{M^new} c_jm^new · N(O_t; μ_jm^new, U_jm^new)    (1)

where b_j^new(O_t) is the likelihood of an observation, O, at a time t at state j of the HMM; N is a normal Gaussian density function; c_jm^new is the "mixture" weight for the jth state and mth mixture component; μ_jm^new is the mean of the feature vectors provided during training; and U_jm^new is a covariance matrix for the feature vectors from training.
Similarly, the observation likelihood for a state in a continuous density HMM of a previously registered subscriber may be characterized as:

    b_k^pre(O_t) = Σ_{l=1}^{M^pre} c_kl^pre · N(O_t; μ_kl^pre, U_kl^pre)    (2)

To determine a log likelihood similarity measure, R, between two HMMs of new and previously registered subscribers, the kth state of the previously registered subscriber's HMM and the jth state of the new subscriber's HMM are compared by substituting μ_kl^pre for O_t in expression (1), and using c_kl^pre as a weighted representation of the prominence (or the number) of training feature vectors represented by μ_kl^pre. To wit:

    R_kj^{pre,new} = Σ_{l=1}^{M^pre} c_kl^pre · log Σ_{m=1}^{M^new} c_jm^new · N(μ_kl^pre; μ_jm^new, U_jm^new)    (3)

To determine an overall similarity measure between the "new" and "pre" HMMs, the state-to-state similarity measures, R_kj^{pre,new}, are accumulated over an optimal alignment of "pre" states to "new" states, as follows:

    R^{pre,new} = (1/J^new) · Σ_{j=1}^{J^new} R_{k(j),j}^{pre,new}    (4)

where k(j) represents an optimal mapping of the "pre" state k to the "new" state j and J^new is the number of states in the new subscriber's HMM (e.g., J^new = 8). This optimal mapping is achieved in conventional fashion using a dynamic programming alignment with Itakura constraints and the first and last "pre" and "new" HMM states aligned.
The above-described technique is directly applicable to word-by-word cohort selection, as a cohort for a given new subscriber's word may be determined as the K previously registered subscribers having the highest scores, R^{pre,new}, for the word in question. Moreover, this technique for determining cohorts may be used on a speaker-by-speaker basis by simply averaging similarity scores for individual HMMs over all HMMs called for by all registration utterances as discussed above.

As discussed illustratively above with reference to FIGs. 4a-d, determination of a cohort for a given subscriber is based upon, inter alia, HMMs of previously registered subscribers. However, it should be understood that cohorts for a given subscriber need not be determined at the time of the subscriber's registration.
Cohorts for subscribers may be determined subsequently, after HMMs associated with all subscribers are determined. Also, cohorts may be updated, according to a selection criterion, over the course of time.
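Expressions (3) and (4) can be sketched as follows, reusing the state_log_likelihood helper from the earlier sketch. For brevity, the optimal state mapping k(j) is replaced by an identity mapping; this is a simplification only, as the patent aligns states by dynamic programming with Itakura constraints:

    def state_similarity(pre_weights, pre_means, new_state):
        # R_kj of expression (3): each "pre" mixture mean is scored against
        # the "new" state's mixture, weighted by its "pre" weight c_kl.
        return sum(c_kl * state_log_likelihood(mu_kl, new_state["means"],
                                               new_state["covs"], new_state["weights"])
                   for c_kl, mu_kl in zip(pre_weights, pre_means))

    def model_similarity(pre_states, new_states):
        # R^{pre,new} of expression (4) with k(j) = j (identity alignment).
        scores = [state_similarity(p["weights"], p["means"], n)
                  for p, n in zip(pre_states, new_states)]
        return sum(scores) / len(scores)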
When DSP 42-1 completes its task, then RTH 41 passes the resulting HMMs, training statistics, and cohort subscriber identification information to controller 45. Controller 45 then stores this information in a reference database 50
memory record that is indirectly indexed by the subscriber's account code. (It is noted that if that code is shared with another account, then controller 45 effectively "appends" the new subscriber's database 50 record to the record associated with the subscriber who is sharing the account code with the new subscriber. Similarly, controller 45 notes that fact in the latter record as discussed above.) Cohort subscriber identification information is stored as addresses of the HMMs of each subscriber of the cohort.
Controller 45 also stores in database 50 the cepstral feature vectors of the training utterances, determined by DSP 42-1, as well as the ASCII equivalents of these utterances, for the new subscriber. Following the foregoing, controller 45 stores the ASCII subword unit indices, characterizing the new subscriber's utterance of the subscriber's labels, as well as the ASCII equivalents thereof in the aforementioned database 50 memory record. Controller 45 then notifies SIU 21 via LAN 30 that processing of the subscriber's data has been completed. SIU 21, in turn, sends a similar message to host processor 5, which causes host 5 to disconnect SIU 21 from the subscriber's incoming call. At that point, the attendant 15 notifies the subscriber that registration has been completed.
It should be understood that the above described training of HMMs and cohort selection could occur without maintaining a connection between the new subscriber and the system 100, once the new subscriber has provided training utterances. For example, training and cohort selection could occur off-line.
At this point, a subscriber may "dial up" system 100, say his/her subscriber number, and then invoke a calling function characterized by one of the subscriber's predefined labels, such as "call home." Alternatively, the subscriber may request that a call be placed to a location that is not defined by one of the subscriber's predefined labels. That is, the subscriber may say the telephone number of a location that the subscriber desires to call. For example, the subscriber may say "908-555-6008" identifying, for example, station S2. System 100, in response thereto, decodes the subscriber's utterance of 908-555-6008 and places an outgoing call to that location and then connects the subscriber's incoming call to that outgoing call.
Specifically, the subscriber may dial the system 100 service telephone number, e.g., 1-800-838-5555, to establish a telephone connection between station S1 and system 100 via CO 225 and network 200. The network 200 destination switch, responsive to receipt of the call and called telephone number, associates that number with a particular one of its outgoing trunk groups and presents the call to
system 100 via an idle trunk (port) of that group. Switch 10, responsive to receipt of the incoming call, notifies host 5 of that fact via LAN 30. Host 5, in response to such notification, directs switch 10 via LAN 30 to establish a connection between the incoming call and an idle one of the CIUs (Caller Identification Units) 20-1 through 20-N, e.g., CIU 20-1. CIUs 20-1 through 20-N are identical to SIU 21, except that the CIUs are not programmed to present the registration process to a new subscriber.
Assuming that CIU 20-1 is connected to the subscriber's call, then that CIU transmits over the connection an announcement asking the subscriber "what is your account code?" The subscriber has the option of entering his/her account code (number) by saying it or by keying it in using the station S1 keypad. If the subscriber elects the latter option and keys in his/her account code, then CIU 20-1 collects the "keyed in" digits. Upon receipt of the last such digit, CIU 20-1 then verifies the subscriber's identity by generating and transmitting over the incoming call connection a series of randomly selected digits and then prompting the subscriber to say the series of digits.
If the subscriber says his/her account code, then, similarly, CIU 20-1 collects the subscriber's utterances and uses connected digit processing to segment the utterances into speech signals characterizing respective digits of the account code. CIU 20-1 then converts each such speech segment into autocorrelation coefficients and then identifies the account code, but not the caller, based on those coefficients. The account code is identified with conventional speaker-independent connected digit speech recognition well known in the art. CIU 20-1 then stores the account code in its local memory. (If the subscriber entered the account code via the station S1 keypad, then CIU 20-1 decodes the resulting series of tones (i.e., Dual Tone MultiFrequency, or DTMF, signals) into respective digit values and stores them as the account code. Then, as mentioned above, CIU 20-1 transmits the series of random digits and requests that the subscriber repeat those digits.
Similarly, CIU 20-1 segments and models the caller's response as feature vectors of autocorrelation coefficients.) Following the foregoing, CIU 20-1 sends a message containing the received account code and the feature vectors representing the caller's (subscriber's) spoken account code (or random digits, as the case may be) to controller 45 in order to verify the caller's identity. Controller 45, in response thereto and using the received account code as a memory index, unloads from reference database, or memory, 50 the record containing the Hidden Markov Models (HMMs) of the subscriber's utterances of the respective digits forming the associated account code and the HMMs associated with the cohort of the subscriber identified with the account code. Controller 45 then sends via bus 46 the unloaded HMMs and feature vectors generated by CIU 20-1 to RTH 41 for the purpose of verifying that the subscriber HMMs and feature vectors represent speech signals spoken by the same person. (It is noted that if the feature vectors represent the random digits, then controller 45 sends only the HMMs priorly stored for those digits for both the subscriber and the cohort.) RTH 41, in response to the request, identifies an idle one of its associated DSPs 42-1 through 42-P, e.g., DSP 42-P, and supplies the HMMs and feature vectors of autocorrelation coefficients received from controller 45 to DSP 42-P. DSP 42-P operates in accordance with the flow diagram presented in FIG. 5. As shown at step 705 of FIG. 5, DSP 42-P converts the feature vectors of autocorrelation coefficients to feature vectors of cepstrum and delta-cepstrum coefficients as described above.
Next, DSP 42-P compares the feature vectors of cepstral coefficients (representing, e.g., the spoken random digit verification utterance) to the HMMs of the claimed subscriber and his/her cohort. This comparison is presented at steps 710-745 of FIG. 5. The comparison produces a score indicative of the likelihood that the verification utterance was spoken by the claimed subscriber (for purposes of verification, scores are based on HMMs of speech sounds, not those representing non-speech sounds). The score, S, is determined by DSP 42-P according to the following expression (see step 730):
    S = log p(O/I) - stat[log p(O/C_k(I))]    (5)

The likelihood p is evaluated by DSP 42-P using a frame-synchronous Viterbi likelihood scoring procedure well-known in the art and described, e.g., by Lee and Rabiner, A Frame-Synchronous Network Search Algorithm for Connected Word Recognition, 37 IEEE Trans. Acoust., Speech, and Sig. Pro. 1649-58 (Nov. 1989).
The quantity p(O/I) represents the likelihood that an observed set of feature vectors, O, was produced by a claimed individual, I, as represented by HMMs trained by that individual (see step 710). The quantity p(O/C_k(I)) represents the likelihood that an observed set of feature vectors, O, was produced by the kth member of the cohort associated with individual I, C_k(I) (see steps 715-720). The term "stat[*]" refers to a statistical operator, such as a minimum, maximum, or average likelihood over all subscribers who make up the cohort (there are K subscribers in the cohort).

Illustratively, the statistical operator is the maximum (see step 725).
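A minimal sketch of the scoring of expression (5) with the maximum as the statistical operator, assuming the per-frame log likelihoods have already been produced by the Viterbi scoring pass (the numeric values below are purely illustrative):

    def cohort_normalized_score(log_p_claimant, cohort_log_ps):
        # Expression (5): S = log p(O/I) - stat[log p(O/C_k(I))], stat = max.
        return log_p_claimant - max(cohort_log_ps)

    # Steps 735-750: accept the claimed identity only when S exceeds the threshold.
    S = cohort_normalized_score(-41.7, [-55.3, -49.8, -51.0, -60.2, -47.5])
    accepted = S > 0.0  # illustrative static threshold of zero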
When DSP 42-P determines the value of S to be greater than a threshold, the claimed identity of a subscriber will be accepted (see steps 735-745). As a result of being accepted, the feature vectors of cepstrum and delta-cepstrum coefficients may be used to "update" (or further train) the HMMs of the claimed and verified subscriber (see step 740). Given feature vectors, O_jm(t), t = 1, 2, ..., T_jm, decoded in a state j matching mixture component m best, the HMM mean, μ_jm, and component weight, c_jm, are updated by DSP 42-P as follows:
    μ_jm = ( N_jm · μ_jm + Σ_{t=1}^{T_jm} O_jm(t) ) / ( N_jm + T_jm )    (6)

and

    c_jm = ( N_jm + T_jm ) / Σ_{m=1}^{M} ( N_jm + T_jm )    (7)

where N_jm is the number of training vectors used to calculate the unupdated mean and mixture component weight. Vector count N_jm is then updated by DSP 42-P as follows:
    N_jm = N_jm + T_jm - ( Σ_{j,m} T_jm / Σ_{j,m} N_jm ) · N_jm    (8)

DSP 42-P then supplies the updated HMMs and a flag indicating that the verification is true (i.e., positive) to RTH 41 (see step 745). RTH 41, in turn, supplies that information and the verified subscriber's account code to controller 45.
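A hedged sketch of update rules (6)-(8) for a single state follows. In the patent the sums in (8) run over all states and mixtures; restricting them to one state here is a simplification, and the array layout is an assumption:

    import numpy as np

    def update_state(means, counts, frames_per_component):
        # means: M x D mixture means; counts: length-M vector of N_jm values;
        # frames_per_component[m]: T_m x D verification frames decoded into
        # mixture component m of this state.
        T = np.array([len(f) for f in frames_per_component], dtype=float)
        sums = np.array([f.sum(axis=0) if len(f) else np.zeros(means.shape[1])
                         for f in frames_per_component])
        new_means = (counts[:, None] * means + sums) / (counts + T)[:, None]  # (6)
        new_weights = (counts + T) / np.sum(counts + T)                       # (7)
        new_counts = counts + T - (T.sum() / counts.sum()) * counts           # (8)
        return new_means, new_weights, new_counts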
Should DSP 42-P determine the value of S to be less than or equal to the threshold, the claimed identity of a subscriber will be rejected and a flag indicating a false (i.e., negative) verification is sent to RTH 41 (see steps 735, 750). No updating of a subscriber's HMMs by DSP 42-P will occur under these circumstances.
Controller 45 is supplied with the negative verification information.
Illustratively, a static threshold may be used. Such a threshold may be set equal to zero, or biased above or below zero for a system which is less or more tolerant of imposters, respectively. However, a dynamic threshold may also be used.
Such a threshold may be determined according to conventional thresholding techniques for speaker verification to achieve a desired level of performance. See, e.g., Rosenberg, Evaluation of an Automatic Speaker Verification System Over Telephone Lines, 55 Bell System Technical Journal 723-44 (July-August 1976), which is hereby incorporated by reference as if set forth fully herein.
It should be understood that expression (5) may be used in combination with other, e.g., conventional, scoring techniques. So, for example, a first scoring technique may comprise the first term on the right-hand side of expression (5). If such technique produces a score, S1, which exceeds a threshold, T1, then the full scoring technique of expression (5) may be used to determine a second score, S2. This score, S2, may then be compared to a second threshold, T2. Only if S1 > T1 and S2 > T2 will a claimed identity be verified. Such a combination of scoring techniques may enhance the ability of a verification system to avoid errors caused by imposters.
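Such a two-stage combination can be sketched as follows (the threshold values T1 and T2 are illustrative placeholders):

    def two_stage_verify(log_p_claimant, cohort_log_ps, t1, t2):
        # Stage 1: the unnormalized score S1 (first term of expression (5)).
        s1 = log_p_claimant
        if s1 <= t1:
            return False
        # Stage 2: the full cohort-normalized score S2 of expression (5).
        s2 = log_p_claimant - max(cohort_log_ps)
        return s2 > t2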
The Viterbi scoring performed by DSP 42-P is constrained in conventional fashion by a grammar which allows optional non-speech segments before and after the utterance and between words. For verification phrases, it is preferred that the Viterbi likelihood scores be post-processed by application of a duration penalty to each word likelihood.
This duration penalty reflects by how much a given verification utterance word deviates from the mean for such word as determined during the registration training process. The deviation between the duration of the verification utterance word and the mean for that word is measured in terms of fractions of word duration standard deviation, as determined during registration training. The application of word duration penalties is conventional and is described in the above incorporated reference by Rabiner, Wilpon, and Juang. The likelihood p(O/I) in (5) is the average per-frame (i.e., per feature vector) likelihood of the utterance excluding the non-speech segments.
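A sketch of a word duration penalty of this kind appears below; the linear form and the unit penalty weight are assumptions, since the text specifies only that the deviation is measured in fractions of the registration-time standard deviation:

    def apply_duration_penalty(word_log_likelihood, observed_frames,
                               mean_frames, std_frames, weight=1.0):
        # Penalize a word's Viterbi log likelihood in proportion to how many
        # standard deviations its duration deviates from the training mean.
        deviation = abs(observed_frames - mean_frames) / std_frames
        return word_log_likelihood - weight * deviation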
The subtraction of the statistic of the cohort log likelihood scores from the log likelihood score for the claimed individual (as presented in (5) and performed at step 730 of FIG. 5) provides a "dynamic threshold" for verification. This threshold provides significant tolerance to changing conditions. When the true speaker score is degraded by a change in conditions, e.g., changed channel conditions due to differences in microphones used in registration (training) and verification, the cohort score tends to be affected in the same way. Therefore, the difference of log likelihoods remains substantially stable and the changed conditions do not cause severe limitations on the ability of DSP 42-P to verify a claimed identity.

Controller 45, in response to the verification flag being positive, stores the updated HMMs in the subscriber's database 50 record. Controller 45 then sends the flag and default billing number to CIU 20-1. If the verification flag indicates a negative verification, then controller 45 returns a reply message indicating that fact to CIU 20-1 via LAN 30. CIU 20-1 may then terminate the call in response to that reply message or direct the call to attendant 15.
If controller 45 finds that the account code, or identifier, that it receives from CIU 20-1 is associated with a number of subscriber records, then controller 45 unloads the pertinent HMMs from each of those linked records and passes the various sets of unloaded HMMs and their respective record addresses as well as the received feature vectors of coefficients to RTH 41 for processing. RTH 41, in turn, distributes the received sets of HMMs and a copy of the feature vectors to respective idle ones of its associated DSPs 42-1 through 42-P. Each such DSP, e.g., DSP 42-1, in response thereto, generates and supplies to RTH 41 a score indicative of the level of certainty that such feature vectors compare with the set of HMMs that it receives from RTH 41. RTH 41, in turn, selects the highest score from the various scores that it receives from its associated DSPs. If RTH 41 finds that the value of the highest score is greater than the threshold, then RTH 41 confirms the caller's identity and associates the caller with the subscriber record whose address is associated with the highest score. RTH 41 then causes the HMMs associated with that address to be updated in the manner discussed above and then supplies the updated HMMs, the associated score and record address as well as the aforementioned positive flag to controller 45. Controller 45, in response thereto, stores the updated HMMs and returns the aforementioned reply message to CIU 20-1 and includes therein the subscriber record associated with the aforementioned highest score. If, on the other hand, RTH 41 finds the value of the highest score to be less than or equal to the threshold, then RTH 41 notifies controller 45 of that fact. Similarly, controller 45 returns a message indicative of that fact to CIU 20-1, as discussed above.
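The shared-account identification described above amounts to a best-of-N decision across the linked records. A minimal sketch, in which score_fn stands in for the DSP scoring of expression (5) and, like the dictionary layout, is an assumption:

    def identify_shared_account(feature_vectors, linked_records, score_fn, threshold):
        # linked_records: {record_address: hmm_set} for every subscriber
        # sharing the account code. Keep the best-scoring record; confirm the
        # caller only if that score exceeds the threshold.
        best_address, best_score = None, float("-inf")
        for address, hmm_set in linked_records.items():
            score = score_fn(feature_vectors, hmm_set)
            if score > best_score:
                best_address, best_score = address, score
        if best_score > threshold:
            return best_address, best_score
        return None, best_score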
Assuming that the controller 45 reply message is positive, then CIU 20-1 transmits over the call connection an announcement requesting, e.g., "what number do you wish to call?" The subscriber may respond to that request by (a) using the station S1 keypad to "key in" a particular telephone number that the subscriber desires to call, e.g., 908-555-1234; (b) keying in one of the subscriber's predefined speed dialing codes, e.g., 231#; (c) saying the particular telephone number that the subscriber desires to call; or (d) saying one of the subscriber's predefined calling labels, e.g., "call home" or "call office."

In particular, CIU 20-1, responsive to receipt of DTMF signals characterizing a telephone number, decodes those signals into respective digits in the order that the signals are received via switch 10 and network 200. Upon the decoding of the last of such telephone digits, CIU 20-1 sends a message containing the subscriber's account code and the received telephone number to host computer 5.
Host computer 5 then generates and stores in its internal memory a billing record containing, inter alia, (a) the subscriber's service number and billing telephone number (e.g., home telephone number), (b) the telephone number that is being called and (c) the current date and time. Host 5 then directs switch 10 to place an outbound telephone call via network 200 and outpulse the called telephone number. Host 5 also directs switch 10 to connect the subscriber's incoming call to the outbound call.
CIU 20-1 remains bridged onto the subscriber's incoming call as a means of detecting the subscriber's possible request for a telephone operator. That is, each of the CIUs 20-1 through 20-N, using the well-known functionality of speaker-independent speech recognition, may spot a caller saying "operator". Accordingly, if CIU 20-1 "spots" the subscriber saying the word "operator" during the processing of an associated call, then CIU 20-1 sends a message to that effect to host 5 via LAN 30.
Host 5, in turn, connects the subscriber to an available attendant position 15 via switch 10. However, a CIU does not respond to the word "operator" once an associated call has been completed. At that point, the subscriber may enter particular signals, e.g., the pound (#) sign, as a way of requesting the assistance of an operator.
That is, if switch 10 detects those particular signals after the call has been completed, then switch 10 sends the operator request message to host 5. Alternatively, the subscriber, at any point during the current call, may enter particular signals, e.g., **9, as a way of entering a request to place another call. Thus, if switch 10 detects the entry of those signals, then it passes a message indicative thereof to host 5. Host 5, in turn, terminates the outbound switch 10 connection and then asks the caller to enter a calling destination.
If, on the other hand, the subscriber keys in one of the subscriber's speed dialing codes, e.g., 231#, then CIU 20-1, upon the receipt and decoding of the signals characterizing that code, sends a message to controller 45 requesting the telephone number associated with the speed dialing code entered by the subscriber.
Controller 45, upon receipt of the message, interrogates the subscriber's profile record stored in database 50 to obtain the requested telephone number. Controller 45, in turn, unloads the telephone number from database 50 and sends the number, the associated speed dialing code and the subscriber's account code number to CIU 20-1 via LAN 30. CIU 20-1, in turn, sends a call request message containing that telephone number, as well as a request to establish a telephone connection thereto, to host 5. Host 5, in turn, establishes a billing record and then places a telephone call to the desired telephone number via switch 10 and network 200.
Alternatively, the subscriber may say the desired telephone number, e.g., 908-555-1234. If the subscriber does so, then CIU 20-1, using connected digit segmentation, segments the subscriber's speech signals characterizing the digits of that telephone number and then models such speech segments into the aforementioned feature vectors of coefficients, as mentioned above. Based on those feature vectors, CIU 20-1 is able to interpret (identify) the digits spoken by the subscriber. Such interpretation is commonly referred to as speaker-independent, automatic speech recognition. Accordingly, as a result of such interpretation, CIU 20-1 identifies the digits forming the spoken telephone number. Similarly, CIU 20-1 then packages those digits into a call request message and sends the message to host 5 via LAN 30. Host 5, in response to receipt of that message, establishes an associated billing record and places a telephone call to the received telephone number, as discussed above.
As another alternative, the subscriber may say a calling label, e.g., "call office", previously defined by the subscriber. CIU 20-1, responsive thereto, generates the aforementioned subword unit indices from the subscriber's speech signals characterizing that label. Then, as mentioned above, CIU 20-1 compares the generated series of indices with the subword unit indices of the subscriber's labels previously stored in database 50.
Accordingly, if CIU 20-1 identifies the telephone number associated with the spoken label, then CIU 20-1 forms a call request message containing, inter alia, the identified telephone number and sends the message to host 5. Host 5, in turn, places the requested telephone call in the manner discussed above. Each CIU 20-1 through 20-N is arranged to detect a particular keyword, e.g., the word "cancel", which a subscriber may utter to cancel a telephone number that the subscriber is entering. For example, if the subscriber says the word "cancel" after having entered a number of digits of a telephone number, then the CIU serving the call, e.g., CIU 20-1, in response to detecting that utterance (using speaker-independent speech recognition to spot that word, as discussed above), discards the received digits and retransmits the aforementioned announcement.

It can be appreciated that SVS 40 expends an appreciable amount of processing time processing the subscriber's spoken account code. To speed up that processing so that a reply may be returned to controller 45 as soon as possible, RTH 41 may be arranged so that it divides the digits forming the account code among a number of idle DSPs 42-1 through 42-P. For example, RTH 41 could supply the spoken digits to respective idle DSPs 42-1 through 42-P, as well as the associated HMMs stored in database 50. Accordingly, if nine such DSPs are idle, then each of those DSPs would process one digit of the account code.
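The per-digit division of work can be sketched as below, with threads standing in for idle DSPs and score_digit an assumed function that scores one digit's feature vectors against its HMMs (neither is part of the patent text):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: distribute the account-code digits among idle
# "DSPs" (threads here) so each digit is scored in parallel, then
# combine the per-digit scores.
def score_account_code(digit_features, digit_hmms, score_digit, n_idle_dsps):
    with ThreadPoolExecutor(max_workers=n_idle_dsps) as pool:
        scores = list(pool.map(score_digit, digit_features, digit_hmms))
    return sum(scores)  # assumed combination of per-digit scores
```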
Turning now to FIGs. 6-8, there is shown in flow chart form a program for implementing the operation of system 100. Specifically, the program is entered at block 400 in response to a new call received via switch 10. At block 400, the program proceeds to block 401 where it transmits a brief service-alerting signal, e.g., a tone, and then transmits a service name announcement, e.g., "Voice Direct". At that point, the program begins to monitor the call for receipt of the word "operator" or the word "cancel" and proceeds in the manner discussed above if it happens to receive either word. The program then proceeds to block 402 where it prompts the caller to enter his/her account code (identifier). The caller, in response to the prompt, may either say the digits forming his/her account code or enter the digits via the touch-tone keypad of the caller's telephone station set. At block 403, the program determines if a customer record associated with the entered account number is stored in database 50 (FIG. 1). If the determination turns out to be true, then the program proceeds to block 404. Otherwise, the program proceeds to block 405.
At block 405, the program checks to see if the caller's second attempt to enter a valid account code also failed and forwards the call to an operator via block 406 if that is the case. Otherwise, the program proceeds to block 407 where it again prompts the caller to enter his/her account code.
At block 404, the program checks to see if the entered account code is characterized by speech signals (i.e., the caller spoke the numbers) and proceeds to block 408 if that is the case. Otherwise, the program proceeds to block 409 where it prompts the caller to repeat (say) a series of random numbers. The program collects the caller's response thereto, analyzes the response to produce feature vectors characterizing the response, as discussed above with reference to FIG. 3, and proceeds to block 408. At block 408, the program verifies the caller's identity as discussed above with reference to FIG. 5. If the caller's speech cannot be verified, then the program proceeds to block 407. Otherwise, the program proceeds to block 410 where it prompts the caller to enter a called destination. As mentioned above, a caller may
place a call by (a) saying a telephone number or call label or (b) entering the telephone number or speed dial code via the keypad of the caller's station set, e.g., station S1. The program then waits for an entry and proceeds to block 411 upon receipt thereof. At block 411, the program proceeds to block 418 if it finds that the caller entered a telephone number using the station set keypad (i.e., the number is characterized by respective DTMF tones). If that is not the case, then the program proceeds to block 412 to determine if the caller spoke the telephone number. If the latter determination turns out to be true, then the program proceeds to block 416 where it causes the associated CIU to translate the caller's speech signals into a telephone number (as discussed above) and then proceeds to block 417. If the determination at block 412 turns out to be false, then the program proceeds to block 413 to determine if the caller entered a speed dial code. If the program finds that the caller did not enter a speed dial code, then it proceeds to block 414 to determine if the caller entered a spoken call label. If the program finds that the caller entered either a speed dial code or a call label, then the program proceeds to block 415 where it causes SVS 40 to translate the caller's entry into a telephone number, as discussed above. The program then proceeds to block 417 where it transmits the resulting telephone number to the caller and then proceeds to block 418 where it causes the telephone number to be outpulsed to network 200.
If the program at block 414 finds that the caller did not enter a call label, then the program proceeds to block 419 where it determines if the caller's latest entry represents a second attempt to obtain a valid entry from the caller. If that is the case, then the program forwards the call to an operator. Otherwise, the program proceeds to block 410 to again prompt for the entry of a telephone number.
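The destination-entry branching of blocks 410 through 419 may be sketched as follows (the entry-classification predicates and translation calls are assumed stand-ins for the CIU and SVS functionality):

```python
# Illustrative sketch of the block 410-419 dispatch: DTMF digits pass
# through, spoken digits are recognized, and speed dial codes or call
# labels are translated into a telephone number; None signals an
# invalid entry (block 419).
def dispatch_destination(entry, ciu, svs):
    if entry.is_dtmf_number():                          # block 411 -> 418
        return entry.digits
    if entry.is_spoken_number():                        # block 412 -> 416
        return ciu.recognize_digits(entry.speech)
    if entry.is_speed_dial() or entry.is_call_label():  # blocks 413/414 -> 415
        return svs.translate(entry)
    return None                                         # block 419
```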
At block 418, the program places an outgoing call via system 100 and then directs system 100 to connect the incoming call to the outgoing call connection.
The program then proceeds to block 421 where it continues to monitor the call for receipt of a request for an operator or a request to place another call. In an illustrative embodiment of the invention, the subscriber may enter such a request at any point in the processing of the call, i.e., between blocks 402 and 417, by saying the word "operator" or, after the call has been completed, i.e., blocks 418 and 421, by entering particular signals characterizing, for example, 0#. Similarly, the subscriber may enter **9 to enter a request to place another call.
During the recent past, a large number of telephone subscribers have subscribed to a voice messaging service, such as voice message system 300 shown in FIG. 1. In a nutshell, the functionality provided by system 300 is similar to that
provided by a conventional answering machine. That is, if a system 300 subscriber, e.g., the subscriber associated with station S1, does not answer a telephone call, for whatever reason, then the calling party is invited to leave a voice message with system 300. However, unless the called subscriber places a call to system 300, he/she does not know that the calling party left a voice message with system 300. A
number of voice messaging systems address that situation by causing a lamp on the subscriber's station set to be lit as a way of indicating that the subscriber has one or more voice messages waiting. System 100 takes a different approach.
In particular, if the new subscriber is associated with a voice messaging service, e.g., system 300, then, during the subscription and registration process, attendant 15 inserts in the subscriber's database 60 record and database 50 record (a) a flag indicating that the new subscriber is associated with a voice messaging service, (b) the telephone number of that service and (c) the subscriber's messaging service account code or password. Thereafter, when the new subscriber places a call to system 100 for the purpose of, e.g., placing an outgoing call, then, while an associated CIU, e.g., CIU 20-1, is processing the subscriber's call request, host 5 places via switch 10 and network 200 a call to voice messaging service 300. When that system answers the call, host 5 transmits the subscriber's telephone number, pauses for a predetermined period of time and then transmits the associated account code (password). System 300, in response to that information, transmits the status of voice messages that are stored in system 300 for the subscriber, in which such status could vary from no voice messages to a number of voice messages. In addition, host 5, after transmitting the account code, causes switch 10 to bridge the subscriber's incoming call to the telephone connection established between switch 10 and system 300 via network 200. Thus, the subscriber may be automatically presented with the status of his/her system 300 voice messages.
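The retrieval sequence just described can be sketched under an assumed switch/connection API (none of these calls is defined in the patent):

```python
import time

# Illustrative sketch: host 5 dials the messaging service, sends the
# subscriber's telephone number, pauses, sends the password, and then
# bridges the subscriber onto the established connection.
def fetch_message_status(switch, messaging_number, subscriber_number,
                         password, pause_seconds=1.0):
    connection = switch.place_call(messaging_number)
    connection.send_digits(subscriber_number)
    time.sleep(pause_seconds)      # predetermined pause before the password
    connection.send_digits(password)
    switch.bridge(connection)      # subscriber hears the status announcement
```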
Turning now to FIG. 10, there is shown an illustrative layout of reference database 50. In particular, database 50 includes a pair of records, e.g., 50-1 and 50-2, for each system 100 subscriber. One record of the pair, e.g., record 50-1, contains the Hidden Markov Models of the subscriber's speech signals characterizing the digits zero through nine (and possibly "oh"), addresses of cohort HMMs, call labels and associated billing labels. The record also contains various statistics relating to verifying the identity of a calling subscriber from his/her speech signals. For example, such statistics may be used to update associated voice templates or models and include the number of times system 100 performed such verification, the number of times that such verification failed, means and variances of verification utterance word durations, and various threshold values relating to such verification and recognition of digits and labels spoken by the associated subscriber. The other record of the pair, e.g., record 50-2, contains the ASCII (text) versions of the information contained in record 50-1, as well as associated telephone numbers and speed dialing codes. It is seen from this FIG. that each such record of the pairs includes a field for the associated account code. The account code field also includes sub-fields (not shown) which are populated if the account code is shared with one or more subscribers. That is, the contents of the sub-fields link the associated record to the other records, as discussed above.
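One way to picture the record pair of FIG. 10 is the following sketch; the field names are assumptions drawn from the description above, not an actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the per-subscriber record pair in database 50.
@dataclass
class SpeechRecord:                 # cf. record 50-1
    account_code: str
    digit_hmms: dict                # HMMs for digits 0-9 (and possibly "oh")
    cohort_hmm_addresses: list
    call_labels: list
    billing_labels: list
    verification_count: int = 0
    verification_failures: int = 0
    word_duration_stats: dict = field(default_factory=dict)  # means/variances
    thresholds: dict = field(default_factory=dict)

@dataclass
class TextRecord:                   # cf. record 50-2
    account_code: str
    call_labels_ascii: list
    telephone_numbers: list
    speed_dial_codes: list
```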
The layout of database 60 is somewhat different, as shown in FIG. 11. As mentioned above, database 60 is used for the storage of customer subscription information, in which such information is stored across a number of database 60 tables. One such table, table 60-1, is formed from a plurality of records (CUS.PROF_1 through CUS.PROF_N), each containing information specific to a respective subscriber. Such specific information includes, for example, the subscriber's name and address, account number, credit limit, default billing account number, billing address and a number of database 60 addresses (pointers) which point to entries in other tables, such as table 60-4. (The account number field also includes a number of sub-fields to link the associated record with other records if the associated account code happens to be shared with other subscribers.) Table 60-4 is also formed from a plurality of records (CUS.ID_1 through CUS.ID_N), each containing information personal to a respective subscriber which is used by attendant 15 to verify the identity of a subscriber. Such identity information may include, for example, a subscriber's social security number, place of birth, mother's maiden name, etc.
Of the tables shown in the FIG., tables 60-1 and 60-3 are indexed using a subscriber's account number. Table 60-3, more particularly, is formed from a plurality of entries (CUS.LBL_1 through CUS.LBL_N), in which each such entry contains the ASCII versions of call labels, associated telephone numbers, associated label billing accounts and respective billing account numbers specified by a respective subscriber. Each such billing account number, in turn, points to an entry in table 60-2, in which the entry contains conventional billing information for the associated billing account number. Such billing information includes, for example, the name and address of the entity (or person) that is to be billed for an associated call, billing cycle (e.g., monthly or quarterly), etc.

Database 60 also includes table 60-5 which contains the eleven sets of digits that system 100 uses during the training phase of a subscription, as discussed above.
Turning now to FIGs. 12 and 13, there is shown in block diagram form an alternative embodiment, one which centralizes the subscription and speaker verification functionalities performed by system 100 of FIG. 1 and which uses a high-speed frame-relay packet network to interface such functionalities with one another.
Advantageously, a smaller, "stripped down" version of voice directed system 100 may be associated with each network 200 Operator Service Position System, or OSPS, one of which is shown in the FIG., namely OSPS 205. In this way, a subscriber may readily access a Voice Directed Communications System Platform (VDCSP) 100-1 via an OSPS by dialing a telephone operator access code, for example, the digits "00". If a subscriber does so, for example, the subscriber associated with station set S1, then CO 225, in response to receipt of those digits, extends the call to network 200, which, in turn, extends the call to one of its OSPSs, e.g., OSPS 205.
OSPS 205, in response to receipt of the call, presents the caller with the option of selecting a telephone operator or the services provided by VDCSP 100-1. If the subscriber selects the latter option, then OSPS 205 extends the telephone connection carrying the call to switch 10 of VDCSP 100-1. In doing so, OSPS 205 supplies to host 5 via signaling circuit 13 the identity of the trunk that is being used to connect the call to switch 10. At that point, VDCSP 100-1, under the direction of host 5, processes the call in the manner discussed above in connection with FIG. 1.
It is seen from the FIG. that smaller VDCSP 100-1 still includes host processor 5, switch 10, CIUs 20-1 through 20-N and LAN 30, which operate in the manner discussed above. It also includes router 65-1, which may be, for example, a conventional LAN/WAN type router available from Cisco Systems Inc., Menlo Park, CA. Router 65-1, more particularly, provides an interface between its associated modified system 100 and high-speed packet network 700, which may be, for example, AT&T's InterSpan frame relay network. That is, router 65-1 removes from LAN 30-1 a message addressed to either Central Speaker Verification System, or CSVS, 500 or subscription system 600, formats the message into a packet so that it conforms with the well-known frame relay protocol and supplies the packet to an associated network 700 node for delivery to the intended destination. (It is noted that routers 65-2 and 65-3 perform a similar function.)
Similarly, if the intended destination happens to be CSVS 500, then router 65-2 (which may be similar to router 65-1) receives the packet from an associated packet network 700 node, changes the format of the received packet so that it conforms with the well-known TCP/IP message protocol, and supplies the message to its associated LAN 30-2 for delivery to controller 70. (In FIGs. 12 and 13, LANs 30-1, 30-2 and 30-3 may also be the well-known Ethernet network.)
Controller 70, in turn, supplies the message to one of its associated Voice Verification Units, or VVUs, 400-1 through 400-M based on a predetermined selection scheme, for example, the subscriber identifier (account code) contained in the message. That is, VVUs 400-1 through 400-M are associated with respective ranges of subscriber identifiers, or account codes, e.g., 500,000 identifiers. Thus, subscriber records associated with a first range of identifiers are stored in reference database 50 of VVU 400-1. Subscriber records associated with a second range of identifiers are stored in reference database 50 of VVU 400-2, and so on.
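Such range-based selection can be sketched as follows, assuming numeric account codes and equal ranges of 500,000 identifiers per unit (the range size comes from the example above; the arithmetic is an assumption):

```python
# Illustrative sketch: route a request to the VVU whose identifier
# range contains the account code.
RANGE_SIZE = 500_000

def select_vvu(account_code, vvus):
    index = int(account_code) // RANGE_SIZE  # first range -> vvus[0], etc.
    return vvus[index]
```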
Assuming that the identifier contained in the message is within the first range of identifiers, then controller 70 supplies the message to controller 45 of VVU 400-1. (It is noted that each VVU 400-1 through 400-M operates in the manner discussed above in connection with FIG. 1.) Assuming that the message is a speaker verification request, then, as discussed above, controller 45 of VVU 400-1 (hereinafter controller 45) unloads from its associated reference database 50 the HMMs and cohort information associated with the identifier contained in the message. Controller 45 then supplies the HMMs and cohort information unloaded from database 50, and the speech models contained in the message, to its associated RTH 41 for processing. Thereafter, controller 45 forms the results of such processing into a message addressed to the originator of the message received via network 700 and transmits the newly formed message over LAN 30-2. In addition, controller 45 unloads the voice templates of the calling and billing labels associated with the received identifier from the associated reference database 50 and inserts those templates in the formed message that it transmits over LAN 30-2. Controller 70, in turn, removes the message from LAN 30-2 and presents it to router 65-2. Similarly, router 65-2 reformats the message from the TCP/IP protocol to the frame relay protocol and supplies the reformatted packet message(s) to its associated network 700 node for delivery to VDCSP 100-1. Similarly, router 65-1 accepts the packet message from its associated network 700 node, reformats the message(s) so that it conforms with the TCP/IP protocol and supplies the results to LAN 30-1 for delivery to a particular one of the VDCSP 100-1 elements, e.g., CIU 20-1. CIU 20-1, in turn, unloads the verification results from the message and proceeds in the manner discussed above. CIU 20-1 also unloads the aforementioned templates and stores them in its local memory. Armed with those templates, CIU 20-1 may then itself process the associated calling subscriber's utterance of a calling label and/or billing label, thereby eliminating the need to perform that functionality in conjunction with CSVS 500 via network 700.
It is seen from the FIG. that the subscription section of system 100 (FIG. 1) discussed above now forms subscription system 600. Subscription system 600, like system 100, includes attendant position 15 (representing a plurality of such positions), host 5, switch 10, SIU 21, controllers 25 and 55 and database 60, which cooperate with one another in the manner discussed above in connection with FIG. 1.
Subscription system 600 still interacts with a controller 45 in the manner discussed above. However, such interaction is now via its associated router 65-3 and network 700.
The foregoing is merely illustrative of the principles of the invention.
Those skilled in the art will be able to devise numerous arrangements, which, although not explicitly shown or described herein, nevertheless embody those principles that are within the spirit and scope of the invention.

Claims (17)

1. A method of verifying a claim of identity made by an individual based on a signal representing an utterance provided by the individual, the method comprising the steps of:
a. analyzing the signal representing the utterance to form a plurality of feature signals characterizing the utterance;
b. forming a first likelihood signal, based on the plurality of feature signals and one or more hidden Markov models trained using utterances spoken by a person whose identity is claimed, the first likelihood signal reflecting a probability that the individual's utterance was spoken by the person whose identity is claimed;
c. forming one or more other likelihood signals based on the plurality of feature signals and one or more hidden Markov models trained using utterances spoken by a set of one or more other speakers who are acoustically similar to the person whose identity is claimed, said one or more other speakers who are acoustically similar to the person whose identity is claimed having been selected from a universe of speakers based on an acoustic similarity criterion, the criterion based on the one or more hidden Markov models of the person whose identity is claimed, the universe of speakers including one or more speakers who are not acoustically similar to the person whose identity is claimed based on the acoustic similarity criterion, the other likelihood signals reflecting probabilities that the utterance was spoken by said one or more other speakers who are acoustically similar to the person whose identity is claimed; and d. forming a verification signal, based on the first likelihood signal and one or more of the other likelihood signals, and not based on hidden Markov models trained using utterances spoken by at least one of said one or more speakers who are not acoustically similar to the person whose identity is claimed, said verification signal indicating whether the individual is the person whose identity is claimed.
2. The method of claim 1 wherein the individual is prompted to provide the utterance.
3. The method of claim 2 wherein the utterance comprises a series of digits.
4. The method of claim 3 wherein the digits in the series are chosen at random.
5. The method of claim 1 wherein the utterance comprises a predetermined set of one or more words.
6. The method of claim 1 wherein the utterance provided by the individual is provided with use of a communication channel having a first response characteristic and wherein utterances used to train the hidden Markov models of the person whose identity is claimed are provided with use of a communication channel having a second response characteristic.
7. The method of claim 6 wherein the communication channel having a first response characteristic comprises a microphone having a first microphone response characteristic, and wherein the communication channel having a second response characteristic comprises a microphone having a second microphone response characteristic.
8. The method of claim 1 wherein the step of analyzing the signal comprises the step of segmenting the feature signals into groups of feature signals which substantially represent words of the utterance.
9. The method of claim 8 wherein the step of segmenting is performed using a speech recognition system.
10. The method of claim 1 wherein the first and other likelihood signals are formed based on Viterbi scoring.
11. The method of claim 1 wherein the acoustic similarity criterion comprises determining a measure of acoustic similarity between the one or more hidden Markov models of the person whose identity is claimed and one or more hidden Markov models of another person.
12. The method of claim 1 wherein the acoustic similarity criterion comprises determining a measure of acoustic similarity between the person whose identity is claimed and another person based on a. a comparison of signals representing utterances spoken by the person whose identity is claimed and hidden Markov models of the other person; and b. a comparison of signals representing utterances spoken by the other person and hidden Markov models of the person whose identity is claimed.
13. The method of claim 1 wherein the step of forming a verification signal comprises the step of forming a signal reflecting a statistic of one or more other likelihood signals.
14. The method of claim 13 wherein the verification signal indicates that the individual is the person whose identity is claimed responsive to the first likelihood signal exceeding the statistic of the other likelihood signals.
15. The method of claim 13 wherein the first and other likelihood signals reflect log likelihood probabilities.
16. The method of claim 15 wherein the step of forming a verification signal further comprises the step of forming a signal reflecting a difference between the first likelihood signal and the signal reflecting a statistic of one or more other likelihood signals.
17. The method of claim 13 wherein the step of forming a verification signal further comprises the step of forming a signal reflecting a ratio of the first likelihood signal to the signal reflecting a statistic of one or more other likelihood signals.
CA002105034A 1992-10-09 1993-08-27 Speaker verification with cohort normalized scoring Expired - Fee Related CA2105034C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US95930292A 1992-10-09 1992-10-09
US959,302 1992-10-09

Publications (2)

Publication Number Publication Date
CA2105034A1 CA2105034A1 (en) 1994-04-10
CA2105034C true CA2105034C (en) 1997-12-30

Family

ID=25501891

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002105034A Expired - Fee Related CA2105034C (en) 1992-10-09 1993-08-27 Speaker verification with cohort normalized scoring

Country Status (6)

Country Link
US (1) US5675704A (en)
EP (1) EP0592150B1 (en)
JP (1) JPH06242793A (en)
CA (1) CA2105034C (en)
DE (1) DE69324988T2 (en)
ES (1) ES2133365T3 (en)

Families Citing this family (137)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995005656A1 (en) * 1993-08-12 1995-02-23 The University Of Queensland A speaker verification system
AUPM983094A0 (en) * 1994-12-02 1995-01-05 Australian National University, The Method for forming a cohort for use in identification of an individual
DE19630109A1 (en) * 1996-07-25 1998-01-29 Siemens Ag Method for speaker verification using at least one speech signal spoken by a speaker, by a computer
US6205424B1 (en) * 1996-07-31 2001-03-20 Compaq Computer Corporation Two-staged cohort selection for speaker verification system
US6061654A (en) * 1996-12-16 2000-05-09 At&T Corp. System and method of recognizing letters and numbers by either speech or touch tone recognition utilizing constrained confusion matrices
JP2991148B2 (en) * 1997-02-07 1999-12-20 日本電気株式会社 Method and system for creating suppression standard pattern or cohort in speaker recognition and speaker verification device including the system
US6078886A (en) * 1997-04-14 2000-06-20 At&T Corporation System and method for providing remote automatic speech recognition services via a packet network
US8209184B1 (en) 1997-04-14 2012-06-26 At&T Intellectual Property Ii, L.P. System and method of providing generated speech via a network
US6182037B1 (en) * 1997-05-06 2001-01-30 International Business Machines Corporation Speaker recognition over large population with fast and detailed matches
US6154579A (en) * 1997-08-11 2000-11-28 At&T Corp. Confusion matrix based method and system for correcting misrecognized words appearing in documents generated by an optical character recognition technique
US6219453B1 (en) 1997-08-11 2001-04-17 At&T Corp. Method and apparatus for performing an automatic correction of misrecognized words produced by an optical character recognition technique by using a Hidden Markov Model based algorithm
US6404876B1 (en) * 1997-09-25 2002-06-11 Gte Intelligent Network Services Incorporated System and method for voice activated dialing and routing under open access network control
US6141661A (en) * 1997-10-17 2000-10-31 At&T Corp Method and apparatus for performing a grammar-pruning operation
US6122612A (en) * 1997-11-20 2000-09-19 At&T Corp Check-sum based method and apparatus for performing speech recognition
US6208965B1 (en) 1997-11-20 2001-03-27 At&T Corp. Method and apparatus for performing a name acquisition based on speech recognition
US6205428B1 (en) 1997-11-20 2001-03-20 At&T Corp. Confusion set-base method and apparatus for pruning a predetermined arrangement of indexed identifiers
US6195634B1 (en) * 1997-12-24 2001-02-27 Nortel Networks Corporation Selection of decoys for non-vocabulary utterances rejection
US6205261B1 (en) 1998-02-05 2001-03-20 At&T Corp. Confusion set based method and system for correcting misrecognized words appearing in documents generated by an optical character recognition technique
CA2318262A1 (en) 1998-03-03 1999-09-10 Lernout & Hauspie Speech Products N.V. Multi-resolution system and method for speaker verification
US6202047B1 (en) * 1998-03-30 2001-03-13 At&T Corp. Method and apparatus for speech recognition using second order statistics and linear estimation of cepstral coefficients
US6157707A (en) * 1998-04-03 2000-12-05 Lucent Technologies Inc. Automated and selective intervention in transaction-based networks
US6240303B1 (en) 1998-04-23 2001-05-29 Motorola Inc. Voice recognition button for mobile telephones
US6400805B1 (en) 1998-06-15 2002-06-04 At&T Corp. Statistical database correction of alphanumeric identifiers for speech recognition and touch-tone recognition
US7937260B1 (en) 1998-06-15 2011-05-03 At&T Intellectual Property Ii, L.P. Concise dynamic grammars using N-best selection
AU752317B2 (en) * 1998-06-17 2002-09-12 Motorola Australia Pty Ltd Cohort model selection apparatus and method
US6614885B2 (en) * 1998-08-14 2003-09-02 Intervoice Limited Partnership System and method for operating a highly distributed interactive voice response system
US6269335B1 (en) 1998-08-14 2001-07-31 International Business Machines Corporation Apparatus and methods for identifying homophones among words in a speech recognition system
US6185530B1 (en) 1998-08-14 2001-02-06 International Business Machines Corporation Apparatus and methods for identifying potential acoustic confusibility among words in a speech recognition system
US6192337B1 (en) * 1998-08-14 2001-02-20 International Business Machines Corporation Apparatus and methods for rejecting confusible words during training associated with a speech recognition system
US6272460B1 (en) 1998-09-10 2001-08-07 Sony Corporation Method for implementing a speech verification system for use in a noisy environment
TW418383B (en) * 1998-09-23 2001-01-11 Ind Tech Res Inst Telephone voice recognition system and method and the channel effect compensation device using the same
US6743022B1 (en) * 1998-12-03 2004-06-01 Oded Sarel System and method for automated self measurement of alertness equilibrium and coordination and for ventification of the identify of the person performing tasks
US7149690B2 (en) 1999-09-09 2006-12-12 Lucent Technologies Inc. Method and apparatus for interactive language instruction
US6556969B1 (en) * 1999-09-30 2003-04-29 Conexant Systems, Inc. Low complexity speaker verification using simplified hidden markov models with universal cohort models and automatic score thresholding
US6473735B1 (en) * 1999-10-21 2002-10-29 Sony Corporation System and method for speech verification using a confidence measure
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US6754628B1 (en) * 2000-06-13 2004-06-22 International Business Machines Corporation Speaker recognition using cohort-specific feature transforms
US6505163B1 (en) * 2000-08-09 2003-01-07 Bellsouth Intellectual Property Corporation Network and method for providing an automatic recall telecommunications service with automatic speech recognition capability
US6778640B1 (en) 2000-08-09 2004-08-17 Bellsouth Intellectual Property Corporation Network and method for providing a user interface for a simultaneous ring telecommunications service with automatic speech recognition capability
US6907111B1 (en) 2000-08-09 2005-06-14 Bellsouth Intellectual Property Corporation Network and method for providing a name and number delivery telecommunications services with automatic speech recognition capability
US6826529B1 (en) 2000-08-09 2004-11-30 Bellsouth Intellectual Property Corporation Network and method for providing a call screening telecommunications service with automatic speech recognition capability
US7400712B2 (en) * 2001-01-18 2008-07-15 Lucent Technologies Inc. Network provided information using text-to-speech and speech recognition and text or speech activated network control sequences for complimentary feature access
US6934675B2 (en) * 2001-06-14 2005-08-23 Stephen C. Glinski Methods and systems for enabling speech-based internet searches
KR100406307B1 (en) 2001-08-09 2003-11-19 삼성전자주식회사 Voice recognition method and system based on voice registration method and system
US7054430B2 (en) 2001-08-23 2006-05-30 Paymentone Corporation Method and apparatus to validate a subscriber line
US20030149881A1 (en) * 2002-01-31 2003-08-07 Digital Security Inc. Apparatus and method for securing information transmitted on computer networks
US20030171931A1 (en) * 2002-03-11 2003-09-11 Chang Eric I-Chao System for creating user-dependent recognition models and for making those models accessible by a user
US20030225719A1 (en) * 2002-05-31 2003-12-04 Lucent Technologies, Inc. Methods and apparatus for fast and robust model training for object classification
US7870240B1 (en) 2002-06-28 2011-01-11 Microsoft Corporation Metadata schema for interpersonal communications management systems
US7219059B2 (en) * 2002-07-03 2007-05-15 Lucent Technologies Inc. Automatic pronunciation scoring for language learning
FR2842643B1 (en) * 2002-07-22 2004-09-03 France Telecom STANDARDIZATION OF VERIFICATION SCORE IN SPEAKER SPEECH RECOGNITION DEVICE
US8509736B2 (en) 2002-08-08 2013-08-13 Global Tel*Link Corp. Telecommunication call management and monitoring system with voiceprint verification
US7333798B2 (en) 2002-08-08 2008-02-19 Value Added Communications, Inc. Telecommunication call management and monitoring system
KR100503066B1 (en) * 2002-09-14 2005-07-21 삼성전자주식회사 Apparatus for storing and reproducing music file and method thereof
US7676366B2 (en) * 2003-01-13 2010-03-09 Art Advanced Recognition Technologies Inc. Adaptation of symbols
KR101011713B1 (en) * 2003-07-01 2011-01-28 프랑스 텔레콤 Method and system for analysis of vocal signals for a compressed representation of speakers
US7450703B1 (en) * 2004-03-23 2008-11-11 Shoretel, Inc. Acceptance of inputs from various interfaces to a telephony system
US7783021B2 (en) 2005-01-28 2010-08-24 Value-Added Communications, Inc. Digital telecommunications call management and monitoring system
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US7788101B2 (en) * 2005-10-31 2010-08-31 Hitachi, Ltd. Adaptation method for inter-person biometrics variability
US8234494B1 (en) * 2005-12-21 2012-07-31 At&T Intellectual Property Ii, L.P. Speaker-verification digital signatures
US7877255B2 (en) * 2006-03-31 2011-01-25 Voice Signal Technologies, Inc. Speech recognition using channel verification
EP2013869B1 (en) * 2006-05-01 2017-12-13 Nippon Telegraph And Telephone Corporation Method and apparatus for speech dereverberation based on probabilistic models of source and room acoustics
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8560316B2 (en) * 2006-12-19 2013-10-15 Robert Vogt Confidence levels for speaker recognition
US20080201158A1 (en) 2007-02-15 2008-08-21 Johnson Mark D System and method for visitation management in a controlled-access environment
US8542802B2 (en) 2007-02-15 2013-09-24 Global Tel*Link Corporation System and method for three-way call detection
JP5024154B2 (en) * 2008-03-27 2012-09-12 富士通株式会社 Association apparatus, association method, and computer program
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US9020816B2 (en) * 2008-08-14 2015-04-28 21Ct, Inc. Hidden markov model for speech processing with training method
US9225838B2 (en) 2009-02-12 2015-12-29 Value-Added Communications, Inc. System and method for detecting three-way call circumvention attempts
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9118669B2 (en) 2010-09-30 2015-08-25 Alcatel Lucent Method and apparatus for voice signature authentication
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US8965763B1 (en) * 2012-02-02 2015-02-24 Google Inc. Discriminative language modeling for automatic speech recognition with a weak acoustic model and distributed training
US8543398B1 (en) 2012-02-29 2013-09-24 Google Inc. Training an automatic speech recognition system using compressed word frequencies
US9390445B2 (en) 2012-03-05 2016-07-12 Visa International Service Association Authentication using biometric technology through a consumer device
US8374865B1 (en) 2012-04-26 2013-02-12 Google Inc. Sampling training data for an automatic speech recognition system based on a benchmark classification distribution
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US8805684B1 (en) 2012-05-31 2014-08-12 Google Inc. Distributed speaker adaptation
US8571859B1 (en) 2012-05-31 2013-10-29 Google Inc. Multi-stage speaker adaptation
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US8880398B1 (en) 2012-07-13 2014-11-04 Google Inc. Localized speech recognition with offload
US9123333B2 (en) 2012-09-12 2015-09-01 Google Inc. Minimum bayesian risk methods for automatic speech recognition
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
US20140095161A1 (en) * 2012-09-28 2014-04-03 At&T Intellectual Property I, L.P. System and method for channel equalization using characteristics of an unknown signal
US8694315B1 (en) 2013-02-05 2014-04-08 Visa International Service Association System and method for authentication using speaker verification techniques and fraud model
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
EP3008641A1 (en) 2013-06-09 2016-04-20 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US8812320B1 (en) 2014-04-01 2014-08-19 Google Inc. Segment-based speaker verification using dynamically generated phrases
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) * 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
JP6303971B2 (en) * 2014-10-17 2018-04-04 富士通株式会社 Speaker change detection device, speaker change detection method, and computer program for speaker change detection
US9641680B1 (en) * 2015-04-21 2017-05-02 Eric Wold Cross-linking call metadata
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10572961B2 (en) 2016-03-15 2020-02-25 Global Tel*Link Corporation Detection and prevention of inmate to inmate message relay
US9609121B1 (en) 2016-04-07 2017-03-28 Global Tel*Link Corporation System and method for third party monitoring of voice and video calls
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
GB2555661A (en) * 2016-11-07 2018-05-09 Cirrus Logic Int Semiconductor Ltd Methods and apparatus for biometric authentication in an electronic device
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10027797B1 (en) 2017-05-10 2018-07-17 Global Tel*Link Corporation Alarm control for inmate call monitoring
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10225396B2 (en) 2017-05-18 2019-03-05 Global Tel*Link Corporation Third party monitoring of a activity within a monitoring platform
US10860786B2 (en) 2017-06-01 2020-12-08 Global Tel*Link Corporation System and method for analyzing and investigating communication data from a controlled environment
US9930088B1 (en) 2017-06-22 2018-03-27 Global Tel*Link Corporation Utilizing VoIP codec negotiation during a controlled environment call
US10896673B1 (en) 2017-09-21 2021-01-19 Wells Fargo Bank, N.A. Authentication of impaired voices
CN111063359B (en) * 2019-12-24 2022-03-18 太平金融科技服务(上海)有限公司 Telephone return visit validity judging method, device, computer equipment and medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4027800A (en) * 1975-12-03 1977-06-07 The Alliance Machine Company Gantry crane with plural hoist means
JPS5876893A (en) * 1981-10-30 1983-05-10 日本電気株式会社 Voice recognition equipment
US4720863A (en) * 1982-11-03 1988-01-19 Itt Defense Communications Method and apparatus for text-independent speaker recognition
JPS59178587A (en) * 1983-03-30 1984-10-09 Nec Corp Speaker confirming system
US4910782A (en) * 1986-05-23 1990-03-20 Nec Corporation Speaker verification system
US4959855A (en) * 1986-10-08 1990-09-25 At&T Bell Laboratories Directory assistance call processing and calling customer remote signal monitoring arrangements
US4837830A (en) * 1987-01-16 1989-06-06 Itt Defense Communications, A Division Of Itt Corporation Multiple parameter speaker recognition system and methods
DE3819178A1 (en) * 1987-06-04 1988-12-22 Ricoh Kk Speech recognition method and device
US4979206A (en) * 1987-07-10 1990-12-18 At&T Bell Laboratories Directory assistance systems
GB2240203A (en) * 1990-01-18 1991-07-24 Apple Computer Automated speech recognition system
US5127043A (en) * 1990-05-15 1992-06-30 Vcs Industries, Inc. Simultaneous speaker-independent voice recognition and verification over a telephone network
GB9021489D0 (en) * 1990-10-03 1990-11-14 Ensigma Ltd Methods and apparatus for verifying the originator of a sequence of operations
JPH05257492A (en) * 1992-03-13 1993-10-08 Toshiba Corp Voice recognizing system

Also Published As

Publication number Publication date
EP0592150A1 (en) 1994-04-13
DE69324988D1 (en) 1999-06-24
ES2133365T3 (en) 1999-09-16
US5675704A (en) 1997-10-07
EP0592150B1 (en) 1999-05-19
CA2105034A1 (en) 1994-04-10
DE69324988T2 (en) 1999-09-30
JPH06242793A (en) 1994-09-02

Similar Documents

Publication Publication Date Title
CA2105034C (en) Speaker verification with cohort normalized scoring
JP2957862B2 (en) Communication system and communication method
US5325421A (en) Voice directed communications system platform
US5353336A (en) Voice directed communications system archetecture
US5594784A (en) Apparatus and method for transparent telephony utilizing speech-based signaling for initiating and handling calls
US6973426B1 (en) Method and apparatus for performing speaker verification based on speaker independent recognition of commands
EP0804850B1 (en) Automatic vocabulary generation for telecommunications network-based voice-dialing
EP0890249B1 (en) Apparatus and method for reducing speech recognition vocabulary perplexity and dynamically selecting acoustic models
US8515026B2 (en) Voice response apparatus and method of providing automated voice responses with silent prompting
JP3479304B2 (en) Voice command control and verification system
US6438520B1 (en) Apparatus, method and system for cross-speaker speech recognition for telecommunication applications
US7660716B1 (en) System and method for automatic verification of the understandability of speech
US5930336A (en) Voice dialing server for branch exchange telephone systems
JPH09186770A (en) Method for automatic voice recognition in phone call
Li et al. Automatic verbal information verification for user authentication
JP2001503156A (en) Speaker identification method
US20030110034A1 (en) Method for the voice-operated identification of the user of a telecommunication line in a telecommunications network during an interactive communication using a voice-operated conversational system
US20030081738A1 (en) Method and apparatus for improving access to numerical information in voice messages
EP1385148B1 (en) Method for improving the recognition rate of a speech recognition system, and voice server using this method
Vysotsky VoiceDialing-the first speech recognition based telephone service delivered to customer's home
JP3088625B2 (en) Telephone answering system
JP2001249688A (en) Device for automatically receiving telephone
JPH03157696A (en) Voice responding and recognizing system
JPS5860863A (en) Transmission system for tone for indicating premission to voice
MXPA97005352A (en) Automatic generation of vocabulary for dialing via voice based on telecommunication network

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed