EP0423800A2 - Speech recognition system - Google Patents

Speech recognition system

Info

Publication number
EP0423800A2
Authority
EP
European Patent Office
Prior art keywords
speech
symbol train
word
speech recognition
phoneme
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP90120020A
Other languages
German (de)
French (fr)
Other versions
EP0423800B1 (en)
EP0423800A3 (en)
Inventor
Shuji Morii
Shoji Hiraoka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Publication of EP0423800A2
Publication of EP0423800A3
Application granted
Publication of EP0423800B1
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/0018 Speech coding using phonetic or linguistical decoding of the source; Reconstruction using text-to-speech synthesis

Abstract

A speech recognition system for recognizing an inputted speech so as to operate a given apparatus in accordance with the recognized speech. The speech recognition system includes a phoneme recognizing section responsive to the input of a speech from an external device for extracting the phonemes constituting the inputted speech and outputting them as a symbol train. The symbol train from the phoneme recognizing section is supplied to a coder which codes the symbol train and outputs the coded symbol train through a transmission line to a decoder which decodes the coded symbol train to restore it to the original symbol train. The decoded symbol train is inputted to a word and sentence recognizing section which in turn recognizes a word or a sentence on the basis of the decoded symbol train using a word dictionary.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates generally to speech recognition systems, and more particularly to such a speech recognition system for operation of an apparatus through speech recognition.
  • A known system for operating an apparatus placed at a remote location through speech is a banking service system as disclosed in "Electronic Technique", Vol. 25, No. 1, pp. 43 to 46, for example. As illustrated in Fig. 1, this system is arranged such that a speech inputted through a telephone set 51 or the like is transmitted through a public line 52 or the like to a speech recognition apparatus 53 at the central processing equipment side, where the inputted speech is recognized and the recognition result is supplied to a task control apparatus. Another approach, as illustrated in Fig. 2, recognizes an inputted speech with a speech recognition apparatus 62 incorporated into a user-side terminal unit 61 and codes the recognition result with a coder 63 built into the same terminal unit 61, the coded signal being supplied through a transmission line 64 to a decoder 65 and then to a task control apparatus 66 placed at the central processing equipment side.
  • Such speech recognition systems, however, have problems. The former system is affected by the transmission characteristics of the telephone line 52, such as the limitation of the frequency range of the user's speech, and by line noise introduced during transmission, so that the recognition performance of the speech recognition apparatus 53 is generally reduced. The latter system avoids a reduction of the speech recognition rate due to transmission, because the speech itself is not transmitted over the telephone line 52 or the like, but it is extremely difficult to change the vocabulary to be recognized or the operating procedure from the central processing equipment side, resulting in a lack of flexibility, and the cost of the terminal-side apparatus increases because the speech recognition apparatus 62 is disposed at the user's terminal unit 61.
  • SUMMARY OF THE INVENTION
  • It is therefore an object of the present invention to provide a speech recognition system which is capable of improving the speech recognition rate by preventing the effects of line noise and the like, and which further allows the vocabulary to be recognized and the operating procedure to be set freely at the central processing equipment side, thereby providing flexibility.
  • In accordance with the present invention, there is provided a speech recognition system comprising: means responsive to an input of a speech from an external device for recognizing phonemes or syllables constituting the inputted speech and outputting them as a symbol train; means coupled to said recognizing means for coding the symbol train and outputting the coded symbol train; means for transmitting the coded symbol train; means coupled through said transmitting means to said coding means for decoding the coded symbol train to restore it to the original symbol train; and means responsive to the decoded symbol train from said decoding means for recognizing a word or a sentence on the basis of the decoded symbol train.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will be described in further detail with reference to the accompanying drawings, in which:
    • Figs. 1 and 2 are block diagrams showing conventional speech recognition systems;
    • Fig. 3 is a block diagram showing a speech recognition system according to a first embodiment of the present invention;
    • Fig. 4 is an illustration for describing one example of a word dictionary to be used in the Fig. 3 speech recognition system;
    • Fig. 5 is a graphic illustration for describing an inputted speech signal and a phoneme recognition; and
    • Fig. 6 is a block diagram showing a speech recognition system according to a second embodiment of this invention.
    DETAILED DESCRIPTION OF THE INVENTION
  • Referring now to Fig. 3, there is illustrated a speech recognition system according to a first embodiment of the present invention. Although speech recognition is generally performed using words, syllables, phonemes and the like as the basic units of recognition, in this invention the syllables, phonemes or the like, which are units allowing the expression of a sentence or a word, are used as the basic units. The embodiment will be described for the case of using phonemes, which are the minimum and indispensable phonological units for describing a given speech.
  • In Fig. 3, the speech recognition system illustrated at numeral 1 comprises a phoneme recognizing section 3 which recognizes an inputted speech and converts it into a phoneme-symbol train, each phoneme being a basic unit of the inputted speech. The phoneme-symbol train is supplied to a coder 4 to be coded. The coded phoneme-symbol train is supplied through a transmission line 5 to a decoder 6, which in turn decodes the coded phoneme-symbol train. The decoded phoneme-symbol train is supplied to a word and sentence recognizing section 7 for recognizing a word and a sentence making up the speech. The word and sentence recognizing section 7 is also coupled to a word dictionary 8 storing phoneme notations. The word and sentence recognizing section 7 performs matching between the phoneme-symbol train outputted from the decoder 6 and the phoneme notations stored in the word dictionary 8. The output of the word and sentence recognizing section 7 is supplied to a task control apparatus 2 which runs applications such as banking services, information retrieval and others. In this embodiment, the task control apparatus 2 gives instructions to the speech recognition system 1, for example, the selection of a different dictionary to change the words to be recognized (one dictionary holds a group of words which can be recognized from one speech input, and the words to be recognized can be changed by selecting one of the dictionaries), and the start of the recognition.
  • Here, the phoneme recognizing section 3 and the coder 4 are placed at the user side, while the decoder 6, the word and sentence recognizing section 7 and the word dictionary 8 are placed at the central processing equipment side, which is located remotely from the user side (an illustrative sketch of this split follows).
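  • As a purely illustrative aid (not part of the patent disclosure), the split just described can be sketched in Python as follows; the phoneme recognizing section 3 is stubbed out, and the simple byte conversion merely stands in for the line coding performed by the coder 4 and the decoder 6, all function names being hypothetical:

        def recognize_phonemes(speech_samples):
            # Stub for the phoneme recognizing section 3: a real implementation
            # would convert the acoustic signal into a phoneme-symbol train,
            # which may contain recognition errors.
            return "sibuja"

        def code_symbol_train(symbols):
            # Coder 4 (user side): put the symbol train into a form suitable
            # for the transmission line 5.
            return symbols.encode("ascii")

        def decode_symbol_train(coded):
            # Decoder 6 (central side): reverse process, restoring the original
            # symbol train.
            return coded.decode("ascii")

        # User side
        coded = code_symbol_train(recognize_phonemes(speech_samples=None))
        # ... the coded symbol train travels over the transmission line 5 ...
        # Central processing equipment side
        symbol_train = decode_symbol_train(coded)  # "sibuja", handed to section 7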
  • Fig. 4 shows one example of the contents of the word dictionary 8, which are described with phoneme symbols. In Fig. 4, the "word" column shows the Japanese Kanji (Chinese) characters corresponding to the respective word dictionary items; these characters are not used for the actual recognition. With this arrangement, an operation will be described hereinbelow. The following Table 1 shows the kinds of phonemes of the Japanese language that are used.
    [Table 1: phoneme symbols of the Japanese language (reproduced only as an image, imgb0001, in the original document)]
  • A speech is inputted as an electric signal through a microphone, a handset or the like to the phoneme recognizing section 3 in order to recognize the uttered phonemes. For example, in response to the utterance of "SHIBUYA", the speech signal takes the form illustrated by (a) in Fig. 5 and, as is obvious from the above-mentioned Table 1, the phoneme symbol train becomes "sibuja" as illustrated by (b) in Fig. 5. With current speech recognition techniques it is impossible to obtain a 100% phoneme recognition rate, and hence the phoneme train contains errors. The recognized phoneme symbol train is supplied to the coder 4 so as to be coded and outputted in a form suitable for the transmission line 5. In the case that the transmission line 5 is a general public telephone line, the coding is performed in accordance with the frequency shift keying (FSK) system, the phase shift keying (PSK) system or the like. It is also appropriate to use a digital line such as a bus-structured network (Ethernet) as the transmission line 5. The decoder 6 performs the reverse process of the coding on the signal transmitted through the transmission line 5 so as to restore it to the original phoneme symbol train. The word and sentence recognizing section 7 performs a matching of the phoneme symbol train from the decoder 6 with the phoneme notations of the respective dictionary items in the word dictionary 8 illustrated in Fig. 4. In the case of word recognition, the word number of the most similar word, i.e., "001" in this embodiment, is outputted as the recognition result to the task control apparatus 2 (a sketch of this matching follows). Here, the word dictionary 8 can be constructed with a plurality of groups to be used selectively for every speech recognition process in order to limit the vocabulary. In the case of sentence recognition, it is necessary to additionally use syntax information, word-semantic information and the like.
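  • The description does not prescribe a particular matching measure for the word and sentence recognizing section 7; the following minimal Python sketch uses a string-similarity ratio as one plausible choice, with a hypothetical three-entry word dictionary 8 (only the "sibuja"/Shibuya notation comes from the example above, the other notations being invented for illustration):

        from difflib import SequenceMatcher

        # Hypothetical contents of the word dictionary 8: word number -> phoneme notation.
        WORD_DICTIONARY = {
            "001": "sibuja",    # Shibuya (from the example above)
            "002": "sinzjuku",  # Shinjuku (illustrative notation)
            "003": "ueno",      # Ueno (illustrative notation)
        }

        def recognize_word(phoneme_train):
            # Word and sentence recognizing section 7: compare the (possibly
            # errorful) decoded phoneme train with every dictionary item and
            # return the word number of the most similar entry.
            def similarity(notation):
                return SequenceMatcher(None, phoneme_train, notation).ratio()
            return max(WORD_DICTIONARY, key=lambda number: similarity(WORD_DICTIONARY[number]))

        print(recognize_word("sibuja"))  # -> "001"
        print(recognize_word("sibuna"))  # an errorful phoneme train still maps to "001"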
  • A speech recognition system according to a second embodiment of this invention will be described hereinbelow with reference to Fig. 6, where parts corresponding to those in Fig. 3 are marked with the same numerals. In Fig. 6, the speech recognition system indicated by a dotted line and illustrated at numeral 1 is included in a dialogue or interaction system comprising a terminal apparatus 11 and a central apparatus 12 which are coupled to each other through a transmission line 5. The speech recognition system 1 comprises a phoneme recognizing section 3 responsive to an inputted speech, a coder 4 coupled to the phoneme recognizing section 3, a decoder 6 coupled through the transmission line 5 to the coder 4, a word and sentence recognizing section 7 and a word dictionary 8. Of these sections of the speech recognition system 1, the phoneme recognizing section 3 and the coder 4 are placed at the terminal apparatus 11 side, and the decoder 6, the word and sentence recognizing section 7 and the word dictionary 8 are disposed at the central apparatus 12 side. Further, at the central apparatus 12 side are disposed a task control apparatus 2 coupled to the word and sentence recognizing section 7 and another coder 13 coupled to the task control apparatus 2, and at the terminal apparatus 11 side are disposed another decoder 14 coupled through the transmission line 5 to the coder 13 and a terminal control section 15 coupled to the decoder 14.
  • An operation of the above-mentioned arrangement will be described hereinbelow. As in the above-described first embodiment, speech uttered by a user at the terminal apparatus 11 side is recognized by the speech recognition system 1. The response of the task control apparatus 2 to the recognition result is transmitted through the coder 13, the transmission line 5 and the decoder 14 to the terminal control section 15, which in turn delivers it to the user as speech or text through an indicator, a loudspeaker or the like. After this operation of the task control apparatus 2, a speech is again introduced into the phoneme recognizing section 3 of the speech recognition system 1. Here, a recognition start command for the speech recognition system 1 is transmitted from the task control apparatus 2 to the word and sentence recognizing section 7 and further through the terminal control section 15 to the phoneme recognizing section 3. With the above-described arrangement, flexibility is obtained because the recognition vocabulary and the operating procedure can be changed at the central apparatus side (a sketch of one such dialogue turn follows).
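  • A rough Python sketch of one dialogue turn of this second embodiment is given below; it reuses the helper functions from the sketches above, and the reply format produced by the task control apparatus 2 is an assumption made only for illustration:

        def central_apparatus(coded_symbol_train):
            # Central apparatus 12 side: decoder 6, word and sentence recognizing
            # section 7 with word dictionary 8, task control apparatus 2, coder 13.
            symbol_train = decode_symbol_train(coded_symbol_train)
            word_number = recognize_word(symbol_train)
            # The task control apparatus 2 acts on the result and prepares a reply
            # together with a recognition start command (message format assumed).
            reply = "RESULT " + word_number + "; START_RECOGNITION"
            return reply.encode("ascii")  # coder 13

        def terminal_apparatus(speech_samples):
            # Terminal apparatus 11 side: phoneme recognizing section 3, coder 4,
            # decoder 14 and terminal control section 15.
            coded = code_symbol_train(recognize_phonemes(speech_samples))
            reply = central_apparatus(coded)  # exchanged over the transmission line 5
            message = reply.decode("ascii")   # decoder 14
            print("terminal control section 15:", message)  # indicator or loudspeaker

        terminal_apparatus(speech_samples=None)  # prints: terminal control section 15: RESULT 001; START_RECOGNITION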
  • According to the above-described first and second embodiments, the phonemes expressing a speech are recognized, and the resulting symbol train is coded and transmitted through a transmission means to a central processing apparatus. The central processing apparatus decodes it and recognizes and outputs the corresponding word or sentence. Thus, as compared with the direct transmission of a speech, it is possible to prevent the reduction of the speech recognition rate caused by line noise and the like, and it is further possible to recognize word speech and sentence speech transmitted from a remote place. Moreover, as compared with the conventional system of Fig. 2, it is possible to reduce the cost of the terminal apparatus to be disposed at the user side.
  • It should be understood that the foregoing relates only to preferred embodiments of the present invention, and that it is intended to cover all changes and modifications of the embodiments herein used for the purposes of the disclosure which do not constitute departures from the spirit and scope of the invention. For example, although phonemes are used as the basic units of the language to be recognized in the above-described embodiments, the present invention is not limited thereto, and it is also appropriate to use syllables as the basic units. In addition, although the description has been made in connection with the Japanese language, recognition of languages other than Japanese can be performed if the recognition is carried out in accordance with phonemes or other units corresponding thereto.

Claims (1)

1. A speech recognition system comprising:
means responsive to an input of a speech from an external device for extracting phonemes or syllables constituting the inputted speech to output them as a symbol train;
means coupled to said extracting means for coding said symbol train and outputting the coded symbol train;
means for transmitting the coded symbol train;
means coupled through said transmitting means to said coding means for decoding the coded symbol train to restore it to the original symbol train; and
means responsive to the decoded symbol train from said decoding means for recognizing a word or a sentence on the basis of the decoded symbol train.
EP90120020A 1989-10-19 1990-10-18 Speech recognition system Expired - Lifetime EP0423800B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP272846/89 1989-10-19
JP1272846A JPH03132797A (en) 1989-10-19 1989-10-19 Voice recognition device

Publications (3)

Publication Number Publication Date
EP0423800A2 true EP0423800A2 (en) 1991-04-24
EP0423800A3 EP0423800A3 (en) 1992-01-02
EP0423800B1 EP0423800B1 (en) 1995-02-01

Family

ID=17519590

Family Applications (1)

Application Number Title Priority Date Filing Date
EP90120020A Expired - Lifetime EP0423800B1 (en) 1989-10-19 1990-10-18 Speech recognition system

Country Status (3)

Country Link
EP (1) EP0423800B1 (en)
JP (1) JPH03132797A (en)
DE (1) DE69016568D1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0706172A1 (en) * 1994-10-04 1996-04-10 Hughes Aircraft Company Low bit rate speech encoder and decoder
US5909662A (en) * 1995-08-11 1999-06-01 Fujitsu Limited Speech processing coder, decoder and command recognizer
EP1032189A2 (en) * 1994-03-10 2000-08-30 CABLE & WIRELESS PLC Communication system
WO2001099096A1 (en) * 2000-06-20 2001-12-27 Sharp Kabushiki Kaisha Speech input communication system, user terminal and center system
EP1220202A1 (en) * 2000-12-29 2002-07-03 Alcatel System and method for coding and decoding speaker-independent and speaker-dependent speech information

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1051701B1 (en) 1998-02-03 2002-11-06 Siemens Aktiengesellschaft Method for voice data transmission
DE19933318C1 (en) * 1999-07-16 2001-02-01 Bayerische Motoren Werke Ag Method for the wireless transmission of messages between a vehicle-internal communication system and a vehicle-external central computer

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4473904A (en) * 1978-12-11 1984-09-25 Hitachi, Ltd. Speech information transmission method and system
GB2183880A (en) * 1985-12-05 1987-06-10 Int Standard Electric Corp Speech translator for the deaf
EP0286035A1 (en) * 1987-04-09 1988-10-12 Eliza Corporation Speech-recognition circuitry employing phoneme Estimation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58151726A (en) * 1982-03-05 1983-09-09 Nippon Telegr & Teleph Corp <Ntt> Voice transmitting system using satellite circuit

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4473904A (en) * 1978-12-11 1984-09-25 Hitachi, Ltd. Speech information transmission method and system
GB2183880A (en) * 1985-12-05 1987-06-10 Int Standard Electric Corp Speech translator for the deaf
EP0286035A1 (en) * 1987-04-09 1988-10-12 Eliza Corporation Speech-recognition circuitry employing phoneme Estimation

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1032189A2 (en) * 1994-03-10 2000-08-30 CABLE & WIRELESS PLC Communication system
EP1031963A2 (en) * 1994-03-10 2000-08-30 CABLE & WIRELESS PLC Communication system
US6125284A (en) * 1994-03-10 2000-09-26 Cable & Wireless Plc Communication system with handset for distributed processing
EP1031963A3 (en) * 1994-03-10 2000-10-18 CABLE & WIRELESS PLC Communication system
EP1032189A3 (en) * 1994-03-10 2000-10-25 CABLE & WIRELESS PLC Communication system
US6216013B1 (en) 1994-03-10 2001-04-10 Cable & Wireless Plc Communication system with handset for distributed processing
EP0706172A1 (en) * 1994-10-04 1996-04-10 Hughes Aircraft Company Low bit rate speech encoder and decoder
US5832425A (en) * 1994-10-04 1998-11-03 Hughes Electronics Corporation Phoneme recognition and difference signal for speech coding/decoding
US5909662A (en) * 1995-08-11 1999-06-01 Fujitsu Limited Speech processing coder, decoder and command recognizer
WO2001099096A1 (en) * 2000-06-20 2001-12-27 Sharp Kabushiki Kaisha Speech input communication system, user terminal and center system
US7225134B2 (en) 2000-06-20 2007-05-29 Sharp Kabushiki Kaisha Speech input communication system, user terminal and center system
EP1220202A1 (en) * 2000-12-29 2002-07-03 Alcatel System and method for coding and decoding speaker-independent and speaker-dependent speech information

Also Published As

Publication number Publication date
DE69016568D1 (en) 1995-03-16
JPH03132797A (en) 1991-06-06
EP0423800B1 (en) 1995-02-01
EP0423800A3 (en) 1992-01-02

Similar Documents

Publication Publication Date Title
CA2466652C (en) Method for compressing dictionary data
CA2043667C (en) Written language parser system
Fry Theoretical aspects of mechanical speech recognition
EP0751467A2 (en) Translation apparatus and translation method
US7676364B2 (en) System and method for speech-to-text conversion using constrained dictation in a speak-and-spell mode
EP0423800B1 (en) Speech recognition system
JPH07129594A (en) Automatic interpretation system
US20090306978A1 (en) Method and system for encoding languages
JP2002073074A (en) Method and device for recognizing numerical string in voice
Olson et al. Phonetic typewriter III
JPH08329088A (en) Speech input translation device
JPH0155507B2 (en)
JP2002189490A (en) Method of pinyin speech input
US20210407501A1 (en) Phonetic keyboard and system to facilitate communication in english
Sorin et al. Text-to-speech synthesis in the French electronic mail environment
Davis A voice interface to a direction giving program
Le Saint-Milon et al. TEXT-to-SPEECH SYNTHESIS IN THE FRENCH ELECTRONIC MAIL ENVIRONMENT
JP2817406B2 (en) Continuous speech recognition method
WO2001042875A2 (en) Language translation voice telephony
JP3183686B2 (en) Language input device
JPH038560B2 (en)
Gordos et al. Data-Base Rule-System for the MULTIVOX Text-To-Speech Converter Application for Arabic Language
Green Developments in synthetic speech
JPH05289608A (en) Conversation assisting device for deaf-mute and conversation assisting device for translation
JPS5897099A (en) Voice synthesizer

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19901018

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB

17Q First examination report despatched

Effective date: 19940513

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Effective date: 19950201

REF Corresponds to:

Ref document number: 69016568

Country of ref document: DE

Date of ref document: 19950316

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Effective date: 19950503

EN Fr: translation not filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: 746

Effective date: 19970901

REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20061018

Year of fee payment: 17

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20071018

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20071018