US20030130847A1 - Method of training a computer system via human voice input - Google Patents
- Publication number
- US20030130847A1 (application US 09/871,524)
- Authority
- US
- United States
- Prior art keywords
- word
- unknown word
- computer system
- spelling
- receiving
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/06—Elementary speech units used in speech synthesisers; Concatenation rules
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/04—Details of speech synthesis systems, e.g. synthesiser structure or memory management
Abstract
Description
- 1. Field of the Invention
- The present invention relates to a method of training a computer system via human voice input from a human teacher, with the computer system including a speech recognition engine.
- 2. Background Art
- A large concatenated voice system with a large vocabulary is capable of speaking a number of different words. For each word in the vocabulary of the large concatenated voice system, the system has been trained so that a particular word has a corresponding phonetic sequence. In large concatenated voice systems and other so-called artificial intelligence systems, manual data entry is usually used to train the systems. This is usually done by first training a data entry person in the advanced skill sets required to program the phonetic knowledge into specific elements of the computer program for storage and future use. This type of training technique is tedious, prone to errors, and has a tendency to be academic in entry style rather than capturing a true example of how a word is pronounced or what a word, phrase, or sentence means or translates to.
- Although the use of manual data entry to train large concatenated voice systems has been used in many applications that have been commercially successful, manual data entry training techniques have some shortcomings. As such, there is a need for a method of training a computer system that overcomes the shortcomings of the prior art.
- It is, therefore, an object of the present invention to provide a method of training a computer system via human voice input from a human teacher.
- In carrying out the above object, a method of training a computer system via human voice input from a human teacher is provided. The computer system has a text to speech engine and a speech recognition engine. The method comprises presenting a text spelling of an unknown word, and receiving a human voice pronunciation of the unknown word from the human teacher. The method further comprises determining a phonetic spelling of the unknown word with the speech recognition engine based on the human voice pronunciation of the unknown word. The text spelling is associated with the phonetic spelling to allow the text to speech engine to correctly pronounce the unknown word in the future, when presented with the text spelling of the unknown word.
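The four operations of this first method (presenting, receiving, determining, associating) can be pictured with a minimal sketch. This is an illustration only, not the patent's implementation: the `recognize_phonemes` stand-in, its hard-coded phoneme sequence, and the class name are all assumptions.

```python
def recognize_phonemes(audio):
    # Stand-in for the speech recognition engine's phonetic front end;
    # a real engine would derive this sequence from the teacher's audio.
    return ["B", "OW", "Z", "OW", "T", "R", "AA", "N"]

class PronunciationTrainer:
    """Associates text spellings with phonetic spellings for a TTS engine."""

    def __init__(self):
        self.lexicon = {}  # text spelling -> phonetic spelling

    def train(self, text_spelling, teacher_audio):
        # Determine a phonetic spelling from the teacher's pronunciation,
        # then associate it with the text spelling for future TTS use.
        phonetic = recognize_phonemes(teacher_audio)
        self.lexicon[text_spelling] = phonetic
        return phonetic

trainer = PronunciationTrainer()
trainer.train("bozotron", b"<teacher audio>")
print(trainer.lexicon["bozotron"])
```

Once trained, a text-to-speech engine consulting this lexicon would find a phonetic spelling for "bozotron" instead of falling back to letter-to-sound guessing.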
- It is appreciated that the phonetic spelling determined for the unknown word with the speech recognition engine may include a sequence of phoneme names and/or known words. In a preferred embodiment, after presenting the text spelling of the unknown word, the computer system, using speech output, requests to receive the human voice pronunciation of the unknown word. The request from the computer system takes a form of an ongoing dialog between the computer system and the human teacher. More preferably, the method further comprises establishing a plurality of request statements. Each request statement has an information content level. The information content levels range from a low information content level to a high information content level. The plurality of request statements are used by the computer system during the ongoing dialog. Most preferably, presenting, receiving, determining, and associating are repeated for a plurality of unknown words. The information content level for the request statements in the ongoing dialog progressively lessens as presenting, receiving, determining, and associating are repeated.
- Further, in carrying out the present invention, a method of training a computer system via human voice input from a human teacher is provided. The computer system has a speech recognition engine. The method comprises receiving a human voice pronunciation of an unknown word from the human teacher. The method further comprises determining a phonetic spelling of the unknown word with the speech recognition engine based on the human voice pronunciation of the unknown word, and receiving a known word that is related in meaning to the unknown word. The known word is associated with the phonetic spelling of the unknown word to allow the speech recognition engine to correctly recognize the unknown word in the future as related in meaning to the known word.
- Preferably, receiving the known word further comprises receiving a human voice pronunciation of the known word from the human teacher. Alternatively, receiving the known word further comprises receiving a text spelling of the known word.
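The second claimed method can likewise be sketched: associate a known word with the phonetic spelling of an unknown word so that the recognizer later treats the two as related in meaning. The class and method names, and the ARPAbet-style phoneme strings, are illustrative assumptions rather than anything the patent specifies.

```python
class MeaningTrainer:
    """Associates known words with phonetic spellings of unknown words."""

    def __init__(self):
        # phonetic spelling (stored as a tuple) -> related known word
        self.related = {}

    def associate(self, unknown_phonetic, known_word):
        # The known word may have arrived as voice or as text; either way
        # it is keyed by the unknown word's phonetic spelling.
        self.related[tuple(unknown_phonetic)] = known_word

    def recognize_related(self, phonetic):
        # Returns the known word taught as related in meaning, if any.
        return self.related.get(tuple(phonetic))

mt = MeaningTrainer()
car_parts = ["K", "AA", "R", "P", "AA", "R", "T", "S"]
mt.associate(car_parts, "auto parts")
print(mt.recognize_related(car_parts))
```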
- Still further, in carrying out the present invention, a computer readable storage medium having instructions stored thereon that direct a computer to perform a method of training a computer system via human voice input from a human teacher is provided. The computer system has a text to speech engine and a speech recognition engine. The medium comprises instructions for presenting a text spelling of an unknown word, and instructions for receiving a human voice pronunciation of the unknown word from the human teacher. The medium further comprises instructions for determining a phonetic spelling of the unknown word with the speech recognition engine based on the human voice pronunciation of the unknown word, and instructions for associating the text spelling with the phonetic spelling. This association allows the text to speech engine to correctly pronounce the unknown word in the future when presented with the text spelling of the unknown word.
- Even further, in carrying out the present invention, a computer readable storage medium having instructions stored thereon that direct a computer to perform a method of training a computer system via human voice input from a human teacher is provided. The computer system has a speech recognition engine. The medium further comprises instructions for receiving a human voice pronunciation of an unknown word from the human teacher, and instructions for determining a phonetic spelling of the unknown word with the speech recognition engine based on the human voice pronunciation of the unknown word. The medium further comprises instructions for receiving a known word that is related in meaning to the unknown word, and instructions for associating the known word with the phonetic spelling of the unknown word. The association allows the speech recognition engine to correctly recognize the unknown word in the future as related in meaning to the known word.
- The advantages associated with embodiments of the present invention are numerous. In accordance with the present invention, a system and method to train computer systems via human voice input are provided. Automatic phonetic transcription may be used to enable human teaching of semi-intelligent computer systems correct pronunciation for speech output and word, phrase, and sentence meanings. Further, speech output from and human speech input to a computer may be used to ask human teachers questions and accept input from the human teacher to improve performance of the computer system.
- The above object and other objects, features, and advantages of the present invention will be readily appreciated by one of ordinary skill in the art in the following detailed description of the preferred embodiment when taken in connection with the accompanying drawings.
- FIG. 1 illustrates a computer system and a method of training the computer system in accordance with the present invention;
- FIG. 2 illustrates a method of training the computer system in accordance with the present invention;
- FIG. 3 illustrates a method of the present invention; and
- FIG. 4 illustrates another method of the present invention.
- With reference now to FIG. 1, a computer system is generally indicated at 10. System 10 includes a computer 12, a text to speech engine 14, and a speech recognition engine 16. Speech recognition engine 16 uses word recognizer 18 and/or database with phonetics 20 to determine the phonetic spelling of an unknown word based on human voice pronunciation of the unknown word. System 10 includes speaker 22 and microphone 24.
- In accordance with the present invention, computer system 10 is trained via human voice input from a human teacher. First, computer 12 is presented with a text spelling of an unknown word. The text spelling of the unknown word may be presented to computer 12 in a variety of ways. For example, computer 12 may manually receive the text spelling of the unknown word, or may, in any other way, come across the text spelling of the unknown word. Thereafter, a human voice pronunciation of the unknown word is received by system 10 at microphone 24 from a human teacher. Speech recognition engine 16 determines a phonetic spelling of the unknown word based on the human voice pronunciation of the unknown word. It is appreciated that the phonetic spelling may include a sequence of phoneme names and/or known words as determined by word recognizer 18 and/or database with phonetics 20. Further, in a preferred implementation, after the text spelling of the unknown word is presented, system 10, using speech output at speaker 22, requests to receive the human voice pronunciation of the unknown word.
- In a preferred embodiment, the request by the computer system to receive the human voice pronunciation of the unknown word takes a form of an ongoing dialog between the computer system and the human teacher, as illustrated by example in FIG. 2.
- That is, in accordance with the present invention, speech output from and speech input to a computer are used to ask human teachers questions and to accept input from the human teacher to improve performance of the computer system. The improved performance can concern how the computer performs an operation, such as pronouncing a word or assembling a sentence or phrase, or how the computer translates information. A natural dialog with the computer can be set up so that realistic data can be captured. For example, if the word “bozotron” is being pronounced by the system, the computer can ask the teacher for advice on how to pronounce the word. The computer would have a list of ways to ask the questions, with a variable for the questionable data. Further, the computer may develop its own questions.
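One way such a question-and-capture exchange could be structured is sketched below. The function names, the exact question wording, and the injected speech I/O callables are all hypothetical; the point is the pattern of asking, listening, and taking the raw phonetic code from the recognition front end.

```python
def dialog_capture(say, listen, phonetic_front_end, text_spelling):
    # Ask the teacher for the pronunciation (speech output)...
    say(f"How do I pronounce '{text_spelling}'?")
    # ...receive the teacher's spoken answer (speech input)...
    audio = listen()
    # ...and take the direct phonetic code from the recognition front end
    # rather than translating it into known words.
    phonetic = phonetic_front_end(audio)
    say(f"I will say it as {'-'.join(phonetic)}. Is that right?")
    return text_spelling, phonetic

spoken = []
captured = dialog_capture(
    say=spoken.append,
    listen=lambda: b"<teacher audio>",
    phonetic_front_end=lambda audio: ["B", "OW", "Z", "OW", "T", "R", "AA", "N"],
    text_spelling="bozotron",
)
print(captured)
```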
- As best shown in FIG. 2, an example of an ongoing natural dialog between a human teacher and a computer is generally indicated at 30.
- At block 32, the computer has been presented with the text spelling of the unknown word and is requesting to receive the human voice pronunciation of the unknown word. At block 34, the teacher responds to the computer. At block 36, the computer responds to the teacher and shows the teacher the text spelling of the unknown word. At the following blocks, the teacher and the computer maintain an ongoing dialog, discussing the unknown word. At block 46, the teacher provides the computer system with the human voice pronunciation of the unknown word. At this point, the computer stops translating the phonetic codes from the speech recognition engine and takes the direct phonetic code from the speech recognition front end. That is, the computer determines the phonetic spelling of the unknown word with the speech recognition engine 16 (FIG. 1) based on the human voice pronunciation of the unknown word. At block 48, the computer switches back to the native language of the teacher and confirms the pronunciation with similar dialog using the new phonetic capture from the teacher. Thereafter, the text spelling of the unknown word is associated with the phonetic spelling determined by the speech recognition engine to correctly pronounce the unknown word in the future when presented with the text spelling of the unknown word.
- It is appreciated that a plurality of statements are established for use by the computer during the dialog with the human teacher. In a preferred implementation, each statement, or request statement (because the statements are used to ultimately request to receive the human voice pronunciation of the unknown word from the human teacher), has an information content level. The information content levels range from a low information content level to a high information content level. The plurality of request statements are used by the computer system during the ongoing dialog.
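The idea of request statements ordered by information content can be sketched as a simple lookup. The first and last phrasings come from the patent's own examples; the middle statement and the selection rule (clamping on the count of words processed) are invented for illustration.

```python
REQUEST_STATEMENTS = [
    # Ordered from high information content to low information content.
    "I have a new word that I do not know how to pronounce. "
    "Do you have time to listen to my question?",
    "Here is another word I am unsure how to pronounce. Could you say it?",
    "Want to hear the next one?",
    "Got time for another?",
]

def next_request(words_processed):
    # Progressively lessen information content as more unknown words are
    # processed, clamping at the lowest-content statement.
    index = min(words_processed, len(REQUEST_STATEMENTS) - 1)
    return REQUEST_STATEMENTS[index]

print(next_request(0))  # full explanation for the first word
print(next_request(9))  # terse request by the tenth word
```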
- Preferably, during the ongoing dialog, the computer system progressively lessens the information content level of the request statements. For example, at block 32, the computer may explain that it has several words that it does not know how to pronounce. Thereafter, for the first unknown word, request statements having high information content levels are used until the text spelling of the unknown word is associated with a phonetic spelling. The computer system may then repeat the same steps for the second unknown word, this time using request statements having a slightly lower information content level. After the second unknown word's text spelling has been associated with a phonetic spelling, the process may again be repeated for the third word, with an even lower information content level. The use of progressively lower information content levels for the request statements provides a more natural conversation flow between the human teacher and the computer system. For example, by the time the computer is asking to receive the human voice pronunciation of a tenth word, it is no longer necessary for the computer to say “I have a new word that I do not know how to pronounce. Do you have time to listen to my question?” Instead, the computer may say “Want to hear the next one?” or “Got time for another?”
- It is appreciated that embodiments of the present invention provide a method of training a computer system via human voice input from a human teacher. Automatic phonetic transcription is used to enable human teaching of semi-intelligent computer systems correct pronunciation for speech output and word, phrase, and sentence meanings. As shown in FIG. 3, a first method of the present invention includes, at
block 60, presenting a text spelling of an unknown word. At block 62, a plurality of request statements having information content levels ranging from low to high are established. At block 64, the computer system requests to receive the human voice pronunciation of the unknown word. The request takes the form of an ongoing dialog (for example, FIG. 2) of request statements of progressively declining information content level. The information content level may decline during the ongoing dialog for a single unknown word, or may progressively decline during an ongoing dialog in which multiple unknown words are processed. At block 66, the computer system receives the human voice pronunciation of the unknown word. At block 68, the computer system determines the phonetic spelling of the unknown word using a sequence of phonemes and/or known words. At block 70, the text spelling of the unknown word is associated with the determined phonetic spelling of the unknown word to allow the text to speech engine to correctly pronounce the unknown word in the future when presented with the text spelling of the unknown word again. - Another embodiment of the present invention is illustrated in FIG. 4. At
block 80, the human voice pronunciation of an unknown word is received from the human teacher. At block 82, a phonetic spelling of the unknown word is determined with the speech recognition engine, based on the human voice pronunciation of the unknown word. At block 84, a known word is received. The known word is related in meaning to the unknown word. At block 86, the known word is associated with the phonetic spelling of the unknown word to allow the speech recognition engine to correctly recognize the unknown word in the future as related in meaning to the known word. That is, the embodiment illustrated in FIG. 4 associates a known word with the phonetic spellings of unknown words. For example, the method illustrated in FIG. 4 may be utilized to provide a smart lookup system. The teacher may request the computer system to look up information relating to “car parts.” The computer system may respond by stating “I don't have any listing for car parts.” The teacher may respond by stating “Do you have any listings for automobile parts or auto parts?” The computer may respond “Yes, I have listings for auto parts.” The teacher may respond “For future reference, car parts are the same thing as auto parts.” (Block 84.) Thereafter, the computer system associates the known word “auto parts” with the phonetic spelling of the unknown word “car parts.” In the future, if a user were to ask the computer system “Do you have any listings for car parts?” the computer would respond “I do not have any listing specifically for car parts; however, I do have listings for auto parts, which are known to me to be related in meaning to car parts.” - It is appreciated that in the method illustrated in FIG. 4, receiving the known word may include receiving a human voice pronunciation of the known word from the human teacher or receiving a text spelling of the known word.
For example, the known word “auto parts” corresponding to the unknown word “car parts” may be provided by human voice input or by text input.
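The smart-lookup behavior of FIG. 4 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: text phrases stand in for phonetic spellings, and the listing data, `teach_synonym`, and `lookup` names are assumptions introduced here for clarity.

```python
# Sketch of the FIG. 4 smart lookup (blocks 80-86). Phrases stand in for
# phonetic spellings; listing data and function names are illustrative.

listings = {"auto parts": ["Joe's Auto Parts", "Acme Auto Supply"]}
synonyms: dict = {}  # unknown word -> known word (the block 86 association)

def teach_synonym(unknown_word: str, known_word: str) -> None:
    """Blocks 84-86: receive a known word and associate it with the unknown word."""
    synonyms[unknown_word] = known_word

def lookup(phrase: str) -> list:
    """Return listings for the phrase, falling back to a taught association."""
    if phrase in listings:
        return listings[phrase]
    known = synonyms.get(phrase)
    if known is not None:
        return listings.get(known, [])
    return []

# "For future reference, car parts are the same thing as auto parts."
teach_synonym("car parts", "auto parts")
```

After the association is taught, a later query for “car parts” resolves through the known word “auto parts,” mirroring the dialog above.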
- It is appreciated that, in accordance with the present invention, the methods may be implemented via a computer readable storage medium having instructions stored thereon that direct a computer to perform a method of the present invention. That is, the methods described in FIGS. 1-4 may be implemented via instructions stored on a computer readable storage medium. For example, to implement the method of FIG. 3, a computer readable storage medium has instructions stored thereon including instructions for presenting a text spelling of an unknown word, and instructions for receiving a human voice pronunciation of the unknown word from the human teacher. The medium also includes instructions for determining a phonetic spelling of the unknown word, and instructions for associating the text spelling with the phonetic spelling.
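The FIG. 3 steps (blocks 60-70), together with the progressively terser request statements of block 62, can be sketched as a small training loop. This is a hypothetical sketch: the speech recognition engine is mocked by a `recognize_phonemes` stub that simply returns the phoneme list it is handed, and all names and statements are illustrative assumptions rather than the patent's code.

```python
# Sketch of the FIG. 3 training method (blocks 60-70). The recognizer is a
# stub standing in for a real speech recognition engine; names are illustrative.

REQUEST_STATEMENTS = [  # block 62: request statements, high to low information content
    "I have a new word that I do not know how to pronounce. "
    "Do you have time to listen to my question?",
    "Here is another word I cannot pronounce. Could you say it for me?",
    "Want to hear the next one?",
    "Got time for another?",
]

pronunciation_lexicon: dict = {}  # text spelling -> phonetic spelling

def recognize_phonemes(utterance) -> list:
    """Stand-in for the speech recognition engine (block 68)."""
    return list(utterance)

def train_pronunciation(text_spelling: str, utterance) -> str:
    words_taught = len(pronunciation_lexicon)
    # block 64: pick a progressively lower-information request as teaching continues
    request = REQUEST_STATEMENTS[min(words_taught, len(REQUEST_STATEMENTS) - 1)]
    print(request)
    print(f"New word: {text_spelling}")              # block 60: present the text spelling
    phonemes = recognize_phonemes(utterance)          # blocks 66-68: receive and transcribe
    phonetic_spelling = " ".join(phonemes)
    pronunciation_lexicon[text_spelling] = phonetic_spelling  # block 70: associate
    return phonetic_spelling
```

Each call teaches one word; because the request statement is indexed by the number of words already taught, the dialog naturally shortens from the full introduction to “Got time for another?” as the session proceeds.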
- In addition, the method illustrated in FIG. 4 may be implemented via instructions on a computer readable storage medium. The medium includes instructions for receiving a human voice pronunciation of an unknown word from a human teacher, and instructions for determining a phonetic spelling of the unknown word. The medium further includes instructions for receiving a known word that is related in meaning to the unknown word, and instructions for associating the known word with the phonetic spelling of the unknown word.
- In addition, it is appreciated that all optional features and preferred features described herein for methods of the present invention may also be implemented as instructions on a computer readable storage medium.
- While embodiments of the invention have been illustrated and described, it is not intended that these embodiments illustrate and describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/871,524 US7127397B2 (en) | 2001-05-31 | 2001-05-31 | Method of training a computer system via human voice input |
Publications (2)
Publication Number | Publication Date |
---|---|
US20030130847A1 true US20030130847A1 (en) | 2003-07-10 |
US7127397B2 US7127397B2 (en) | 2006-10-24 |
Family
ID=25357644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/871,524 Expired - Lifetime US7127397B2 (en) | 2001-05-31 | 2001-05-31 | Method of training a computer system via human voice input |
Country Status (1)
Country | Link |
---|---|
US (1) | US7127397B2 (en) |
Cited By (88)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040098266A1 (en) * | 2002-11-14 | 2004-05-20 | International Business Machines Corporation | Personal speech font |
US20040111271A1 (en) * | 2001-12-10 | 2004-06-10 | Steve Tischer | Method and system for customizing voice translation of text to speech |
US20050114131A1 (en) * | 2003-11-24 | 2005-05-26 | Kirill Stoimenov | Apparatus and method for voice-tagging lexicon |
US20060069567A1 (en) * | 2001-12-10 | 2006-03-30 | Tischer Steven N | Methods, systems, and products for translating text to speech |
US20090228277A1 (en) * | 2008-03-10 | 2009-09-10 | Jeffrey Bonforte | Search Aided Voice Recognition |
US7945445B1 (en) * | 2000-07-14 | 2011-05-17 | Svox Ag | Hybrid lexicon for speech recognition |
GB2480649A (en) * | 2010-05-26 | 2011-11-30 | Lin Sun | Non-native language spelling correction |
US20120016675A1 (en) * | 2010-07-13 | 2012-01-19 | Sony Europe Limited | Broadcast system using text to speech conversion |
CN103065621A (en) * | 2012-11-20 | 2013-04-24 | 高剑青 | Voice recognition based on phonetic symbols |
US20130238317A1 (en) * | 2012-03-08 | 2013-09-12 | Hon Hai Precision Industry Co., Ltd. | Vocabulary look up system and method using same |
US20140330568A1 (en) * | 2008-08-25 | 2014-11-06 | At&T Intellectual Property I, L.P. | System and method for auditory captchas |
WO2014197334A2 (en) * | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US8954325B1 (en) * | 2004-03-22 | 2015-02-10 | Rockstar Consortium Us Lp | Speech recognition in automated information services systems |
US9412392B2 (en) | 2008-10-02 | 2016-08-09 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10319250B2 (en) | 2016-12-29 | 2019-06-11 | Soundhound, Inc. | Pronunciation guided by automatic speech recognition |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11751733B2 (en) | 2007-08-29 | 2023-09-12 | Omachron Intellectual Property Inc. | Portable surface cleaning apparatus |
US9211044B2 (en) | 2011-03-04 | 2015-12-15 | Omachron Intellectual Property Inc. | Compact surface cleaning apparatus |
US11690489B2 (en) | 2009-03-13 | 2023-07-04 | Omachron Intellectual Property Inc. | Surface cleaning apparatus with an external dirt chamber |
CA2907064C (en) | 2009-03-13 | 2018-01-02 | Wayne Ernest Conrad | Portable surface cleaning apparatus |
US8346561B2 (en) * | 2010-02-23 | 2013-01-01 | Behbehani Fawzi Q | Voice activatable system for providing the correct spelling of a spoken word |
US9232881B2 (en) | 2011-03-04 | 2016-01-12 | Omachron Intellectual Property Inc. | Surface cleaning apparatus with removable handle assembly |
US8646146B2 (en) | 2011-03-04 | 2014-02-11 | G.B.D. Corp. | Suction hose wrap for a surface cleaning apparatus |
US8689395B2 (en) * | 2011-03-04 | 2014-04-08 | G.B.D. Corp. | Portable surface cleaning apparatus |
WO2014079258A1 (en) * | 2012-11-20 | 2014-05-30 | Gao Jianqing | Voice recognition based on phonetic symbols |
US10546580B2 (en) | 2017-12-05 | 2020-01-28 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for determining correct pronunciation of dictated words |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5682539A (en) * | 1994-09-29 | 1997-10-28 | Conrad; Donovan | Anticipated meaning natural language interface |
US5724481A (en) * | 1995-03-30 | 1998-03-03 | Lucent Technologies Inc. | Method for automatic speech recognition of arbitrary spoken words |
US5852801A (en) * | 1995-10-04 | 1998-12-22 | Apple Computer, Inc. | Method and apparatus for automatically invoking a new word module for unrecognized user input |
US6041300A (en) * | 1997-03-21 | 2000-03-21 | International Business Machines Corporation | System and method of using pre-enrolled speech sub-units for efficient speech synthesis |
US6078885A (en) * | 1998-05-08 | 2000-06-20 | At&T Corp | Verbal, fully automatic dictionary updates by end-users of speech synthesis and recognition systems |
US6092044A (en) * | 1997-03-28 | 2000-07-18 | Dragon Systems, Inc. | Pronunciation generation in speech recognition |
US6125341A (en) * | 1997-12-19 | 2000-09-26 | Nortel Networks Corporation | Speech recognition system and method |
US6144938A (en) * | 1998-05-01 | 2000-11-07 | Sun Microsystems, Inc. | Voice user interface with personality |
US6233553B1 (en) * | 1998-09-04 | 2001-05-15 | Matsushita Electric Industrial Co., Ltd. | Method and system for automatically determining phonetic transcriptions associated with spelled words |
US6321196B1 (en) * | 1999-07-02 | 2001-11-20 | International Business Machines Corporation | Phonetic spelling for speech recognition |
US20020055844A1 (en) * | 2000-02-25 | 2002-05-09 | L'esperance Lauren | Speech user interface for portable personal devices |
US6411932B1 (en) * | 1998-06-12 | 2002-06-25 | Texas Instruments Incorporated | Rule-based learning of word pronunciations from training corpora |
US6598018B1 (en) * | 1999-12-15 | 2003-07-22 | Matsushita Electric Industrial Co., Ltd. | Method for natural dialog interface to car devices |
US6598020B1 (en) * | 1999-09-10 | 2003-07-22 | International Business Machines Corporation | Adaptive emotion and initiative generator for conversational systems |
US20030182111A1 (en) * | 2000-04-21 | 2003-09-25 | Handal Anthony H. | Speech training method with color instruction |
US6629071B1 (en) * | 1999-09-04 | 2003-09-30 | International Business Machines Corporation | Speech recognition system |
US6694296B1 (en) * | 2000-07-20 | 2004-02-17 | Microsoft Corporation | Method and apparatus for the recognition of spelled spoken words |
US6721706B1 (en) * | 2000-10-30 | 2004-04-13 | Koninklijke Philips Electronics N.V. | Environment-responsive user interface/entertainment device that simulates personal interaction |
US6823313B1 (en) * | 1999-10-12 | 2004-11-23 | Unisys Corporation | Methodology for developing interactive systems |
- 2001-05-31: US application 09/871,524 filed; granted as US7127397B2 (status: not active, Expired - Lifetime)
Cited By (114)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US7945445B1 (en) * | 2000-07-14 | 2011-05-17 | Svox Ag | Hybrid lexicon for speech recognition |
US20060069567A1 (en) * | 2001-12-10 | 2006-03-30 | Tischer Steven N | Methods, systems, and products for translating text to speech |
US7483832B2 (en) * | 2001-12-10 | 2009-01-27 | At&T Intellectual Property I, L.P. | Method and system for customizing voice translation of text to speech |
US20040111271A1 (en) * | 2001-12-10 | 2004-06-10 | Steve Tischer | Method and system for customizing voice translation of text to speech |
US20040098266A1 (en) * | 2002-11-14 | 2004-05-20 | International Business Machines Corporation | Personal speech font |
US20050114131A1 (en) * | 2003-11-24 | 2005-05-26 | Kirill Stoimenov | Apparatus and method for voice-tagging lexicon |
US8954325B1 (en) * | 2004-03-22 | 2015-02-10 | Rockstar Consortium Us Lp | Speech recognition in automated information services systems |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US20090228277A1 (en) * | 2008-03-10 | 2009-09-10 | Jeffrey Bonforte | Search Aided Voice Recognition |
US8380512B2 (en) * | 2008-03-10 | 2013-02-19 | Yahoo! Inc. | Navigation using a search engine and phonetic voice recognition |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US20140330568A1 (en) * | 2008-08-25 | 2014-11-06 | At&T Intellectual Property I, L.P. | System and method for auditory captchas |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9412392B2 (en) | 2008-10-02 | 2016-08-09 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
GB2480649B (en) * | 2010-05-26 | 2017-07-26 | Sun Lin | Non-native language spelling correction |
GB2480649A (en) * | 2010-05-26 | 2011-11-30 | Lin Sun | Non-native language spelling correction |
US9263027B2 (en) * | 2010-07-13 | 2016-02-16 | Sony Europe Limited | Broadcast system using text to speech conversion |
US20120016675A1 (en) * | 2010-07-13 | 2012-01-19 | Sony Europe Limited | Broadcast system using text to speech conversion |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US20130238317A1 (en) * | 2012-03-08 | 2013-09-12 | Hon Hai Precision Industry Co., Ltd. | Vocabulary look up system and method using same |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
CN103065621A (en) * | 2012-11-20 | 2013-04-24 | 高剑青 | Voice recognition based on phonetic symbols |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197334A3 (en) * | 2013-06-07 | 2015-01-29 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
WO2014197334A2 (en) * | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10319250B2 (en) | 2016-12-29 | 2019-06-11 | Soundhound, Inc. | Pronunciation guided by automatic speech recognition |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
Also Published As
Publication number | Publication date |
---|---|
US7127397B2 (en) | 2006-10-24 |
Similar Documents
Publication | Title |
---|---|
US7127397B2 (en) | Method of training a computer system via human voice input |
US8788256B2 (en) | Multiple language voice recognition |
US8371857B2 (en) | System, method and device for language education through a voice portal |
CN110648690B (en) | Audio evaluation method and server |
CN110489756B (en) | Conversational human-computer interactive spoken language evaluation system |
US20020198715A1 (en) | Artificial language generation |
US11145222B2 (en) | Language learning system, language learning support server, and computer program product |
CN109461436A (en) | A kind of correcting method and system of speech recognition pronunciation mistake |
KR101487005B1 (en) | Learning method and learning apparatus of correction of pronunciation by input sentence |
CN110797010A (en) | Question-answer scoring method, device, equipment and storage medium based on artificial intelligence |
CN106328146A (en) | Video subtitle generation method and apparatus |
Ahsiah et al. | Tajweed checking system to support recitation |
JP2010282058A (en) | Method and device for supporting foreign language learning |
KR102269126B1 (en) | A calibration system for language learner by using audio information and voice recognition result |
US20010056345A1 (en) | Method and system for speech recognition of the alphabet |
Cámara-Arenas et al. | Automatic pronunciation assessment vs. automatic speech recognition: A study of conflicting conditions for L2-English |
CN113486970A (en) | Reading capability evaluation method and device |
Rudžionis et al. | Recognition of voice commands using hybrid approach |
KR20210059995A (en) | Method for Evaluating Foreign Language Speaking Based on Deep Learning and System Therefor |
KR101854379B1 (en) | English learning method for enhancing memory of unconscious process |
KR101487006B1 (en) | Learning method and learning apparatus of correction of pronunciation for pronenciaion using linking |
KR101487007B1 (en) | Learning method and learning apparatus of correction of pronunciation by pronunciation analysis |
CN109035896B (en) | Oral training method and learning equipment |
Filighera et al. | Towards A Vocalization Feedback Pipeline for Language Learners |
KR102361205B1 (en) | method for operating pronunciation correction system |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: QWEST COMMUNICATIONS INTERNATIONAL INC., COLORADO; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CASE, ELIOT M.;REEL/FRAME:011881/0561; Effective date: 20010522 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
FPAY | Fee payment | Year of fee payment: 4 |
FPAY | Fee payment | Year of fee payment: 8 |
AS | Assignment | Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA; Free format text: SECURITY INTEREST;ASSIGNOR:QWEST COMMUNICATIONS INTERNATIONAL INC.;REEL/FRAME:044652/0829; Effective date: 20171101 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); Year of fee payment: 12 |
AS | Assignment | Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, NEW YORK; Free format text: NOTES SECURITY AGREEMENT;ASSIGNOR:QWEST COMMUNICATIONS INTERNATIONAL INC.;REEL/FRAME:051692/0646; Effective date: 20200124 |
AS | Assignment | Owner name: QWEST COMMUNICATIONS INTERNATIONAL INC., LOUISIANA; Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMPUTERSHARE TRUST COMPANY, N.A, AS SUCCESSOR TO WELLS FARGO BANK, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT;REEL/FRAME:066885/0917; Effective date: 20240322 |