Publication number: US 7831431 B2
Publication type: Grant
Application number: US 11/554,830
Publication date: Nov. 9, 2010
Filing date: Oct. 31, 2006
Priority date: Oct. 31, 2006
Also published as: US20080103779
Inventors: Ritchie Winson Huang, David Michael Kirsch
Original Assignee: Honda Motor Co., Ltd.
External links: USPTO, USPTO Assignment, Espacenet
Voice recognition updates via remote broadcast signal
US 7831431 B2
Abstract
A method and a system are provided for dynamically updating voice recognition commands available for controlling a device in a vehicle. A receiver unit of a voice recognition system, located in the vehicle, receives a remotely transmitted broadcast signal. A processor of the voice recognition system extracts voice recognition data from a remaining portion of the broadcast signal and updates voice recognition commands stored in a memory unit, coupled to the processor, with the extracted voice recognition data. A voice input device of the voice recognition system receives a spoken command from a user. A voice recognition engine, coupled to the voice input device and the memory unit, determines whether the spoken command matches one of the stored voice recognition commands in the memory unit. If a match occurs, a recognized voice command is generated. The recognized voice command is sent to an affected device in the vehicle.
Images (8)
Claims (21)
1. A method for remotely and dynamically updating voice recognition commands available for controlling a device in a vehicle, the method comprising:
(a) receiving, locally, a broadcast signal from a remote source, the broadcast signal comprising voice recognition data;
(b) filtering, locally, the received broadcast signal by separating the voice recognition data from a remainder of the broadcast signal;
(c) updating a local database containing previously stored voice recognition data with the received voice recognition data;
(d) receiving, locally, a spoken command from a local input device;
(e) determining whether the received spoken command matches the voice recognition data stored in the updated local database; and
(f) generating, locally, a recognized voice command based at least in part on matching the received spoken command with the voice recognition data stored in the updated local database.
2. The method as recited in claim 1, wherein updating the local database containing previously stored voice recognition data with the received voice recognition data further comprises determining a portion of the voice recognition data that is new and adding the new voice recognition data to the local database.
3. The method as recited in claim 1, wherein updating the local database containing previously stored voice recognition data with the received voice recognition data further comprises determining a portion of the voice recognition data that is changed and modifying the voice recognition data in the local database with the changed voice recognition data.
4. The method as recited in claim 1, wherein receiving, locally, a broadcast signal further comprises receiving, locally, a satellite signal.
5. The method as recited in claim 1, wherein receiving, locally, a broadcast signal further comprises receiving, locally, a modified broadcast signal.
6. The method as recited in claim 5, wherein receiving, locally, a broadcast signal comprising voice recognition data further comprises receiving, locally, the voice recognition data in a subcarrier of the modified broadcast signal.
7. The method as recited in claim 1, wherein receiving, locally, a broadcast signal further comprises receiving, locally, a dedicated broadcast signal.
8. The method as recited in claim 1, wherein receiving, locally, a broadcast signal containing voice recognition data comprises receiving, locally, voice recognition data further comprising phonetic data for station name identification.
9. The method as recited in claim 1, further comprising sending the recognized voice command to an affected device in the vehicle after the generating step.
10. The method as recited in claim 9, wherein sending the recognized voice command to the affected device comprises sending the recognized voice command to a device selected from a group consisting of a radio, an air conditioning unit, power windows, door locks, and a navigation unit.
11. A system for dynamically updating voice recognition commands available for controlling a device in a vehicle, the system comprising:
a broadcast system, to be located remotely from the vehicle, for sending a broadcast signal comprising voice recognition data; and
an in-vehicle voice recognition system to be located within the vehicle, the in-vehicle voice recognition system comprising:
a receiver unit adapted to receive the broadcast signal;
a memory unit containing a database of stored voice recognition commands;
a processor coupled to the receiver unit and the memory unit, the processor being adapted to extract the voice recognition data from a remaining portion of the broadcast signal and further adapted to update the stored voice recognition commands stored in the memory unit with the extracted voice recognition data;
a voice input device adapted to receive a spoken command from a user; and
a voice recognition engine coupled to the voice input device and the memory unit, the voice recognition engine being adapted to determine whether the spoken command matches one of the stored voice recognition commands in the memory unit.
12. The system as recited in claim 11, wherein the voice input device comprises a microphone.
13. The system as recited in claim 11, wherein the voice recognition data comprises station name identification.
14. The system as recited in claim 11, wherein the broadcast system comprises a satellite radio broadcast system.
15. The system as recited in claim 11, wherein the broadcast signal comprises a modified broadcast signal.
16. The system as recited in claim 15, wherein the voice recognition data is contained in a subcarrier of the modified broadcast signal.
17. The system as recited in claim 11, wherein the broadcast signal comprises a dedicated broadcast signal.
18. The system as recited in claim 11, wherein the voice recognition engine is further adapted to send the recognized voice command to an affected device in the vehicle.
19. The system as recited in claim 18, wherein the affected device is selected from a group consisting of a radio, an air conditioning unit, power windows, door locks, and a navigation unit.
20. The system as recited in claim 11, wherein the processor is further adapted to determine a portion of the voice recognition data that is new and add the new voice recognition data to the database.
21. The system as recited in claim 11, wherein the processor is further adapted to determine a portion of the voice recognition data that is changed and modify the voice recognition data in the database with the changed voice recognition data.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention generally relates to a system and method for dynamically updating voice recognition commands stored in a vehicle. More specifically, the present invention relates to dynamically updating the voice recognition commands for various in-vehicle devices.

2. Description of Related Art

Automobiles equipped with speech-recognition and text-to-speech capabilities simplify tasks that would otherwise require a driver to take away his/her attention from driving. The uses of speech recognition range from controlling internal car temperature and radio volume to driver authentication and theft detection.

Current voice recognition systems offered on production automobiles allow a user (e.g., driver or passenger) to use dedicated, on-board voice recognition commands to control in-vehicle functions. For example, for in-vehicle radio or entertainment system controls, several voice recognition commands are available to the driver/passenger for choosing a specific preset radio station, radio frequency or multimedia source (e.g., CD or DVD). All of these voice recognition commands must, however, already be stored in the memory of the control system of the vehicle. These voice recognition commands cannot be updated without replacing the storage media. In other words, the voice database for storing these voice recognition commands resides on a static system. If new features or commands are introduced, the storage media must be replaced, which limits the ability of the system to be updated on a continual basis.

FIG. 1 illustrates a conventional in-vehicle voice recognition system 10. This conventional system 10 generally includes a voice recognition engine 12, a database 14 and a microphone 16. The available voice recognition commands are stored within the database 14, typically on a DVD that is provided with the vehicle. As discussed above, loading a new voice command into a conventional in-vehicle database would require, for example, issuing a new DVD and loading the information from that DVD into the vehicle.

The microphone 16 converts the utterance by the driver (e.g., “air conditioning on”) into pulse code modulation (PCM) data, which is then transmitted to the voice recognition engine 12. The voice recognition engine 12 compares the PCM data to the available voice recognition commands stored in the database 14. If the voice recognition engine 12 matches the PCM data to a voice command, the voice recognition engine 12 sends the voice command, or recognized utterance 20, to the target in-vehicle device (e.g., air conditioner) and the function is executed (e.g., the air conditioner turns on).

When a conventional voice recognition system processes a command, it first captures the utterance as PCM data, which is essentially a voice file of the utterance. In order for the voice recognition engine 12 to recognize a human utterance, the engine 12 must translate this PCM file into a recognizable format. This translated phonetic data is commonly referred to in the voice recognition industry as an ESR baseform. ESR baseforms are the fundamental linguistic representations of how the system recognizes a voice recognition command. These ESR baseforms are matched against a database of available commands held in a storage medium, and a command is executed when it is correctly matched. The voice recognition engine 12 performs all of the translating and processing. This technology is well known within the voice recognition industry.
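For illustration only, the matching flow described above can be sketched in a few lines of Python. The command table, the baseform conversion stand-in, and the device identifiers below are hypothetical and are not part of the patented system; a production engine would derive baseforms from acoustic models rather than from text.

```python
# Toy sketch of the prior-art flow: an utterance (already captured as PCM data)
# is translated into a phonetic "baseform" and compared against a database of
# stored command baseforms. The baseform conversion here is a text stand-in.
from typing import Optional, Tuple

# Hypothetical command database: baseform -> (target device, command).
COMMAND_DATABASE = {
    "air conditioning on": ("hvac", "power_on"),
    "select radio channel npr": ("radio", "tune:NPR"),
}

def pcm_to_baseform(utterance_text: str) -> str:
    """Stand-in for the engine's PCM-to-baseform translation."""
    return " ".join(utterance_text.lower().split())

def recognize(utterance_text: str) -> Optional[Tuple[str, str]]:
    """Return the (device, command) pair if the utterance matches a stored command."""
    return COMMAND_DATABASE.get(pcm_to_baseform(utterance_text))

print(recognize("Air Conditioning ON"))  # ('hvac', 'power_on')
print(recognize("open sunroof"))         # None: command not in the static database
```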

Today, vehicles often include a satellite or digital radio receiver, which offers an uninterrupted, near-CD-quality radio broadcast. For example, a person could drive from San Francisco, Calif., to Washington, D.C., without ever having to change the radio station. The driver would never hear static interfering with his/her favorite radio station, and the music would be interrupted by few or no commercials. XM Satellite Radio and Sirius Satellite Radio have both launched such a service. Currently, a driver cannot use a voice command to select a digital radio channel by name. Instead, the driver may only audibly select a digital radio station by the station number. With more than 100 channels typically available through a satellite radio, choosing the digital station by channel number is difficult.

New digital radio stations are regularly added to the existing radio broadcast services. Even if the driver could use a voice command to select a radio station by name, the voice recognition commands would need to be updated every time a new station is added to the broadcast system. Otherwise, a driver would not be able to select the newly added radio station(s) as easily as the radio stations that existed when the satellite radio was purchased.

Therefore, there is a need for a system for dynamically updating the voice recognition database of a vehicle to accommodate the rapid expansion and penetration of voice recognition into the automotive industry.

SUMMARY OF THE INVENTION

The present invention provides a system and method for dynamically updating voice recognition commands stored in a vehicle, which in turn provides a user friendly in-vehicle voice recognition system.

In accordance with one aspect of the embodiments described herein, there is provided a method for remotely and dynamically updating voice recognition commands available for controlling a device in a vehicle comprising the steps of: (a) receiving a broadcast signal comprising voice recognition data; (b) filtering the received broadcast signal by separating the voice recognition data from a remainder of the broadcast signal; (c) updating a database containing previously stored voice recognition data with the received voice recognition data; (d) receiving a spoken command from an input device; (e) determining whether the received spoken command matches the voice recognition data stored in the database; and (f) generating a recognized voice command based at least in part on matching the received spoken command with the voice recognition data stored in the database.
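As a rough illustration of steps (a) through (f), the following Python sketch models the broadcast payload, the local database, and the matching step as plain dictionaries; all field names and command strings are hypothetical and chosen only to make the flow concrete.

```python
# Illustrative sketch of steps (a)-(f): receive a broadcast payload, separate
# the voice recognition data, merge it into the local database, then match a
# spoken command against the updated database.

def filter_broadcast(broadcast_signal):
    """Step (b): split voice recognition data from the remainder of the signal."""
    vr_data = broadcast_signal.pop("voice_recognition_data", {})
    return vr_data, broadcast_signal

def update_database(local_db, vr_data):
    """Step (c): add new entries and overwrite changed ones."""
    local_db.update(vr_data)

def recognize(local_db, spoken_command):
    """Steps (d)-(f): match the spoken command and return a recognized command."""
    return local_db.get(spoken_command.lower())

local_db = {"select radio channel npr": "tune:NPR"}           # previously stored data
signal = {"audio": b"...",                                    # step (a): received signal
          "voice_recognition_data": {"select radio channel cnn": "tune:CNN"}}

vr_data, remainder = filter_broadcast(signal)                 # step (b)
update_database(local_db, vr_data)                            # step (c)
print(recognize(local_db, "Select Radio Channel CNN"))        # steps (d)-(f): 'tune:CNN'
```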

In accordance with another aspect of the embodiments described herein, there is provided a system for dynamically updating voice recognition commands available for controlling a device in a vehicle having a broadcast system for sending a broadcast signal comprising voice recognition data and an in-vehicle voice recognition system. The in-vehicle voice recognition system comprises a receiver unit, a memory unit, a processor, a voice input device, and a voice recognition engine. The receiver unit is adapted to receive the broadcast signal. The memory unit contains a database of stored voice recognition commands. The processor is coupled to the receiver unit and the memory unit and is adapted to extract the voice recognition data from a remaining portion of the broadcast signal. The processor is further adapted to update the stored voice recognition commands stored in the memory unit with the extracted voice recognition data. The voice input device is adapted to receive a spoken command from a user. The voice recognition engine is coupled to the voice input device and the memory unit. The voice recognition engine is adapted to determine whether the spoken command matches one of the stored voice recognition commands in the memory unit.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a voice recognition system, according to the prior art;

FIG. 2 is a schematic diagram of one embodiment of a voice recognition system, according to the present invention;

FIG. 3a is a schematic diagram of an embodiment of a communication system pursuant to aspects of the invention;

FIG. 3b is a schematic diagram of a navigation device in communication with a mobile unit according to an embodiment of the invention;

FIG. 4 is a block diagram of an embodiment of a multi-packet dedicated broadcast data message;

FIG. 5 is a diagram illustrating a subcarrier of a radio signal; and

FIG. 6 is a schematic diagram illustrating an embodiment of the modified broadcast data stream.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

FIGS. 2-6 illustrate several embodiments of a system for dynamically updating the voice recognition commands stored in a voice recognition unit of the vehicle. While the following description of the system is directed to an application of voice recognition commands for controlling in-vehicle radio functions, it should be appreciated that the system would apply equally well to voice recognition commands for controlling other in-vehicle devices, such as air-conditioning, power windows, door locks and any other device within the vehicle.

FIG. 2 illustrates one exemplary embodiment of a voice recognition system 100. In this embodiment, the in-vehicle voice recognition system 100 includes, among other things, a microphone 102, a voice recognition engine 104, a receiver unit 106 and a database 108. The database 108, similar to the database 14 in FIG. 1, stores the voice recognition commands available to the driver. The database 14 in FIG. 1, however, stores a static set of voice recognition commands that cannot be expanded without replacing the entire memory of the database. In contrast, the database 108 is stored in an updateable memory, as will be described in more detail later.

The receiver unit 106 may be located on a vehicle and allows the voice recognition commands stored in the database 108 to be updated remotely. The receiver unit 106 supports the receipt of content from a remote location that is broadcast over a one-to-many communication network. One-to-many communication systems include systems that can send information from one source to a plurality of receivers, such as a broadcast network. Broadcast networks include television, radio, and satellite networks. For example, the voice recognition commands may be updated by a remote broadcast signal such as the satellite radio broadcast service by XM. The one-to-many communication network may comprise a broadcast center that is further in communication with one or more communication satellites that relay a dedicated broadcast signal or a modified broadcast signal to a receiver unit 106 located in a vehicle. In the preferred embodiment, the broadcast center and the satellites are part of a satellite radio broadcasting system (e.g., XM Satellite Radio).

It will be understood that the dedicated broadcast signal and modified broadcast signal may be broadcast via any suitable information broadcast system (e.g., FM radio, AM radio, or the like), and is not limited to the satellite radio broadcast system. In the embodiment of FIG. 2, the receiver unit 106 of the system 100 receives a broadcast signal 110 that contains voice recognition data. The present system dynamically updates voice recognition commands through two types of broadcast signals: (1) a dedicated broadcast signal, and (2) a modified broadcast signal that will be explained in further detail later.

With reference to FIG. 3a, there is provided an embodiment of a system for the exchange of information between a remote location 216 and a vehicle 201. The remote location 216 is a server system, controlled by the vehicle manufacturer, for outputting vehicle broadcast data. The vehicle 201 includes a navigation device 208 and a mobile unit 202. The navigation device 208 is an electronic system used to provide driving directions, display messages to the vehicle operator, and play back audio messages or satellite radio broadcasts. The navigation device 208 is operatively coupled to the mobile unit 202 and supports the receipt of content from the remote location 216 that is broadcast over a one-to-many communication network 200. One-to-many communication systems include systems that can send information from one source to a plurality of receivers, such as a broadcast network. Broadcast networks include television, radio, and satellite networks.

In a preferred embodiment of the invention, voice recognition data is generated at the remote location 216 (or at an alternate location) and is subsequently broadcast from the remote location 216 over the one-to-many communication network 200 to the vehicle 201. The mobile unit 202 receives the broadcast message and may transmit the voice recognition data to the navigation device 208 for updating of the database of available voice recognition commands, as will be described in further detail below.

The remote location 216 includes a remote server 218, a remote transmitter 222, and a remote memory 224, each in communication with one another. The remote transmitter 222 communicates with the navigation device 208 and the mobile unit 202 by way of the broadcast communication network 200. The remote server 218 supports the routing of message content over the broadcast network 200. The remote server 218 comprises an input unit, such as a keyboard, that allows the vehicle manufacturer to enter voice recognition data into the memory 224, and a processor unit that controls the communication over the one-to-many communication network 200.

The server 218 is in communication with the vehicle over a one-to-many communication network 200. In the present embodiment, the one-to-many communication network 200 comprises a broadcast center that is further in communication with one or more communication satellites that relay the voice recognition data as a broadcast message to a mobile unit 202 in the owner's vehicle 201. In the present embodiment, the broadcast center and the satellites are part of a satellite radio broadcasting system (e.g., XM Satellite Radio). It will be understood that the broadcast message can be sent via any suitable information broadcast system (e.g., FM radio, AM radio, or the like), and is not limited to the satellite radio broadcast system. In one embodiment, the mobile unit 202 relays the broadcast message to an onboard computer system, such as the vehicle's navigation system 208, which in turn updates the database of available voice recognition commands.

FIG. 3b shows an expanded view of both the navigation device 208 and the mobile unit 202 contained on the vehicle 201. The navigation device 208 may include an output unit 214, a receiver unit 215, an input unit 212, a voice recognition engine 210, a navigation memory unit 209, a navigation processor unit 213, and an RF transceiver unit 211 that are all in electrical communication with one another. The navigation memory unit 209 may include a database of voice recognition phonetic data or, alternatively, the database may be stored in memory not contained in the navigation device 208. The database of voice recognition phonetic data may be updated in the vehicle by way of the input unit 212, which can include at least one of a keyboard, a touch-sensitive display, a jog-dial control, and a microphone. The database of voice recognition phonetic data may also be updated by way of information received through the receiver unit 215 and/or the RF transceiver unit 211.

The receiver unit 215 receives information from the remote location 216 and, in one embodiment, is in communication with the remote location by way of a one-to-many communication network 200 (see FIG. 3 a). The information received by the receiver 215 may be processed by the navigation processor unit 213. The processed information may then be displayed by way of the output unit 214, which includes at least one of a display and a speaker. In one embodiment, the receiver unit 215, the navigation processor unit 213 and the output unit 214 are provided access to only subsets of the received broadcast information.

In the embodiment shown in FIG. 3b, the mobile unit 202 includes a wireless receiver 204, a mobile unit processor 206, and an RF transceiver unit 207 that are in communication with one another. The mobile unit 202 receives communication from the remote location 216 by way of the receiver 204.

In one embodiment, the navigation device 208 and mobile unit 202 are in communication with one another by way of RF transceiver units 207 and 211. Both the navigation device 208 and the mobile unit 202 include RF transceiver units 211, 207, which, in one embodiment, comply with the Bluetooth® wireless data communication format or the like. The RF transceiver units 211, 207 allow the navigation device 208 and the mobile unit 202 to communicate with one another.

The voice recognition data is transmitted from the remote location 216 to the navigation device 208 by way of the broadcast network 200. At the vehicle, the voice recognition data may be stored in the memory 209 of the navigation device 208. Further details regarding embodiments of information exchange systems can be found in U.S. patent application Ser. No. 11/100,868, filed Apr. 6, 2005, titled “Method and System for Controlling the Exchange of Vehicle Related Messages,” the disclosure of which is incorporated in its entirety herein by reference.

In embodiments that involve broadcasting the voice recognition data to affected vehicle owners, one or a few messages may be transmitted over a one-to-many communication network 200 that each comprise a plurality of one-to-one portions (shown in FIG. 4), as opposed to transmitting a separate message for each vehicle. Each one-to-one portion will typically be applicable to a single affected vehicle and allows for the broadcast of targeted vehicle information over a one-to-many network 200 using less bandwidth than if each message were sent individually. When a message is broadcast over a one-to-many communication network 200, all vehicles 201 within range of the network 200 may receive the message; however, the message is filtered by the mobile unit 202 of each vehicle 201, and only vehicles 201 specified in the one-to-one portions of the message store the message for communication to the vehicle owner. In one embodiment, each one-to-one portion comprises a filter code section. The filter code section can comprise a given affected vehicle's vehicle identification number (VIN) or another suitable vehicle identifier known in the art. The vehicle identifier will typically comprise information relating to the vehicle type, model year, mileage, sales zone, etc., as explained in further detail in U.S. patent application Ser. No. 11/232,311, filed Sep. 20, 2005, titled “Method and System for Broadcasting Data Messages to a Vehicle,” the content of which is incorporated in its entirety into this disclosure by reference.
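The filtering behavior can be sketched as follows; the message layout, filter-code field, and VIN values are hypothetical and stand in for whatever structure a deployed system would use.

```python
# Sketch of one-to-many delivery with one-to-one portions: the mobile unit keeps
# only the portions whose filter code matches its own vehicle identifier.

def filter_portions(message_portions, own_vin):
    """Return only the one-to-one portions addressed to this vehicle."""
    return [p for p in message_portions if p.get("filter_code") == own_vin]

broadcast_message = [
    {"filter_code": "VIN-000111", "payload": "voice-recognition update A"},
    {"filter_code": "VIN-999888", "payload": "voice-recognition update B"},
]
print(filter_portions(broadcast_message, "VIN-000111"))  # keeps only update A
```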

One embodiment of the present invention receives voice recognition updates from a dedicated broadcast data stream. The dedicated data stream utilizes a specialized channel connection such as the connection for transmitting traffic data described in further detail in U.S. patent application Ser. No. 11/266,879, filed Nov. 4, 2005, titled “Data Broadcast Method for Traffic Information,” the disclosure of which is incorporated in its entirety herein by reference. For example, the XM Satellite Radio signal uses 12.5 MHz of the S band: 2332.5 to 2345.0 MHz. XM has agreed to provide portions of the available radio bandwidth to certain companies to utilize for specific applications. The transmission of messages over the negotiated bandwidth would be considered a dedicated data stream. In a preferred embodiment, only certain vehicles would be equipped to receive the dedicated broadcast signal or data set. For example, the dedicated broadcast signal may only be received by Honda vehicles through a particular Honda satellite channel connection and a satellite radio receiver. However, the broadcast signal may comprise, by way of example only, a digital signal, an FM signal, WiFi, cellular, a satellite signal, a peer-to-peer network, and the like. In an embodiment of the invention, voice recognition data is embedded into the dedicated broadcast message received at the vehicle.

To install a new voice recognition command in the vehicle, the dedicated radio signal, containing one or a plurality of new or updated voice recognition phonetics, is transmitted to each on-board vehicle receiver unit 204. With a dedicated signal, the in-vehicle hardware/software architecture would be able to accept this signal. In a preferred embodiment, other vehicles, or even older vehicles without a receiver unit 204, would not be able to receive, let alone process, the data.

In an exemplary embodiment, after the mobile unit receiver 204 receives a broadcast signal, the receiver 204 transmits the dedicated broadcast signal to the on-board vehicle processor 206. The broadcast signal is then deciphered or filtered by the processor 206. For example, the processor 206 filters out the voice recognition phonetics from the other portions of the dedicated broadcast signal (e.g., traffic information, the radio broadcast itself, etc.). The other portions of the broadcast signal are sent to the appropriate in-vehicle equipment (e.g., satellite radio receiver, navigation unit, etc.).
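A minimal sketch of this demultiplexing step is given below, assuming the dedicated payload arrives as a keyed structure; the field names and routing targets are hypothetical.

```python
# Sketch of the processor's filtering step: voice recognition phonetics are
# separated from the rest of the dedicated broadcast payload and routed to the
# navigation device, while the remaining portions go to other equipment.

def route_dedicated_payload(payload):
    phonetics = payload.pop("vr_phonetics", [])
    routes = {"navigation_memory": phonetics}   # destined for the on-board memory
    routes.update(payload)                      # e.g. traffic info, radio audio, etc.
    return routes

payload = {"vr_phonetics": ["CNN", "NPR"], "traffic": "I-280 slow", "audio": b"..."}
print(route_dedicated_payload(payload))
```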

In the present embodiment, the voice recognition phonetics data is sent by the processor 206 to the navigation device 208, and is stored in the on-board memory 209 of the device. This updated voice recognition data, once stored in the on-board memory 209, is then available to the voice recognition engine 210. The on-board memory 209 may comprise any type of electronic storage device such as, but not limited to, a hard disk, flash memory, and the like. The on-board memory 209 may be separate from the navigation device 208 or integrated into it. The function of the on-board memory 209 can be dedicated to storing only voice recognition phonetic data or may comprise a multi-function storage capacity by also storing other content such as digital music and navigation-related information.

The navigation device 208 preferably includes an electronic control unit (ECU) (not shown). The ECU processes the voice recognition phonetic data received by the receiver 204 so that the voice recognition commands stored in the on-board memory 209 can be used by the system. In operation, voice recognition data is transmitted to the vehicle and is stored in the on-board memory 209. The ECU organizes and formats the data stored in the memory 209 into a format that is readable by the system, and in particular, so that the voice recognition engine 210 can read the data.
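The formatting step might look like the following sketch, where raw phonetic records received over the air are indexed into a table the engine can query directly; the record fields shown are hypothetical.

```python
# Sketch of the ECU's organizing/formatting step: incoming phonetic records are
# reshaped into a lookup table keyed by baseform for the voice recognition engine.

def build_engine_table(raw_records):
    """Index received phonetic records by their baseform string."""
    return {record["baseform"].lower(): record["channel_number"]
            for record in raw_records}

raw_records = [{"baseform": "CNN", "channel_number": 122},
               {"baseform": "NPR", "channel_number": 134}]
print(build_engine_table(raw_records))  # {'cnn': 122, 'npr': 134}
```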

The voice recognition engine 210 receives voice command signals (e.g., “select National Public Radio” or “select NPR”) from an input device 212 such as a microphone. The voice recognition engine 210 may be integral to the navigation device 208 or may be a separate device. The voice recognition engine 210 can identify voice recognition commands in addition to tuning commands for the satellite radio receiver. For example, the voice recognition engine 210 can be used to identify a volume command, fade command, balance command or other functional commands of the vehicle radio system. The voice recognition engine 210 may also be used to control other in-vehicle devices such as the air conditioning, power windows and so on. A storage module (not shown) that is configured to store information relating to the programming information for digital channels received by the receiver unit 204 may be coupled to the voice recognition engine 210.

For example, a satellite radio broadcast may add a CNN digital channel to the radio lineup after a vehicle has been purchased. In a conventional satellite radio system, the driver would only be able to manually select the new CNN digital channel. The voice recognition system 10 would not include a CNN voice command pre-stored in the database 14. In the present invention, the receiver 204 would receive a broadcast signal containing a voice recognition command for “CNN.” After the CNN voice command was stored in the memory 209, the driver would be able to say, for example, “select radio channel CNN,” and the voice recognition engine 210 would identify the words “radio channel” based on a fixed command set stored in a fixed command table of the memory 209. The variable part, “CNN,” is also compared with phonemes in the channel table of available channels.

The voice recognition engine 210 would then match the driver's utterance, or command, “CNN” with the “CNN” string of phonemes stored in the memory 209 and adjust the tuner to the channel number corresponding to CNN. The CNN signal transmitted by the broadcast service (e.g., XM Satellite Radio) is then received by the radio of the vehicle. Voice recognition systems are currently available and well known within the automobile industry, and therefore additional disclosure of the operation of the voice recognition engine is not required.
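The two-stage match (fixed command phrase plus variable channel name) can be sketched as below; the fixed command set, the channel table, and the channel numbers are hypothetical placeholders for the tables held in the memory 209.

```python
# Sketch of the two-stage match: the fixed part of the utterance is matched
# against a fixed command table, the variable part against the updatable
# channel table, and the tuner is then set to the matched channel number.

FIXED_COMMANDS = ("select radio channel",)
CHANNEL_TABLE = {"cnn": 122, "npr": 134}   # kept current by broadcast updates

def handle_utterance(utterance):
    text = utterance.lower().strip()
    for command in FIXED_COMMANDS:
        if text.startswith(command):
            channel_name = text[len(command):].strip()
            channel_number = CHANNEL_TABLE.get(channel_name)
            if channel_number is not None:
                return "tune to channel %d (%s)" % (channel_number, channel_name.upper())
    return "no match"

print(handle_utterance("Select radio channel CNN"))  # tune to channel 122 (CNN)
```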

Broadcasting the updated voice recognition data through a dedicated broadcast signal to the vehicles on the road provides each vehicle with accurate, concise, up-to-date data. For specific functions such as selecting digital channels and categories, updating the voice recognition commands keeps the commands available to the driver (or a passenger) current whenever the vendor changes the lineup. A byproduct of this improvement is the application of voice recognition technology in areas where voice recognition commands previously could not be used because of possible changes in names or options.

A second embodiment of the present invention receives voice recognition updates from a modified broadcast signal. In an exemplary modified broadcast signal, voice recognition data may be transmitted in a subcarrier of the radio signal, such as in a Radio Data System (RDS) signal shown in FIG. 5. The subcarrier is a portion of the channel range. The outlying portions of the radio frequency range are often used for additional transmission (e.g., text data). Song titles, radio station names, and stock information are commonly transferred this way today. It should be appreciated that the subcarrier may be used to carry voice recognition data in any radio signal (e.g., FM, AM, XM, Sirius). This embodiment of the invention transmits text data pertaining to word phonetics by using the extra subcarrier range.

An exemplary modified broadcast signal may be a standard radio audio signal 322 that is modified or combined 323 to also include voice recognition phonetic data 320, as shown in FIG. 6. Combining multiple data streams into a single signal prior to broadcast is well known within the electronic arts and therefore does not require further description. In this embodiment, the modified broadcast signal updates the voice recognition commands stored in a navigation device 324. The modified broadcast signal, similar to the dedicated broadcast signal shown in FIG. 4, may be transmitted through various channels (e.g., radio, satellite, WiFi, etc.).

The embodiment of FIG. 5 specifically illustrates transmitting voice recognition phonetic data in connection with radio station name updates. New digital channels are continuously being offered to satellite radio owners, and the channel lineup is subject to change at any time. In this embodiment, any time the satellite radio broadcast adds, for example, a new radio station channel, voice recognition data for the new station channel may be immediately broadcast to all vehicles capable of receiving the modified broadcast signal. The system may broadcast other updates as well. This method allows the commands for the radio channels and categories to be up to date soon after a lineup change.

The receiver unit 304 of the vehicle constantly receives the voice recognition data 320 along with the radio audio signal 322. The receiver unit 304 separates the voice recognition phonetic data 320 from the radio audio signal 322 as is conventionally done with channel, category, and song information, and is known within the art. The voice recognition phonetic data 320 is sent to the navigation device 324 and stored in the memory 329. The newly stored voice recognition phonetic data 320 may then be referenced whenever the user (e.g., driver or passenger) searches for a specific digital radio channel or category using the voice recognition features of the satellite radio. The voice recognition phonetic data 320 may also comprise voice recognition commands for other equipment in the vehicle, such as the air conditioning system, power windows, and so on. If the vehicle manufacturer intends to add a new voice command feature to the vehicle, the new voice command may simply be transmitted to the vehicle. Once the voice command is stored in the memory 329, the driver may use the voice command to control the item of equipment.
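For illustration, the modified-broadcast path might be sketched as a receiver object that splits each frame into audio and subcarrier phonetic data and accumulates the latter; the frame layout and field names are hypothetical.

```python
# Sketch of the modified-broadcast path: phonetic data carried alongside the
# audio is separated out and stored, and later voice searches consult the store.

class ModifiedSignalReceiver:
    def __init__(self):
        self.phonetic_store = {}                 # stands in for the memory 329

    def on_frame(self, frame):
        """Split one received frame into audio and subcarrier phonetic data."""
        for name, channel in frame.get("subcarrier_phonetics", {}).items():
            self.phonetic_store[name.lower()] = channel
        return frame.get("audio")                # audio continues to the radio

receiver = ModifiedSignalReceiver()
receiver.on_frame({"audio": b"...", "subcarrier_phonetics": {"CNN": 122}})
print(receiver.phonetic_store)                   # {'cnn': 122}
```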

It should be appreciated that the above-described methods for dynamically updating in-vehicle voice recognition commands are for explanatory purposes only and that the invention is not limited thereby. Having thus described a preferred embodiment of a method and system for dynamically updating voice recognition commands, it should be apparent to those skilled in the art that certain advantages of the described method and system have been achieved. It should also be appreciated that various modifications, adaptations, and alternative embodiments thereof may be made within the scope and spirit of the present invention. It should also be apparent that many of the inventive concepts described above would be equally applicable to the use of other voice recognition systems.

Classifications
U.S. Classification: 704/270.1, 704/275, 704/231, 455/556.1, 704/246, 704/270, 455/563, 455/412.1, 701/36, 701/469
International Classification: G10L21/00
Cooperative Classification: G10L15/28, G10L15/30
European Classification: G10L15/28
Legal Events
Date: Oct. 31, 2006
Code: AS
Event: Assignment
Owner name: HONDA MOTOR CO., LTD., JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUANG, RITCHIE WINSON;KIRSCH, DAVID MICHAEL;REEL/FRAME:018459/0424
Effective date: 20061024