US20150170641A1 - System and method for providing a natural language content dedication service
- Publication number
- US20150170641A1
- Authority
- US
- United States
- Prior art keywords
- content
- utterance
- dedication
- voice
- natural language
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
Definitions
- the invention generally relates to providing a natural language content dedication service in a voice services environment, and in particular, to detecting multi-modal device interactions that include requests to dedicate content, identifying the content requested for dedication from natural language utterances included in the multi-modal device interactions, processing transactions for the content requested for dedication, processing natural language to customize the content for recipients of the dedications, and delivering the customized content to the recipients of the dedications.
- voice recognition software which has the potential to enable users to exploit features that would otherwise be unfamiliar, unknown, or difficult to use.
- a survey by Navteq Corporation, which provides data used in a variety of applications such as automotive navigation and web-based applications, demonstrates that voice recognition often ranks among the features most desired by consumers of electronic devices. Even so, existing voice user interfaces, when they actually work, still require significant learning on the part of the user.
- existing voice user interfaces fall short in utilizing information distributed across different domains, devices, and applications in order to resolve natural language voice-based inputs.
- existing voice user interfaces suffer from being constrained to a finite set of applications for which they have been designed, or to devices on which they reside.
- technological advancement has resulted in users often having several devices to suit their various needs, existing voice user interfaces do not adequately free users from device constraints. For example, users may be interested in services associated with different applications and devices, but existing voice user interfaces tend to restrict users from accessing the applications and devices as they see fit.
- users typically can only practicably carry a finite number of devices at any given time, yet content or services associated with users' devices other than those currently being used may be desired in various circumstances.
- a system and method for providing a natural language content dedication service may generally operate in a voice services environment that includes one or more electronic devices that can receive multi-modal natural language device interactions.
- providing the natural language content dedication service may generally include detecting multi-modal device interactions that include requests to dedicate content, identifying the content requested for dedication from natural language utterances included in the multi-modal device interactions, processing transactions for the content requested for dedication, processing natural language to customize the content for recipients of the dedications, and delivering the customized content to the recipients of the dedications.
- the natural language content dedication service may operate in a hybrid processing environment, which may generally include a plurality of multi-modal devices configured to cooperatively interpret and process natural language utterances included in the multi-modal device interactions.
- a virtual router may receive messages that include encoded audio corresponding to natural language utterances contained in the multi-modal device interactions, which may be received at one or more of the plurality of multi-modal devices in the hybrid processing environment.
- the virtual router may analyze the encoded audio to select a cleanest sample of the natural language utterances and communicate with one or more other devices in the hybrid processing environment to determine an intent of the multi-modal device interactions. The virtual router may then coordinate resolving the multi-modal device interactions based on the intent of the multi-modal device interactions.
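To make the router's role concrete, the following Python sketch shows one way a virtual router might score competing captures of the same utterance and select the cleanest sample. The `UtteranceSample` fields, the SNR proxy, and the device names are assumptions for illustration; the patent does not specify a scoring method.

```python
from dataclasses import dataclass

@dataclass
class UtteranceSample:
    """Encoded audio for one utterance as captured by one device (hypothetical)."""
    device_id: str
    pcm: list[int]        # 16-bit PCM samples, already decoded for scoring
    noise_floor: float    # device-reported ambient noise estimate

def signal_to_noise(sample: UtteranceSample) -> float:
    """Crude SNR proxy: mean absolute amplitude over the reported noise floor."""
    if not sample.pcm:
        return 0.0
    energy = sum(abs(s) for s in sample.pcm) / len(sample.pcm)
    return energy / max(sample.noise_floor, 1e-9)

def select_cleanest(samples: list[UtteranceSample]) -> UtteranceSample:
    """Pick the capture the router should forward for intent determination."""
    return max(samples, key=signal_to_noise)

# Example: the router received the same utterance from two devices.
captures = [
    UtteranceSample("phone", [120, -80, 95, -60], noise_floor=40.0),
    UtteranceSample("car-head-unit", [900, -850, 780, -700], noise_floor=60.0),
]
print(select_cleanest(captures).device_id)  # -> "car-head-unit"
```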
- a method for providing the natural language content dedication service may comprise detecting a multi-modal device interaction at an electronic device, wherein the multi-modal device interaction may include at least a natural language utterance.
- One or more messages containing information relating to the multi-modal device interaction may then be communicated to the virtual router through a messaging interface.
- the electronic device may then receive one or more messages (e.g., from the virtual router through the messaging interface), wherein the messages may contain information relating to an intent of the multi-modal device interaction.
- the multi-modal device interaction may be resolved at the electronic device based on the information contained in the one or more messages received from the virtual router.
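A minimal sketch of that client-side flow, assuming a JSON message format and hypothetical `send`/`receive` callables standing in for the messaging interface (the patent does not fix a wire format):

```python
import json

def handle_interaction(utterance_audio: bytes, non_voice_inputs: dict, send, receive):
    """Minimal client-side flow: forward the interaction, then act on the
    intent the virtual router sends back."""
    # 1. Communicate the multi-modal interaction to the virtual router.
    send(json.dumps({
        "type": "interaction",
        "audio": utterance_audio.hex(),
        "inputs": non_voice_inputs,
    }))
    # 2. Receive one or more messages describing the determined intent.
    reply = json.loads(receive())
    # 3. Resolve the interaction locally based on that intent.
    return reply.get("intent", "unknown")

# Loopback stand-ins for the messaging interface, for illustration only.
outbox = []
print(handle_interaction(b"\x01\x02", {"touch": "album-art"},
                         outbox.append,
                         lambda: json.dumps({"intent": "dedicate_content"})))
```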
- a system for providing the natural language content dedication service may generally include a voice-enabled client device that can communicate with a content dedication system through the messaging interface.
- the content dedication system may include a voice-enabled server, which may be configured to communicate with the virtual router through another messaging interface, or the content dedication system may alternatively include the virtual router.
- the content dedication system may further include a billing system for processing transactions relating to the content dedication service.
- the natural language content dedication service may be provided on any suitable voice-enabled client device having a suitable combination of input and output devices that can receive and respond to multi-modal device interactions that include natural language utterances, and the input and output devices may be further arranged to receive and respond to any other suitable type of input and output.
- operating the natural language content dedication service may generally include a user of the voice-enabled client device listening to music, watching video, or otherwise interacting with content and providing a multi-modal natural language request to engage in a transaction to dedicate the music, video, or other content.
- the voice-enabled client device may be included within the hybrid processing environment that includes the plurality of multi-modal devices, whereby the content dedication request may relate to content played on a different device from the voice-enabled client device, although the content dedication request may relate to any suitable content (i.e., the request need not necessarily relate to played content, as users may provide natural language to request content dedications for any suitable content, including a particular song or video that the user may be thinking about).
- the voice-enabled client device may invoke an Automatic Speech Recognizer (ASR) to generate a preliminary interpretation of the utterance.
- the ASR may then provide the preliminary interpretation of the utterance to a conversational language processor, which may attempt to determine an intent for the multi-modal interaction.
- the conversational language processor may determine a most likely context for the interaction from the preliminary interpretation of the utterance, any accompanying non-speech inputs in the multi-modal interaction that relate to the utterance, contexts associated with prior requests, short-term and long-term shared knowledge, or any other suitable information for interpreting the multi-modal interaction.
- a content dedication application may be invoked to resolve the content dedication request.
- the conversational language processor may search one or more data repositories that contain content information to identify content matching criteria contained in the content dedication request. Moreover, the conversational language processor may further cooperate with other devices in the hybrid processing environment to search for or otherwise identify the content requested for dedication (e.g., in response to a local data repository not yielding adequate results, another device in the hybrid processing environment having a larger content data repository than the client device may be invoked). The conversational language processor may then receive appropriate results identifying the content requested for dedication and present the results to the user through the output device (e.g., displaying information about the content, playing a sample clip of the content, displaying options to purchase the content, recommending similar content, etc.).
- the results presented through the output device may further include an option to confirm the content dedication, wherein the user may confirm the content dedication in a natural language utterance, a non-speech input, or any suitable combination thereof.
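As an illustration of the content lookup described above, the sketch below matches criteria extracted from a dedication request against a local repository and falls back to a larger repository elsewhere in the hybrid environment. The dictionary-based catalog and the `remote_search` hook are hypothetical.

```python
def find_content(criteria: dict, local_repo: list[dict], remote_search=None):
    """Match dedication criteria (e.g., artist/title words from the utterance)
    against a local content repository, deferring to a larger repository
    elsewhere in the hybrid environment when local results are inadequate."""
    def matches(item):
        return all(str(v).lower() in str(item.get(k, "")).lower()
                   for k, v in criteria.items())
    results = [item for item in local_repo if matches(item)]
    if not results and remote_search is not None:
        results = remote_search(criteria)  # e.g., a device with a larger catalog
    return results

catalog = [{"artist": "Traffic", "title": "Dear Mr. Fantasy"},
           {"artist": "Steve Winwood", "title": "Higher Love"}]
print(find_content({"artist": "traffic"}, catalog))
```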
- the content dedication application may then be invoked to process the content dedication request.
- the content dedication application may capture a natural language utterance that contains the dedication to accompany the content.
- the user may then provide the dedication utterance through the voice-enabled input device, and the dedication utterance may then be converted into an electronic signal that the content dedication application captures for the dedication.
- the content dedication application may prompt the user to provide any additional tags for the dedicated content (e.g., an image to insert as album art in the dedicated content, an utterance to insert or transcribe into metadata tags for the dedicated content, a non-speech or data input to insert in the metadata tags for the dedicated content, etc.).
- the content dedication application may further prompt the user to identify a recipient of the dedication, wherein the user may provide any suitable multi-modal input that includes information identifying the recipient of the content dedication.
- the content dedication application may then route the request to the content dedication system, which may process a transaction for the content dedication.
- processing the transaction for the content dedication may include the content dedication system receiving encoded audio corresponding to the dedication utterance through the messaging interface.
- the content dedication system may then insert the encoded audio corresponding to the dedication utterance within the dedicated content, verbally annotate the dedicated content with the encoded audio, and/or transcribe the dedication utterance into a textual annotation for the dedicated content.
- any utterances to insert into the metadata tags for the dedicated content may provide further verbal annotations for the dedicated content, and any such utterances may also be transcribed into text to provide further textual annotations for the dedicated content.
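The following sketch illustrates the customization step, attaching the dedication audio, an optional transcription, and any additional tags to a content item. The flat dictionary stands in for a real media container and its metadata tags, which the patent leaves unspecified.

```python
from typing import Optional

def customize_content(content: dict, dedication_audio: bytes,
                      transcription: Optional[str] = None,
                      extra_tags: Optional[dict] = None) -> dict:
    """Attach the captured dedication and any additional tags to the content.
    The tag names here are illustrative, not a real container format."""
    customized = dict(content)
    customized["dedication_audio"] = dedication_audio      # verbal annotation
    if transcription:
        customized["dedication_text"] = transcription      # textual annotation
    customized.update(extra_tags or {})                    # e.g., album art
    return customized

song = {"title": "Dear Mr. Fantasy", "artist": "Traffic"}
print(customize_content(song, b"<encoded utterance>",
                        "Happy birthday! This one always reminds me of you.",
                        {"album_art": b"<jpeg bytes>"}).keys())
```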
- the content dedication system may then invoke a content dedication application hosted on the voice-enabled server to process the request.
- the content dedication application hosted on the voice-enabled server may identify the content requested for dedication (e.g., if the content dedication application on the voice-enabled client device was unable to suitably identify the requested content, or the information communicated from the voice-enabled client device may identify the requested content if the content dedication application on the voice-enabled client device was able to suitably identify the requested content, etc.).
- the content dedication system may then communicate with the billing system to process an appropriate transaction for the dedication request based on a selected purchase option for the dedication request.
- the content dedication system may support various purchase options to provide users with flexibility in requesting content dedications.
- a buy-to-own purchase option may include the content dedication system purchasing full rights to the content from an appropriate content provider.
- the billing system may then charge the user of the voice-enabled client device an appropriate amount that encompasses the cost for purchasing the rights to the content from the content provider and a service charge for customizing the content with the dedication and any additional tags and subsequently delivering the customized content to the dedication recipient.
- the user may be charged in a similar manner under a pay-to-play purchase option, except that the rights purchased from the content provider may be limited (e.g., to a predetermined number of plays).
- the cost for purchasing the content from the content provider may be somewhat less under the pay-to-play purchase option, such that the billing system may charge the user somewhat less under the pay-to-play purchase option.
- the user may pay a periodic service charge to the content dedication system that permits the user to make content dedications based on terms of the subscription (e.g., a predetermined number or an unlimited number of content dedications may be made in a subscription period depending on the particular terms of the user's subscription).
- other purchase options may be suitably employed, as will be apparent.
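The purchase options above might translate into billing logic along these lines; the specific prices, the pay-to-play discount, and the flat service charge are invented for illustration only:

```python
def charge_for_dedication(option: str, content_cost: float,
                          service_charge: float = 0.99) -> float:
    """Illustrative pricing under the purchase options described above;
    the amounts and the 50% discount are assumptions."""
    if option == "buy-to-own":
        # Full rights purchased from the content provider, plus the
        # customization/delivery service charge.
        return content_cost + service_charge
    if option == "pay-to-play":
        # Limited rights (e.g., a fixed number of plays) cost somewhat less.
        return 0.5 * content_cost + service_charge
    if option == "subscription":
        # Dedications covered by the periodic subscription fee; no per-item cost.
        return 0.0
    raise ValueError(f"unsupported purchase option: {option}")

for opt in ("buy-to-own", "pay-to-play", "subscription"):
    print(opt, charge_for_dedication(opt, content_cost=1.29))
```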
- a service provider associated with the content dedication system may negotiate agreements with content providers to determine the manner in which revenues for content transactions will be shared between the content dedication system and the content providers. For example, an agreement may permit a particular content provider to keep all of the revenue for transactions that include purchasing content from the content provider, while the service provider associated with the content dedication system agrees to recoup any such costs from users. In another example, an agreement may share revenue for content transactions between a content provider and the service provider.
- the content dedication system may insert the natural language dedication utterance into the dedicated content, verbally annotate the dedicated content with the dedication utterance, or otherwise associate the content with the dedication utterance. Furthermore, the content dedication system may determine whether any additional tags have been specified for the dedicated content and insert such additional tags into the dedicated content, as appropriate (e.g., inserting an image or picture into metadata tags corresponding to album art for the dedicated content, transcribing natural language utterances, non-voice, and/or data inputs into text and inserting such text into the metadata tags for the dedicated content, etc.).
- the content dedication system may then send a content dedication message to the recipient of the dedication, wherein the message may include a link that the recipient can select to stream, download, or otherwise access the dedicated content, the dedication utterance, etc.
- the content dedication message may generally notify the recipient that content has been dedicated to the recipient and provide various mechanisms for the recipient to access the content dedication, as will be apparent.
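A sketch of composing that notification, with an access link to the customized content; the field names and URL are hypothetical:

```python
def build_dedication_message(recipient: str, sender: str, title: str,
                             access_url: str) -> dict:
    """Compose the notification sent to the dedication recipient, including
    a link to stream or download the customized content."""
    return {
        "to": recipient,
        "body": (f"{sender} has dedicated \"{title}\" to you! "
                 f"Listen or download here: {access_url}"),
        "link": access_url,   # streams/downloads the content plus dedication
    }

msg = build_dedication_message("jane@example.com", "John", "Dear Mr. Fantasy",
                               "https://dedications.example.com/d/abc123")
print(msg["body"])
```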
- if the purchase option selected for the content dedication includes the buy-to-own purchase option or another purchase option that confers full rights to the dedicated content, the recipient may then own the full rights to the dedicated content; otherwise, the recipient may own rights with respect to the dedicated content based on whatever terms the selected purchase option provides.
- FIG. 1 illustrates a block diagram of an exemplary voice-enabled device that can be used for hybrid processing in a natural language voice services environment, according to one aspect of the invention.
- FIG. 2 illustrates a block diagram of an exemplary system for hybrid processing in a natural language voice service environment, according to one aspect of the invention.
- FIG. 3 illustrates a flow diagram of an exemplary method for initializing various devices that cooperate to perform hybrid processing in a natural language voice services environment, according to one aspect of the invention.
- FIGS. 4-5 illustrate flow diagrams of exemplary methods for hybrid processing in a natural language voice services environment, according to one aspect of the invention.
- FIG. 6 illustrates a block diagram of an exemplary system for providing a natural language content dedication service, according to one aspect of the invention.
- FIGS. 7-8 illustrate flow diagrams of exemplary methods for providing a natural language content dedication service, according to one aspect of the invention.
- FIG. 1 illustrates a block diagram of an exemplary voice-enabled device 100 that can be used for hybrid processing in a natural language voice services environment.
- the voice-enabled device 100 illustrated in FIG. 1 may generally include an input device 112 , or a combination of input devices 112 , which may enable a user to interact with the voice-enabled device 100 in a multi-modal manner.
- the input devices 112 may generally include any suitable combination of at least one voice input device 112 (e.g., a microphone) and at least one non-voice input device 112 (e.g., a mouse, touch-screen display, wheel selector, etc.).
- the multi-modal interactions may include at least one natural language utterance, wherein the natural language utterance may be converted into an electronic signal.
- the electronic signal may then be provided to an Automatic Speech Recognizer (ASR) 120 , which may also be referred to as a speech recognition engine 120 and/or a multi-pass speech recognition engine 120 .
- the ASR 120 may generate one or more preliminary interpretations of the utterance and provide the preliminary interpretations to a conversational language processor 130 .
- the multi-modal interactions may include one or more non-voice interactions with the one or more input devices 112 (e.g., button pushes, multi-touch gestures, point of focus or attention focus selections, etc.).
- the voice-click module may extract context from the non-voice interactions and provide the context to the conversational language processor 130 for use in generating an interpretation of the utterance (i.e., via the dashed line illustrated in FIG. 1 ).
- the conversational language processor 130 may analyze the utterance and any accompanying non-voice interactions to determine an intent of the multi-modal interactions with the voice-enabled device 100 .
- the voice-enabled device 100 may include various natural language processing components that can support free-form utterances and/or other forms of non-voice device interactions, which may liberate the user from restrictions relating to the manner of formulating commands, queries, or other requests.
- the user may provide the utterance to the voice input device 112 using any manner of speaking, and may further provide other non-voice interactions to the non-voice input device 112 to request any content or service available through the voice-enabled device 100 .
- in response to receiving the utterance at the voice input device 112 , the utterance may be processed using techniques described in U.S. patent application Ser. No.
- the voice-enabled device 100 may be coupled to one or more additional systems that may be configured to cooperate with the voice-enabled device 100 to interpret or otherwise process the multi-modal interactions that include combinations of natural language utterances and/or non-voice device interactions.
- the one or more additional systems may include one or more multi-modal voice-enabled devices having similar natural language processing capabilities to the voice-enabled device 100 , one or more non-voice devices having data retrieval and/or task execution capabilities, and a virtual router that coordinates interaction among the voice-enabled device 100 and the additional systems.
- the voice-enabled device 100 may include an interface to an integrated natural language voice services environment that includes a plurality of multi-modal devices, wherein the user may request content or services available through any of the multi-modal devices.
- the conversational language processor 130 may include a constellation model 132 b that provides knowledge relating to content, services, applications, intent determination capabilities, and other features available in the voice services environment, as described in co-pending U.S. patent application Ser. No. 12/127,343, entitled “System and Method for an Integrated, Multi-Modal, Multi-Device Natural Language Voice Services Environment,” filed May 27, 2008, the contents of which are hereby incorporated by reference in their entirety.
- the voice-enabled device 100 may have access to shared knowledge relating to natural language processing capabilities, context, prior interactions, domain knowledge, short-term knowledge, long-term knowledge, and cognitive models for the various systems and multi-modal devices, providing a cooperative environment for resolving the multi-modal interactions received at the voice-enabled device 100 .
- the input devices 112 and the voice-click module coupled thereto may be configured to continually monitor for one or more multi-modal interactions received at the voice-enabled device 100 .
- the input devices 112 and the voice-click module may continually monitor for one or more natural language utterances and/or one or more distinguishable non-voice device interactions, which may collectively provide the relevant context for retrieving content, executing tasks, invoking services or commands, or processing any other suitable requests.
- the input devices 112 and/or the voice-click module may signal the voice-enabled device 100 that an utterance and/or a non-voice interaction have been received.
- the non-voice interaction may provide context for sharpening recognition, interpretation, and understanding of an accompanying utterance, and moreover, the utterance may provide further context for enhancing interpretation of the accompanying non-voice interaction. Accordingly, the utterance and the non-voice interaction may collectively provide relevant context that various natural language processing components may use to determine an intent of the multi-modal interaction that includes the utterance and the non-voice interaction.
- processing the utterance included in the multi-modal interaction may be initiated at the ASR 120 , wherein the ASR 120 may generate one or more preliminary interpretations of the utterance.
- the ASR 120 may be configured to recognize one or more syllables, words, phrases, or other acoustic characteristics from the utterance using one or more dynamic recognition grammars and/or acoustic models.
- the ASR 120 may use the dynamic recognition grammars and/or the acoustic models to recognize a stream of phonemes from the utterance based on phonetic dictation techniques, as described in U.S.
- the dynamic recognition grammars and/or the acoustic models may include unstressed central vowels (e.g., “schwa”), which may reduce a search space for recognizing the stream of phonemes for the utterance.
- the ASR 120 may be configured as a multi-pass speech recognition engine 120 , as described in U.S. patent application Ser. No. 11/197,504, entitled “Systems and Methods for Responding to Natural Language Speech Utterance,” which issued as U.S. Pat. No. 7,640,160 on Dec. 29, 2009, the contents of which are hereby incorporated by reference in their entirety.
- the multi-pass speech recognition engine 120 may be configured to initially invoke a primary speech recognition engine to generate a first transcription of the utterance and, optionally, to subsequently invoke one or more secondary speech recognition engines to generate one or more secondary transcriptions of the utterance.
- the first transcription may be generated using a large list dictation grammar
- the secondary transcriptions may be generated using virtual dictation grammars having decoy words for out-of-vocabulary words, reduced vocabularies derived from a conversation history, or other dynamic recognition grammars.
- the secondary speech recognition engines may be invoked to sharpen the interpretation of the primary speech recognition engine.
- the multi-pass speech recognition engine 120 may interpret the utterance using any suitable combination of techniques that results in a preliminary interpretation derived from a plurality of transcription passes for the utterance (e.g., the secondary speech recognition engines may be invoked regardless of the confidence level for the first transcription, or the primary speech recognition engine and/or the secondary speech recognition engines may employ recognition grammars that are identical or optimized for a particular interpretation context, etc.).
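A compact sketch of the multi-pass flow described above: a primary engine produces a first transcription, and secondary engines are consulted when its confidence is low (per the patent, they may also be invoked regardless of confidence). The confidence threshold and the stub recognizers are assumptions.

```python
def multi_pass_recognize(utterance: bytes, primary, secondaries,
                         confidence_threshold: float = 0.8):
    """Run a primary recognizer (e.g., a large-list dictation grammar) and,
    when its confidence is low, sharpen the result with secondary engines
    (e.g., virtual dictation grammars with decoy words)."""
    transcript, confidence = primary(utterance)
    best = (transcript, confidence)
    if confidence < confidence_threshold:
        for engine in secondaries:
            candidate = engine(utterance)        # (transcript, confidence)
            if candidate[1] > best[1]:
                best = candidate
    return best

# Stub engines standing in for real recognizers.
primary = lambda audio: ("play dear mister fantasy", 0.55)
secondary = lambda audio: ("play dear mr. fantasy by traffic", 0.91)
print(multi_pass_recognize(b"...", primary, [secondary]))
```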
- the dynamic recognition grammars used in the ASR 120 may be optimized for different languages, contexts, domains, memory constraints, and/or other suitable criteria.
- the voice-enabled device 100 may include one or more applications 134 that provide content or services for a particular context or domain, such as a navigation application 134 .
- the dynamic recognition grammars may be optimized for various physical, temporal, directional, or other geographical characteristics (e.g., as described in co-pending U.S. patent application Ser. No. 11/954,064, entitled “System and Method for Providing a Natural Language Voice User Interface in an Integrated Voice Navigation Services Environment,” filed Dec.
- an utterance containing the word “traffic” may be subject to different interpretations depending on whether the user intended a navigation context (i.e., traffic on roads), a music context (i.e., the 1960's rock band), or a movie context (i.e., the Steven Soderbergh film).
- the recognition grammars used in the ASR 120 may be dynamically adapted to optimize accurate recognition for any given utterance (e.g., in response to incorrectly interpreting an utterance to contain a particular word or phrase, the incorrect interpretation may be removed from the recognition grammar to prevent repeating the incorrect interpretation).
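The grammar-adaptation behavior might look like the following toy sketch, where an entry that produced an incorrect interpretation is pruned so the same misinterpretation cannot be repeated; the class and phrase set are hypothetical:

```python
class DynamicGrammar:
    """Toy recognition grammar that drops entries known to cause
    misrecognitions, so the same wrong interpretation is not repeated."""
    def __init__(self, phrases):
        self.phrases = set(phrases)

    def remove_misrecognition(self, phrase: str) -> None:
        # After the user corrects an interpretation, prune the bad entry.
        self.phrases.discard(phrase)

    def __contains__(self, phrase: str) -> bool:
        return phrase in self.phrases

grammar = DynamicGrammar({"traffic report", "traffic (band)", "traffic (film)"})
grammar.remove_misrecognition("traffic (film)")   # user meant the band
print("traffic (film)" in grammar)                # -> False
```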
- the ASR 120 may provide the preliminary interpretations to the conversational language processor 130 .
- the conversational language processor 130 may generally include various natural language processing components, which may be configured to model human-to-human conversations or interactions.
- the conversational language processor 130 may invoke one or more of the natural language processing components to further analyze the preliminary interpretations of the utterance and any accompanying non-voice interactions to determine the intent of the multi-modal interactions received at the voice-enabled device 100 .
- the conversational language processor 130 may invoke an intent determination engine 132 a configured to determine the intent of the multi-modal interactions received at the voice-enabled device 100 .
- the intent determination engine 132 a may invoke a knowledge-enhanced speech recognition engine that provides long-term and short-term semantic knowledge for determining the intent, as described in co-pending U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005, the contents of which are hereby incorporated by reference in their entirety.
- the semantic knowledge may be based on a personalized cognitive model derived from one or more prior interactions with the user, a general cognitive model derived from one or more prior interactions with various different users, and/or an environmental cognitive model derived from an environment associated with the user, the voice-enabled device 100 , and/or the voice services environment (e.g., ambient noise characteristics, location sensitive information, etc.).
- the intent determination engine 132 a may invoke a context tracking engine 132 d to determine the context for the multi-modal interactions. For example, any context derived from the natural language utterance and/or the non-voice interactions in the multi-modal interactions may be pushed to a context stack associated with the context tracking engine 132 d , wherein the context stack may include various entries that may be weighted or otherwise ranked according to one or more contexts identified from the cognitive models and the context for the current multi-modal interactions. As such, the context tracking engine 132 d may determine one or more entries in the context stack that match information associated with the current multi-modal interactions to determine a most likely context for the current multi-modal interactions. The context tracking engine 132 d may then provide the most likely context to the intent determination engine 132 a , which may determine the intent of the multi-modal interactions in view of the most likely context.
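The context-stack ranking could be sketched as below. The patent says entries are weighted or ranked against the current interaction but does not give a formula, so the overlap-times-weight score here is an assumption:

```python
def most_likely_context(context_stack: list[dict], interaction_terms: set[str]):
    """Rank context-stack entries by overlap with terms from the current
    interaction, weighted by each entry's prior weight."""
    def score(entry):
        overlap = len(interaction_terms & set(entry["keywords"]))
        return entry["weight"] * (1 + overlap)
    return max(context_stack, key=score)["context"] if context_stack else None

stack = [
    {"context": "navigation", "keywords": ["traffic", "route"], "weight": 0.6},
    {"context": "music",      "keywords": ["traffic", "play", "song"], "weight": 0.9},
]
print(most_likely_context(stack, {"play", "traffic"}))  # -> "music"
```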
- the intent determination engine 132 a may reference the constellation model 132 b to determine whether to invoke any of the various systems or multi-modal devices in the voice services environment.
- the constellation model 132 b may provide intent determination capabilities, domain knowledge, semantic knowledge, cognitive models, and other information available through the various systems and multi-modal devices.
- the intent determination engine 132 a may reference the constellation model 132 b to determine whether one or more of the other systems and/or multi-modal devices should be engaged to participate in determining the intent of the multi-modal interactions.
- the intent determination engine 132 a may forward information relating to the multi-modal interactions to such systems and/or multi-modal devices, which may then determine the intent of the multi-modal interactions and return the intent determination to the voice-enabled device 100 .
- the conversational language processor 130 may be configured to engage the user in one or more cooperative conversations to resolve the intent or otherwise process the multi-modal interactions, as described in co-pending U.S. patent application Ser. No. 11/580,926, entitled “System and Method for a Cooperative Conversational Voice User Interface,” filed Oct. 16, 2006, the contents of which are hereby incorporated by reference in their entirety.
- the conversational language processor 130 may generally identify a conversational goal for the multi-modal interactions, wherein the conversational goal may be identified from analyzing the utterance, the non-voice interactions, the most likely context, and/or the determined intent.
- the conversational goal identified for the multi-modal interactions may generally control the cooperative conversation between the conversational language processor 130 and the user.
- the conversational language processor 130 may generally engage the user in one or more query conversations, didactic conversations, and/or exploratory conversations to resolve or otherwise process the multi-modal interactions.
- the conversational language processor 130 may engage the user in a query conversation in response to identifying that the conversational goal relates to retrieving discrete information or performing a particular function.
- the user may lead the conversation towards achieving the particular conversational goal, while the conversational language processor 130 may initiate one or more queries, tasks, commands, or other requests to achieve the goal and thereby support the user in the conversation.
- the conversational language processor 130 may engage the user in a didactic conversation to resolve the ambiguity or uncertainty (e.g., where noise or malapropisms interfere with interpreting the utterance, multiple likely contexts cannot be disambiguated, etc.).
- the conversational language processor 130 may lead the conversation to clarify the intent of the multi-modal interaction (e.g., generating feedback provided through an output device 114 ), while the user may regulate the conversation and provide additional multi-modal interactions to clarify the intent.
- the conversational language processor 130 may engage the user in an exploratory conversation to resolve the goal.
- the conversational language processor 130 and the user may share leader and supporter roles, wherein the ambiguous or uncertain goal may be improvised or refined over a course of the conversation.
- the conversational language processor 130 may generally engage in one or more cooperative conversations to determine the intent and resolve a particular goal for the multi-modal interactions received at the voice-enabled device 100 .
- the conversational language processor 130 may then initiate one or more queries, tasks, commands, or other requests in furtherance of the intent and the goal determined for the multi-modal interactions.
- the conversational language processor 130 may invoke one or more agents 132 c having capabilities for processing requests in a particular domain or application 134 , a voice search engine 132 f having capabilities for retrieving information requested in the multi-modal interactions (e.g., from one or more data repositories 136 , networks, or other information sources coupled to the voice-enabled device 100 ), or one or more other systems or multi-modal devices having suitable processing capabilities for furthering the intent and the goal for the multi-modal interactions (e.g., as determined from the constellation model 132 b ).
- the conversational language processor 130 may invoke an advertising application 134 in relation to the queries, tasks, commands, or other requests initiated to process the multi-modal interactions, wherein the advertising application 134 may be configured to select one or more advertisements that may be relevant to the intent and/or the goal for the multi-modal interactions, as described in co-pending U.S. patent application Ser. No. 11/671,526, entitled “System and Method for Selecting and Presenting Advertisements Based on Natural Language Processing of Voice-Based Input,” filed Feb. 6, 2007, the contents of which are hereby incorporated by reference in their entirety.
- the conversational language processor 130 may format the results for presentation to the user through the output device 114 .
- the results may be formatted into a natural language utterance that can be converted into an electronic signal and provided to the user through a speaker coupled to the output device 114 , or the results may be visually presented on a display coupled to the output device 114 , or in any other suitable manner (e.g., the results may indicate whether a particular task or command was successfully performed, or the results may include information retrieved in response to one or more queries, or the results may include a request to frame a subsequent multi-modal interaction if the results are ambiguous or otherwise incomplete, etc.).
- the conversational language processor 130 may include a misrecognition engine 132 e configured to determine whether the conversational language processor 130 incorrectly determined the intent for the multi-modal interactions.
- the misrecognition engine 132 e may determine that the conversational language processor 130 incorrectly determined the intent in response to one or more subsequent multi-modal interactions provided proximately in time to the prior multi-modal interactions, as described in U.S. patent application Ser. No. 11/200,164, entitled “System and Method of Supporting Adaptive Misrecognition in Conversational Speech,” which issued as U.S. Pat. No. 7,620,549 on Nov. 17, 2009, the contents of which are hereby incorporated by reference in their entirety.
- the misrecognition engine 132 e may monitor for one or more subsequent multi-modal interactions that include a stop word, override a current request, or otherwise indicate an unrecognized or misrecognized event. The misrecognition engine 132 e may then determine one or more tuning parameters for various components associated with the ASR 120 and/or the conversational language processor 130 to improve subsequent interpretations.
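One plausible reading of that monitoring step, with an assumed stop-word list and proximity window (the patent names neither):

```python
STOP_WORDS = {"no", "stop", "cancel", "wrong", "not that"}  # assumed examples

def indicates_misrecognition(followup_utterance: str,
                             seconds_since_last: float,
                             proximity_window: float = 10.0) -> bool:
    """Flag a likely misrecognition when a follow-up interaction arrives
    soon after the prior one and contains a stop word or an override.
    Substring matching is a deliberate simplification for this sketch."""
    if seconds_since_last > proximity_window:
        return False
    words = followup_utterance.lower()
    return any(stop in words for stop in STOP_WORDS)

print(indicates_misrecognition("no, I meant the band Traffic", 3.2))  # -> True
```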
- the voice-enabled device 100 may generally include various natural language processing components and capabilities that may be used for hybrid processing in the natural language voice services environment.
- the voice-enabled device 100 may be configured to determine the intent for various multi-modal interactions that include any suitable combination of natural language utterances and/or non-voice interactions and process one or more queries, tasks, commands, or other requests based on the determined intent.
- one or more other systems and/or multi-modal devices may participate in determining the intent and processing the queries, tasks, commands, or other requests for the multi-modal interactions to provide a hybrid processing methodology, wherein the voice-enabled device 100 and the various other systems and multi-modal devices may each perform partial processing to determine the intent and otherwise process the multi-modal interactions in a cooperative manner.
- hybrid processing in the natural language voice services environment may include one or more techniques described in U.S. Provisional Patent Application Ser. No. 61/259,827, entitled “System and Method for Hybrid Processing in a Natural Language Voice Services Environment,” filed on Nov. 10, 2009, the contents of which are hereby incorporated by reference in their entirety.
- FIG. 2 illustrates a block diagram of an exemplary system for hybrid processing in a natural language voice service environment.
- the system illustrated in FIG. 2 may generally include a voice-enabled client device 210 similar to the voice-enabled device described above in relation to FIG. 1 .
- the voice-enabled client device 210 may include any suitable combination of input and output devices 215 a respectively arranged to receive natural language multi-modal interactions and provide responses to the natural language multi-modal interactions.
- the voice-enabled client device 210 may include an Automatic Speech Recognizer (ASR) 220 a configured to generate one or more preliminary interpretations of natural language utterances received at the input device 215 a , and further configured to provide the preliminary interpretations to a conversational language processor 230 a.
- the conversational language processor 230 a on the voice-enabled client device 210 may include one or more natural language processing components, which may be invoked to determine an intent for the multi-modal interactions received at the voice-enabled client device 210 .
- the conversational language processor 230 a may then initiate one or more queries, tasks, commands, or other requests to resolve the determined intent.
- the conversational language processor 230 a may invoke one or more applications 234 a to process requests in a particular domain, query one or more data repositories 236 a to retrieve information requested in the multi-modal interactions, or otherwise engage in one or more cooperative conversations with a user of the voice-enabled client device 210 to resolve the determined intent.
- the voice-enabled client device 210 may also cooperate with one or more other systems or multi-modal devices having suitable processing capabilities for initiating queries, tasks, commands, or other requests to resolve the intent of the multi-modal interactions.
- the voice-enabled client device 210 may use a messaging interface 250 a to communicate with a virtual router 260 , wherein the messaging interface 250 a may generally include a light client (or thin client) that provides a mechanism for the voice-enabled client device 210 to transmit input to and receive output from the virtual router 260 .
- the virtual router 260 may further include a messaging interface 250 b providing a mechanism for communicating with one or more additional voice-enabled devices 270 a - n , one or more non-voice devices 280 a - n , and a voice-enabled server 240 .
- although messaging interface 250 a and messaging interface 250 b are illustrated as components that are distinct from the devices to which they are communicatively coupled, it will be apparent that such illustration is for ease of description only, as the messaging interfaces 250 a - b may be provided as on-board components that execute on the various devices illustrated in FIG. 2 to facilitate communication among the various devices in the hybrid processing environment.
- the messaging interface 250 a that executes on the voice-enabled client device 210 may transmit input from the voice-enabled client device 210 to the virtual router 260 within one or more XML messages, wherein the input may include encoded audio corresponding to natural language utterances, preliminary interpretations of the natural language utterances, data corresponding to multi-touch gestures, point of focus or attention focus selections, and/or other multi-modal interactions.
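A sketch of such an XML message using Python's standard xml.etree.ElementTree; the element names and the base64 audio encoding are assumptions, as the patent only states that input travels in one or more XML messages:

```python
import base64
import xml.etree.ElementTree as ET

def build_router_message(audio: bytes, preliminary_interpretation: str,
                         gestures: list[str]) -> str:
    """Assemble one XML message carrying the multi-modal input to the
    virtual router (hypothetical schema)."""
    root = ET.Element("interaction")
    ET.SubElement(root, "audio",
                  encoding="base64").text = base64.b64encode(audio).decode()
    ET.SubElement(root, "preliminary").text = preliminary_interpretation
    inputs = ET.SubElement(root, "nonVoiceInputs")
    for g in gestures:
        ET.SubElement(inputs, "gesture").text = g
    return ET.tostring(root, encoding="unicode")

print(build_router_message(b"\x00\x01", "dedicate this song",
                           ["touch:album-art"]))
```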
- the virtual router 260 may then further process the input using a conversational language processor 230 c having capabilities for speech recognition, intent determination, adaptive misrecognition, and/or other natural language processing.
- the conversational language processor 230 c may include knowledge relating to content, services, applications, natural language processing capabilities, and other features available through the various devices in the hybrid processing environment.
- the virtual router 260 may further communicate with the voice-enabled devices 270 , the non-voice devices 280 , and/or the voice-enabled server 240 through the messaging interface 250 b to coordinate processing for the input received from the voice-enabled client device 210 .
- the virtual router 260 may identify one or more of the devices that have suitable features and/or capabilities for resolving the intent of the input received from the voice-enabled client device 210 .
- the virtual router 260 may then forward one or more components of the input to the identified devices through respective messaging interfaces 250 b , wherein the identified devices may be invoked to perform any suitable processing for the components of the input forwarded from the virtual router 260 . In one implementation, the identified devices may then return any results of the processing to the virtual router 260 through the respective messaging interfaces 250 b , wherein the virtual router 260 may collate the results of the processing and return the results to the voice-enabled client device 210 through the messaging interface 250 a.
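The forward-and-collate behavior might be sketched as follows, with the capability names, handlers, and first-capable-device policy invented for illustration:

```python
def route_and_collate(input_components: dict, devices: dict) -> dict:
    """Forward each input component to a device advertising a matching
    capability, then collate the results for return to the client."""
    results = {}
    for component, payload in input_components.items():
        for device_name, device in devices.items():
            if component in device["capabilities"]:
                results[component] = device["handler"](payload)
                break  # first capable device wins in this sketch
    return results

devices = {
    "voice-server": {"capabilities": {"transcription"},
                     "handler": lambda p: "dedicate dear mr. fantasy"},
    "media-box":    {"capabilities": {"content-search"},
                     "handler": lambda p: {"title": "Dear Mr. Fantasy"}},
}
print(route_and_collate({"transcription": b"audio",
                         "content-search": "dear mr fantasy"}, devices))
```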
- the virtual router 260 may communicate with any of the devices available in the hybrid processing environment through messaging interfaces 250 a - b to coordinate cooperative hybrid processing for multi-modal interactions or other natural language inputs received from the voice-enabled client device 210 .
- the cooperative hybrid processing may be used to enhance performance in embedded processing architectures in which the voice-enabled client device 210 includes a constrained amount of resources (e.g., the voice-enabled client device 210 may be a mobile device having a limited amount of internal memory or other dedicated resources for natural language processing).
- one or more components of the voice-enabled client device 210 may be configured to optimize efficiency of on-board natural language processing to reduce or eliminate bottlenecks, lengthy response times, or degradations in performance.
- optimizing the efficiency of the on-board natural language processing may include configuring the ASR 220 a to use a virtual dictation grammar having decoy words for out-of-vocabulary words, reduced vocabularies derived from a conversation history, or other dynamic recognition grammars (e.g., grammars optimized for particular languages, contexts, domains, memory constraints, and/or other suitable criteria).
- the on-board applications 234 a and/or data repositories 236 a may be associated with an embedded application suite providing particular features and capabilities for the voice-enabled client device 210 .
- the voice-enabled client device 210 may be embedded within an automotive telematics system, a personal navigation device, a global positioning system, a mobile phone, or another device in which users often request location-based services.
- the on-board applications 234 a and the data repositories 236 a in the embedded application suite may be optimized to provide certain location-based services that can be efficiently processed on-board (e.g., destination entry, navigation, map control, music search, hands-free dialing, etc.).
- although the components of the voice-enabled client device 210 may be optimized for efficiency in embedded architectures, a user may nonetheless request any suitable content, services, applications, and/or other features available in the hybrid processing environment, and the other devices in the hybrid processing environment may collectively provide natural language processing capabilities to supplement the embedded natural language processing capabilities for the voice-enabled client device 210 .
- the voice-enabled client device 210 may perform preliminary processing for a particular multi-modal interaction using the embedded natural language processing capabilities (e.g., the on-board ASR 220 a may perform advanced virtual dictation to partially transcribe an utterance in the multi-modal interaction, the on-board conversational language processor 230 a may determine a preliminary intent of the multi-modal interaction, etc.), wherein results of the preliminary processing may be provided to the virtual router 260 for further processing.
- the voice-enabled client device 210 may also communicate input corresponding to the multi-modal interaction to the virtual router 260 in response to determining that on-board capabilities cannot suitably interpret the interaction (e.g., if a confidence level for a partial transcription does not satisfy a particular threshold), or in response to determining that the interaction should be processed off-board (e.g., if a preliminary interpretation indicates that the interaction relates to a local search request requiring large computations to be performed on the voice-enabled server 240 ).
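That on-board/off-board decision reduces to a small predicate; the 0.7 threshold and the set of server-side tasks below are illustrative assumptions:

```python
def should_process_off_board(confidence: float, preliminary_intent: str,
                             threshold: float = 0.7) -> bool:
    """Send the interaction to the virtual router when the on-board
    transcription confidence is low, or when the preliminary intent names
    a task known to need server-side resources."""
    server_side_tasks = {"local_search"}   # e.g., large-computation requests
    return confidence < threshold or preliminary_intent in server_side_tasks

print(should_process_off_board(0.45, "music_play"))    # low confidence -> True
print(should_process_off_board(0.92, "local_search"))  # server task    -> True
print(should_process_off_board(0.92, "music_play"))    # stay on-board  -> False
```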
- the virtual router 260 may capture the input received from the voice-enabled client device 210 and coordinate further processing among the voice-enabled devices 270 and the voice-enabled server 240 that provide natural language processing capabilities in addition to the non-voice devices 280 that provide capabilities for retrieving data or executing tasks. Furthermore, in response to the virtual router 260 invoking one or more of the voice-enabled devices 270 , the input provided to the voice-enabled devices 270 may be optimized to suit the processing requested from the invoked voice-enabled devices 270 (e.g., to avoid over-taxing processing resources, a particular voice-enabled device 270 may be provided a partial transcription or a preliminary interpretation and resolve the intent for a given context or domain).
- the input provided to the voice-enabled devices 270 may further include encoded audio corresponding to natural language utterances and any other data associated with the multi-modal interaction.
- the voice-enabled server 240 may have a natural language processing architecture similar to the voice-enabled client device 210 , except that the voice-enabled server 240 may include substantial processing resources that obviate constraints that the voice-enabled client device 210 may be subject to.
- when the voice-enabled server 240 cooperates in the hybrid processing for the multi-modal interaction, the encoded audio corresponding to the natural language utterances and the other data associated with the multi-modal interaction may be provided to the voice-enabled server 240 to maximize a likelihood of the voice-enabled server 240 correctly determining the intent of the multi-modal interaction (e.g., the ASR 220 b may perform multi-pass speech recognition to generate an accurate transcription for the natural language utterance, the conversational language processor 230 b may arbitrate among intent determinations performed in any number of different contexts or domains, etc.).
- the hybrid processing techniques performed in the environment illustrated in FIG. 2 may generally include various different devices, which may or may not include natural language capabilities, cooperatively determining the intent of a particular multi-modal interaction and taking action to resolve the intent.
- the voice-enabled client device 210 may include a suitable amount of memory or other resources that can be dedicated to natural language processing (e.g., the voice-enabled client device 210 may be a desktop computer or other device that can process natural language without substantially degraded performance).
- one or more of the components of the voice-enabled client device 210 may be configured to perform the on-board natural language processing in a manner that could otherwise cause bottlenecks, lengthy response times, or degradations in performance in an embedded architecture.
- optimizing the on-board natural language processing may include configuring the ASR 220 a to use a large list dictation grammar in addition to and/or instead of the virtual dictation grammar used in embedded processing architectures.
- cooperative hybrid processing techniques may be substantially similar regardless of whether the voice-enabled client device 210 has an embedded or non-embedded architecture.
- cooperative hybrid processing may include the voice-enabled client device 210 optionally performing preliminary processing for a natural language multi-modal interaction and communicating input corresponding to the multi-modal interaction to the virtual router 260 for further processing through the messaging interface 250 a .
- the cooperative hybrid processing may include the virtual router 260 coordinating the further processing for the input among the various devices in the hybrid environment through messaging interface 250 b , and subsequently returning any results of the processing to the voice-enabled client device 210 through messaging interface 250 a.
- FIG. 3 illustrates a flow diagram of an exemplary method for initializing various devices that cooperate to perform hybrid processing in a natural language voice services environment.
- the hybrid processing environment may generally include communication among various different devices that may cooperatively process natural language multi-modal interactions.
- the various devices in the hybrid processing environment may include a virtual router having one or more messaging interfaces for communicating with one or more voice-enabled devices, one or more non-voice devices, and/or a voice-enabled server.
- the method illustrated in FIG. 3 may be used to initialize communication in the hybrid processing environment to enable subsequent cooperative processing for one or more natural language multi-modal interactions received at any particular device in the hybrid processing environment.
- the various devices in the hybrid processing environment may be configured to continually listen or otherwise monitor respective input devices to determine whether a natural language multi-modal interaction has occurred.
- the method illustrated in FIG. 3 may be used to calibrate, synchronize, or otherwise initialize the various devices that continually listen for the natural language multi-modal interactions.
- the virtual router, the voice-enabled devices, the non-voice devices, the voice-enabled server, and/or other devices in the hybrid processing environment may be configured to provide various different capabilities or services, wherein the initialization method illustrated in FIG. 3 may be used to ensure that the hybrid processing environment obtains a suitable signal to process any particular natural language multi-modal interaction and appropriately invokes one or more of the devices to cooperatively process the natural language multi-modal interaction.
- the method illustrated in FIG. 3 and described herein may be invoked to register the various devices in the hybrid processing environment, register new devices added to the hybrid processing environment, publish domains, services, intent determination capabilities, and/or other features supported on the registered devices, synchronize local timing for the registered devices, and/or initialize any other suitable aspect of the devices in the hybrid processing environment.
- initializing the various devices in the hybrid processing environment may include an operation 310 , wherein a device listener may be established for each of the devices in the hybrid processing environment.
- the device listeners established in operation 310 may generally include any suitable combination of instructions, firmware, or other routines that can be executed on the various devices to determine capabilities, features, supported domains, or other information associated with the devices.
- the device listeners established in operation 310 may be configured to communicate with the respective devices using the Universal Plug and Play protocol designed for ancillary computer devices, although it will be apparent that any appropriate mechanism for communicating with the various devices may be suitably substituted.
- the device listeners may then be synchronized in an operation 320 .
- each of the registered devices may have an internal clock or other timing mechanism that indicates local timing for an incoming natural language multi-modal interaction, wherein operation 320 may be used to synchronize the device listeners established in operation 310 according to the internal clocks or timing mechanisms for the respective devices.
- synchronizing the device listeners in operation 320 may include each device listener publishing information relating to the internal clock or local timing for the respective device.
- the device listeners may publish the information relating to the internal clock or local timing to the virtual router, whereby the virtual router may subsequently coordinate cooperative hybrid processing for natural language multi-modal interactions received at one or more of the devices in the hybrid processing environment.
- the information relating to the internal clock or local timing for the various devices in the hybrid processing environment may be further published to the other voice-enabled devices, the other non-voice devices, the voice-enabled server, and/or any other suitable device that may participate in cooperative processing for natural language multi-modal interactions provided to the hybrid processing environment.
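- as a simplified illustration of the timing publication described above, the sketch below computes per-device clock offsets against the virtual router's clock and maps device-local timestamps onto a common timeline; a production implementation would also compensate for network latency (e.g., NTP-style round-trip measurement), which is omitted here as an assumption for brevity.

```python
# Hypothetical sketch: synchronizing device-listener clocks against the
# virtual router's clock so that timestamps captured on different
# devices can be compared on one timeline.

def compute_offsets(router_now: float, published_clocks: dict) -> dict:
    """Offset = router time minus device-local time, per device."""
    return {dev: router_now - local for dev, local in published_clocks.items()}

def to_router_time(local_ts: float, device_id: str, offsets: dict) -> float:
    """Map a device-local timestamp onto the router's timeline."""
    return local_ts + offsets[device_id]

# Each device listener publishes its local clock reading...
published = {"mobile-phone": 1000.0, "telematics-unit": 2500.5}
offsets = compute_offsets(router_now=5000.0, published_clocks=published)

# ...so an utterance captured at local time 1003.2 on the phone can be
# aligned with a touch event at local time 2503.6 on the telematics unit.
print(to_router_time(1003.2, "mobile-phone", offsets))      # 5003.2
print(to_router_time(2503.6, "telematics-unit", offsets))   # 5003.1
```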
- the device listeners may continually listen to or otherwise monitor respective input devices on the registered devices in an operation 330 to detect information relating to one or more natural language multi-modal interactions.
- the device listeners may be configured to detect occurrences of the natural language multi-modal interactions in response to detecting an incoming natural language utterance, a point of focus or attention focus selection associated with an incoming natural language utterance, and/or another interaction or sequence of interactions that relates to an incoming natural language multi-modal interaction.
- operation 330 may further include the appropriate device listeners capturing the natural language utterance and/or related non-voice device interactions that relate to the natural language utterance.
- the captured natural language utterance and related non-voice device interactions may then be analyzed in an operation 340 to manage subsequent cooperative processing in the hybrid processing environment.
- operation 340 may determine whether one device listener or multiple device listeners captured information relating to the natural language multi-modal interaction detected in operation 330 .
- the hybrid processing environment may generally include various different devices that cooperate to process natural language multi-modal interactions, whereby the information relating to the natural language multi-modal interaction may be provided to one or a plurality of the devices in the hybrid processing environment.
- operation 340 may determine whether one device listener or multiple device listeners captured the information relating to the natural language multi-modal interaction in order to determine whether the hybrid processing environment needs to synchronize signals among various device listeners that captured information relating to the multi-modal interaction.
- a user interacting with the hybrid processing environment may view a web page presented on a non-voice display device and provide a natural language multi-modal interaction that requests more information about purchasing a product displayed on the web page.
- the user may then select text on the web page containing the product name using a mouse, keyboard, or other non-voice input device and provide a natural language utterance to a microphone or other voice-enabled device such as “Is this available on Amazon.com?”
- a device listener associated with the non-voice display device may detect the text selection for the product name in operation 330 , and a device listener associated with the voice-enabled device may further detect the natural language utterance inquiring about the availability of the product in operation 330 .
- the user may be within a suitable range of multiple voice-enabled devices, which may result in multiple device listeners capturing different signals corresponding to the natural language utterance (e.g., the interaction may occur within range of a voice-enabled mobile phone, a voice-enabled telematics device, and/or other voice-enabled devices, depending on the arrangement and configuration of the various devices in the hybrid processing environment).
- a sequence of operations that synchronizes different signals relating to the multi-modal interaction received at the multiple device listeners may be initiated in response to operation 340 determining that multiple device listeners captured information relating to the natural language multi-modal interaction.
- the natural language multi-modal interaction may be processed in an operation 390 without executing the sequence of operations that synchronizes different signals (i.e., the one device listener provides all of the input information relating to the multi-modal interaction, such that hybrid processing for the interaction may be initiated in operation 390 without synchronizing different input signals).
- the sequence of synchronization operations may also be initiated in response to one device listener capturing a natural language utterance and one or more non-voice interactions, in order to align different signals relating to the natural language multi-modal interaction, as described in greater detail herein.
- each device listener that receives an input relating to the natural language multi-modal interaction detected in operation 330 may have an internal clock or other local timing mechanism.
- the sequence of synchronization operations for the different signals may be initiated in an operation 350 .
- operation 350 may include the one or more device listeners determining local timing information for the respective signals based on the internal clock or other local timing mechanism associated with the respective device listeners, wherein the local timing information determined for the respective signals may then be synchronized.
- synchronizing the local timing information for the respective signals may be initiated in an operation 360 .
- operation 360 may generally include notifying each device listener that received an input relating to the multi-modal interaction of the local timing information determined for each respective signal.
- each device listener may provide local timing information for a respective signal to the virtual router, and the virtual router may then provide the local timing information for all of the signals to each device listener.
- operation 360 may result in each device listener receiving a notification that includes local timing information for each of the different signals that relate to the natural language multi-modal interaction detected in operation 330 .
- the virtual router may collect the local timing information for each of the different signals from each of the device listeners and further synchronize the local timing information for the different signals to enable hybrid processing for the natural language multi-modal interaction.
- any particular natural language multi-modal interaction may include at least a natural language utterance, and may further include one or more additional device interactions relating to the natural language utterance.
- the utterance may generally be received prior to, contemporaneously with, or subsequent to the additional device interactions.
- the local timing information for the different signals may be synchronized in an operation 370 to enable hybrid processing for the natural language multi-modal interaction.
- operation 370 may include aligning the local timing information for one or more signals corresponding to the natural language utterance and/or one or more signals corresponding to any additional device interactions that relate to the natural language utterance.
- operation 370 may further include aligning the local timing information for the natural language utterance signals with the signals corresponding to the additional device interactions.
- any devices that participate in the hybrid processing for the natural language multi-modal interaction may be provided with voice components and/or non-voice components that have been aligned with one another.
- operation 370 may be executed on the virtual router, which may then provide the aligned timing information to any other device that may be invoked in the hybrid processing.
- one or more of the other devices that participate in the hybrid processing may locally align the timing information (e.g., in response to the virtual router invoking the voice-enabled server in the hybrid processing, resources associated with the voice-enabled server may be employed to align the timing information and preserve communication bandwidth at the virtual router).
- the virtual router and/or other devices in the hybrid processing environment may analyze the signals corresponding to the natural language utterance in an operation 380 to select the cleanest sample for further processing.
- the virtual router may include a messaging interface for receiving an encoded audio sample corresponding to the natural language utterance from one or more of the voice-enabled devices.
- the audio sample received at the virtual router may include the natural language utterance encoded in the MPEG-1 Audio Layer 3 (MP3) format or another lossy format to preserve communication bandwidth in the hybrid processing environment.
- the audio sample may alternatively (or additionally) be encoded using the Free Lossless Audio Codec (FLAC) format or another lossless format in response to the hybrid processing environment having sufficient communication bandwidth for processing lossless audio that may provide a better sample of the natural language utterance.
- the signal corresponding to the natural language utterance that provides the cleanest sample may be selected in operation 380 .
- one voice-enabled device may be in a noisy environment or otherwise associated with conditions that interfere with generating a clean audio sample, while another voice-enabled device may include a microphone array or be configured to employ techniques that maximize fidelity of encoded speech.
- in response to multiple signals corresponding to the natural language utterance being received in operation 330 , the cleanest signal may be selected in operation 380 and hybrid processing for the natural language utterance may then be initiated in an operation 390 .
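- the following sketch illustrates one possible selection heuristic: ranking decoded audio samples by a crude signal-to-noise proxy and keeping the best-scoring device's signal; the scoring function and sample data are illustrative assumptions, not a method defined in this disclosure.

```python
# Hypothetical sketch: choosing the "cleanest" of several captured audio
# signals. A real system could use proper SNR estimation, microphone-array
# metadata, or recognizer confidence; here, a crude proxy ranks decoded
# PCM frames by the ratio of mean energy to the quietest-decile energy.

def snr_estimate(samples: list) -> float:
    """Crude SNR proxy: mean energy over the noise-floor energy."""
    energies = sorted(s * s for s in samples)
    floor_count = max(1, len(energies) // 10)
    noise_floor = sum(energies[:floor_count]) / floor_count
    mean_energy = sum(energies) / len(energies)
    return mean_energy / (noise_floor + 1e-12)

def select_cleanest(candidates: dict) -> str:
    """candidates maps device_id -> decoded PCM samples (floats)."""
    return max(candidates, key=lambda dev: snr_estimate(candidates[dev]))

phone_pcm = [0.01, 0.4, -0.5, 0.45, -0.4, 0.02]         # strong speech peaks
telematics_pcm = [0.2, 0.25, -0.22, 0.21, -0.24, 0.23]  # uniformly noisy
print(select_cleanest({"phone": phone_pcm, "telematics": telematics_pcm}))
# -> "phone"
```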
- the synchronization and initialization techniques illustrated in FIG. 3 and described herein may ensure that the hybrid processing environment synchronizes each of the signals corresponding to the natural language multi-modal interaction and generates an input for further processing in operation 390 most likely to result in a correct intent determination. Furthermore, in synchronizing the signals and selecting the cleanest audio sample for the further processing in operation 390 , the techniques illustrated in FIG. 3 and described herein may ensure that none of the devices in the hybrid processing environment take action on a natural language multi-modal interaction until the appropriate signals to be used in operation 390 have been identified. As such, hybrid processing for the natural language multi-modal interaction may be initiated in operation 390 , as described in further detail herein.
- FIG. 4 illustrates a flow diagram of an exemplary method for performing hybrid processing at one or more client devices in a natural language voice services environment.
- the one or more client devices may perform the hybrid processing in cooperation with a virtual router through a messaging interface that communicatively couples the client devices and the virtual router.
- the messaging interface may generally include a light client (or thin client) that provides a mechanism for the client devices to transmit input relating to a natural language multi-modal interaction to the virtual router, and that further provides a mechanism for the client devices to receive output relating to the natural language multi-modal interaction from the virtual router.
- the hybrid processing at the client devices may be initiated in response to one or more of the client devices receiving a natural language multi-modal interaction in an operation 410 .
- the natural language multi-modal interaction may generally include a natural language utterance received at a microphone or other voice-enabled input device coupled to the client device that received the natural language multi-modal interaction, and may further include one or more other additional input modalities that relate to the natural language utterance (e.g., text selections, button presses, multi-touch gestures, etc.).
- the natural language multi-modal interaction received in operation 410 may include one or more queries, commands, or other requests provided to the client device, wherein the hybrid processing for the natural language multi-modal interaction may then be initiated in an operation 420 .
- the natural language voice services environment may generally include one or more voice-enabled client devices, one or more non-voice devices, a voice-enabled server, and a virtual router arranged to communicate with each of the voice-enabled client devices, the non-voice devices, and the voice-enabled server.
- the virtual router may therefore coordinate the hybrid processing for the natural language multi-modal interaction among the voice-enabled client devices, the non-voice devices, and the voice-enabled server.
- the hybrid processing techniques described herein may generally refer to the virtual router coordinating cooperative processing for the natural language multi-modal interaction in a manner that involves resolving an intent of the natural language multi-modal interaction in multiple stages.
- each of the client devices that received an input relating to the natural language multi-modal interaction may perform initial processing for the respective input in an operation 420 .
- a client device that received the natural language utterance included in the multi-modal interaction may perform initial processing in operation 420 that includes encoding an audio sample corresponding to the utterance, partially or completely transcribing the utterance, determining a preliminary intent for the utterance, or performing any other suitable preliminary processing for the utterance.
- the initial processing in operation 420 may also be performed at a client device that received one or more of the additional input modalities relating to the utterance.
- the initial processing performed in operation 420 for the additional input modalities may include identifying selected text, selected points of focus or attention focus, or generating any other suitable data that can be used to further interpret the utterance.
- an operation 430 may then include determining whether the hybrid processing environment has been configured to automatically route inputs relating to the natural language multi-modal interaction to the virtual router.
- operation 430 may determine that automatic routing has been configured to occur in response to multiple client devices receiving the natural language utterance included in the multi-modal interaction in operation 410 .
- the initial processing performed in operation 420 may include the multiple client devices encoding respective audio samples corresponding to the utterance, wherein messages that include the encoded audio samples may then be sent to the virtual router in an operation 460 .
- the virtual router may then select one of the encoded audio samples that provides a cleanest signal and coordinate subsequent hybrid processing for the natural language multi-modal interaction, as will be described in greater detail below with reference to FIG. 5 .
- operation 430 may determine that automatic routing has been configured to occur in response to the initial processing resulting in a determination that the multi-modal interaction relates to a request that may be best suited for processing on the voice-enabled server (e.g., the request may relate to a location-based search query or another command or task that requires resources managed on the voice-enabled server, content, applications, domains, or other information that resides on one or more devices other than the client device that received the request, etc.).
- the hybrid processing environment may be configured for automatic routing in response to other conditions and/or regardless of whether any attendant conditions exist, as appropriate.
- the virtual router may provide results of the hybrid processing to the client device in an operation 470 .
- the results provided to the client device in operation 470 may include a final intent determination for the natural language multi-modal interaction, information requested in the interaction, data generated in response to executing a command or task requested in the interaction, and/or other results that enable the client device to complete processing for the natural language request in operation 480 .
- operation 480 may include the client device executing a query, command, task, or other request based on the final intent determination returned from the virtual router, presenting the requested information returned from the virtual router, confirming that the requested command or task has been executed, and/or performing any additional processing to resolve the natural language request.
- alternatively, in response to operation 430 determining that automatic routing has not been configured, the client device may further process the natural language multi-modal interaction in an operation 440 .
- the further processing in operation 440 may include the client device attempting to determine an intent for the natural language multi-modal interaction using local natural language processing capabilities. For example, the client device may merge any non-voice input modalities included in the multi-modal interaction with a transcription for the utterance included in the multi-modal interaction.
- the conversational language processor on the client device may then determine the intent for the multi-modal interaction utilizing local information relating to context, domains, shared knowledge, criteria values, or other information.
- the client device may then generate one or more interpretations for the utterance to determine the intent for the multi-modal interaction (e.g., identifying a conversation type, one or more requests contained in the interactions, etc.).
- operation 440 may further include determining a confidence level for the intent determination generated on the client device (e.g., the confidence level may be derived in response to whether the client device includes a multi-pass speech recognition engine, whether the utterance contained any ambiguous words or phrases, whether the intent differs from one context to another, etc.).
- an operation 450 may then determine whether or not to invoke off-board processing depending on the confidence level determined in operation 440 .
- operation 450 may generally include determining whether the intent determined in operation 440 satisfies a particular threshold value that indicates an acceptable confidence level for taking action on the determined intent.
- in response to the confidence level satisfying the threshold value, operation 450 may determine not to invoke off-board processing.
- the confidence level satisfying the threshold value may indicate that the client device has sufficient information to take action on the determined intent, whereby the client device may then process one or more queries, commands, tasks, or other requests to resolve the multi-modal interaction in operation 480 .
- alternatively, in response to the confidence level not satisfying the threshold value, operation 450 may invoke off-board processing, which may include sending one or more messages to the virtual router in operation 460 .
- the one or more messages may cause the virtual router to invoke additional hybrid processing for the multi-modal interaction in a similar manner as noted above, and as will be described in greater detail herein with reference to FIG. 5 .
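- the decision flow in operations 440 through 460 may be sketched as follows, where the threshold value, message schema, and the stand-in local intent determiner are all hypothetical assumptions rather than elements defined in this disclosure.

```python
# Hypothetical sketch of the FIG. 4 decision flow on a client device:
# attempt local intent determination, then invoke off-board processing
# via the virtual router only when local confidence falls below a
# threshold. Values and field names are illustrative.

CONFIDENCE_THRESHOLD = 0.75

def determine_intent_locally(utterance_text: str, non_voice_inputs: list):
    """Stand-in for the local conversational language processor."""
    if "buy" in utterance_text.lower():
        return {"intent": "purchase", "criteria": non_voice_inputs}, 0.9
    return {"intent": "unknown", "criteria": non_voice_inputs}, 0.3

def process_interaction(utterance_text, non_voice_inputs, send_to_router):
    intent, confidence = determine_intent_locally(utterance_text, non_voice_inputs)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent                      # operation 480: act locally
    # operation 460: off-board processing -- forward everything the
    # router needs to coordinate multi-stage hybrid processing.
    return send_to_router({
        "type": "hybrid_processing_request",
        "utterance": utterance_text,
        "non_voice_inputs": non_voice_inputs,
        "preliminary_intent": intent,
        "confidence": confidence,
    })

result = process_interaction("Buy that song", ["selected:track-42"],
                             send_to_router=lambda msg: {"intent": "resolved-by-router"})
print(result)   # local path: {'intent': 'purchase', 'criteria': [...]}
```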
- FIG. 5 illustrates a flow diagram of an exemplary method for performing hybrid processing at a virtual router in a natural language voice services environment.
- the virtual router may coordinate the hybrid processing for natural language multi-modal interactions received at one or more client devices.
- in an operation 510 , the virtual router may receive one or more messages relating to a natural language multi-modal interaction received at one or more of the client devices in the voice services environment.
- the virtual router may include a messaging interface that communicatively couples the virtual router to the client devices and a voice-enabled server, wherein the messaging interface may generally include a light client (or thin client) that provides a mechanism for the virtual router to receive input from one or more of the client devices and/or the voice-enabled server, and further to transmit output to one or more of the client devices and/or the voice-enabled server.
- the messages received in operation 510 may generally include any suitable processing results for the multi-modal interactions, whereby the virtual router may coordinate hybrid processing in a manner that includes multiple processing stages that may occur at the virtual router, one or more of the client devices, the voice-enabled server, or any suitable combination thereof.
- the virtual router may analyze the messages received in operation 510 to determine whether to invoke the hybrid processing in a peer-to-peer mode.
- one or more of the messages may include a preliminary intent determination that the virtual router can use to determine whether to invoke one or more of the client devices, the voice-enabled server, or various combinations thereof in order to execute one or more of the multiple processing stages for the multi-modal interaction.
- one or more of the messages may include an encoded audio sample that the virtual router forwards to one or more of the various devices in the hybrid processing environment.
- the virtual router may analyze the messages received in operation 510 to determine whether or not to invoke the voice-enabled server to process the multi-modal interaction (e.g., the messages may include a preliminary intent determination that indicates that the multi-modal interaction includes a location-based request that requires resources residing on the server).
- the virtual router may forward the messages to the server in an operation 530 .
- the messages forwarded to the server may generally include the encoded audio corresponding to the natural language utterance and any additional information relating to other input modalities relevant to the utterance.
- the voice-enabled server may include various natural language processing components that can suitably determine the intent of the multi-modal interaction, whereby the messages sent to the voice-enabled server may include the encoded audio in order to permit the voice-enabled server to determine the intent independently of any preliminary processing on the client devices that may be inaccurate or incomplete.
- results of the processing may then be returned to the virtual router in an operation 570 .
- the results may include the intent determination for the natural language multi-modal interaction, results of any queries, commands, tasks, or other requests performed in response to the determined intent, or any other suitable results, as will be apparent.
- the virtual router may coordinate the hybrid processing among one or more of the client devices, the voice-enabled server, or any suitable combination thereof.
- the virtual router may determine a context for the natural language multi-modal interaction in an operation 540 and select one or more peer devices based on the determined context in an operation 550 .
- one or more of the client devices may be configured to provide content and/or services in the determined context, whereby the virtual router may forward one or more messages to such devices in an operation 560 in order to request such content and/or services.
- the multi-modal interaction may include a compound request that relates to multiple contexts supported on different devices, whereby the virtual router may forward messages to each such device in operation 560 in order to request appropriate content and/or services in the different contexts.
- the interaction may include a request to be processed on the voice-enabled server, yet the request may require content and/or services that reside on one or more of the client devices (e.g., a location-based query relating to an entry in an address book on one or more of the client devices).
- the virtual router may generally forward various messages to the selected peer devices in operation 560 to manage the multiple stages in the hybrid processing techniques described herein. For example, the virtual router may send messages to one or more voice-enabled client devices that have intent determination capabilities in a particular context, one or more non-voice client devices that have access to content, services, and/or other resources needed to process the multi-modal interaction, or any appropriate combination thereof.
- the virtual router may therefore send messages to the client devices and/or the voice-enabled server in operation 560 and receive responsive messages from the client devices and/or the voice-enabled server in operation 570 in any appropriate manner (e.g., in parallel, sequentially, iteratively, etc.). The virtual router may then collate the results received in the responsive messages in operation 580 and return the results to one or more of the client devices for any final processing and/or presentation of the results.
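- the routing behavior described in operations 520 through 580 may be sketched as follows; the context names, device identifiers, and message fields are illustrative assumptions.

```python
# Hypothetical sketch of the FIG. 5 router logic: forward to the
# voice-enabled server when the preliminary intent needs server-side
# resources, otherwise select peer devices by context and collate their
# responses. All names and schemas are illustrative.

SERVER_CONTEXTS = {"location_search"}   # assumed to need server resources

def route(message: dict, devices_by_context: dict, send, server="voice-server"):
    context = message.get("preliminary_intent", {}).get("context")
    if context in SERVER_CONTEXTS:
        # operations 530/570: the server determines intent from the
        # encoded audio, independently of client preliminary processing.
        return [send(server, message)]
    # operations 540-570: peer-to-peer mode, one message per peer device.
    peers = devices_by_context.get(context, [])
    return [send(peer, message) for peer in peers]

def collate(responses: list) -> dict:
    # operation 580: merge partial results into one response.
    merged = {}
    for response in responses:
        merged.update(response)
    return merged

fake_send = lambda device, msg: {device: f"handled {msg['preliminary_intent']['context']}"}
msg = {"preliminary_intent": {"context": "music"}}
print(collate(route(msg, {"music": ["phone", "media-player"]}, fake_send)))
```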
- FIG. 6 illustrates a block diagram of an exemplary system for providing a natural language content dedication service.
- the system for providing the natural language content dedication service may generally include a voice-enabled client device 610 that can communicate with a content dedication system 665 through a messaging interface 650 .
- the voice-enabled client device 610 may communicate with the content dedication system 665 in a similar manner as described above with reference to FIGS. 2 through 5 .
- the content dedication system 665 may include a virtual router 660 and a voice-enabled server 640 that can service multi-modal natural language requests provided to the voice-enabled client device 610 in a similar manner as described above, and may further include a billing system 638 that may be used to process transactions relating to the content dedication service.
- although FIG. 6 illustrates the content dedication system 665 as including the virtual router 660 , the voice-enabled server 640 , and the billing system 638 within one component, such illustration will be understood to be for ease of description only, in that the virtual router 660 , the voice-enabled server 640 , and the billing system 638 may in fact be arranged within any number of components that can suitably communicate with one another to process multi-modal natural language requests relating to the content dedication service (e.g., the virtual router 660 may communicate with the voice-enabled server 640 through another messaging interface distinct from the messaging interface 650 for communicating with the voice-enabled client device 610 , as shown in the exemplary system illustrated in FIG. 2 ).
- the system shown in FIG. 6 may provide the natural language content dedication service to any suitable voice-enabled client device 610 having a suitable combination of input and output devices 615 a that can receive natural language multi-modal interactions and provide responses to the natural language multi-modal interactions, wherein the input and output devices 615 a may be further arranged to receive any other suitable type of input and provide any other suitable type of output.
- the voice-enabled client device 610 may comprise a mobile phone that includes a keypad input device 615 a , a touch screen input device 615 a , or other input mechanisms 615 a in addition to any input microphones or other suitable input devices 615 a that can receive voice signals.
- the mobile phone may further include an output display device 615 a in addition to any output microphones or other suitable output devices 615 a that can output audible signals.
- a user of the voice-enabled client device 610 may be listening to music, watching video, or otherwise interacting with content through the input and output devices 615 a and provide a multi-modal natural language request to engage in a transaction to dedicate the music, video, or other content, as will be described in greater detail below.
- the voice-enabled client device 610 may be included within a hybrid processing environment that may include a plurality of different devices, whereby the content dedication request may relate to content played on a different device from the voice-enabled client device 610 , although the content dedication request may relate to any suitable content (i.e., the request need not necessarily relate to played content, as users may provide natural language to request content dedications for any suitable content).
- in response to receiving a natural language utterance in a multi-modal interaction, the voice-enabled client device 610 may invoke an Automatic Speech Recognizer (ASR) 620 a to generate one or more preliminary interpretations of the utterance.
- the ASR 620 a may then provide the preliminary interpretations to a conversational language processor 630 a , which may attempt to determine an intent for the multi-modal interaction.
- the conversational language processor 630 a may determine a most likely context for the interaction from the preliminary interpretations of the utterance, any accompanying non-speech inputs in the multi-modal interaction that relate to the utterance, contexts associated with prior requests, short-term and long-term shared knowledge, or any other suitable information for interpreting the multi-modal interaction.
- in response to the conversational language processor 630 a determining that the intent of the multi-modal interaction includes a content dedication request, a content dedication application 634 a may be invoked to resolve the content dedication request.
- an initial multi-modal interaction may include the utterance “Find ‘Superstylin’ by Groove Armada.”
- the ASR 620 a may generate a preliminary interpretation that includes the words “Find” and “Superstylin” and the phrase “Groove Armada.”
- the ASR 620 a may then provide the preliminary interpretation to the conversational language processor 630 a , which may determine that the word “Find” indicates that the most likely intent of the interaction includes a search, while the word “Superstylin” and the phrase “Groove Armada” provide criteria for the search.
- the conversational language processor 630 a may establish a music context for the interaction and attempt to resolve the search request. For example, the conversational language processor 630 a may search one or more data repositories 636 a that contain music information to identify music having a song title of “Superstylin” and an artist name of “Groove Armada,” and the conversational language processor 630 a may further cooperate with other devices in the hybrid processing environment to search for the song (e.g., in response to the local data repositories 636 a not yielding adequate results, another device in the environment having a larger music data repository than the client device 610 may be invoked).
- the conversational language processor 630 a may then receive appropriate results for the search and present the results to the user through the output device 615 a (e.g., displaying information about the song, playing a sample audio clip of the song, displaying options to purchase the song, recommending similar songs or similar artists, etc.).
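- by way of illustration only, one simplified way to map the preliminary interpretation of the example utterance onto a music-search intent is sketched below; an actual conversational language processor would use much richer grammars, context, and shared knowledge than this single regular expression, which is an assumption for demonstration purposes.

```python
import re

# Hypothetical sketch: mapping the preliminary ASR interpretation
# "Find 'Superstylin' by Groove Armada" onto a music-search intent.

SEARCH_PATTERN = re.compile(
    r"^find\s+['\"]?(?P<title>.+?)['\"]?\s+by\s+(?P<artist>.+)$",
    re.IGNORECASE)

def parse_music_search(utterance: str):
    match = SEARCH_PATTERN.match(utterance.strip())
    if not match:
        return None   # not a search request in this toy grammar
    return {
        "intent": "search",
        "context": "music",
        "criteria": {"title": match.group("title"),
                     "artist": match.group("artist")},
    }

print(parse_music_search("Find 'Superstylin' by Groove Armada"))
# {'intent': 'search', 'context': 'music',
#  'criteria': {'title': 'Superstylin', 'artist': 'Groove Armada'}}
```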
- a subsequent multi-modal interaction may include the utterance “Share this with my wife,” “Dedicate it to Charlene,” “That's the one, I want to pass along to some friends,” or another suitable utterance that reflects a request to dedicate the content.
- the conversational language processor 630 a may invoke the content dedication application 634 a , which may present an option to dedicate the content through the output device 615 a together with the results of the search.
- the request to dedicate the content may also be provided in a non-speech input, such as a button press or touch screen selection of the option to dedicate the content.
- the content dedication application 634 a may be invoked to process the content dedication request.
- the content dedication application 634 a may capture a natural language utterance that contains the dedication.
- the content dedication application 634 a may provide a prompt through the output device 615 a that instructs the user to provide the dedication utterance (e.g., a visual or audible prompt instructing the user to begin speaking, to speak after an audible beep, etc.).
- the user may then provide the dedication utterance through the voice-enabled input device 615 a , and the dedication utterance may then be converted into an electronic signal that the content dedication application 634 a captures for the dedication (e.g., “Dear Charlene, I was listening to this song and I thought of you. Enjoy!”).
- the content dedication application 634 a may further prompt the user to provide any additional tags for the dedicated content (e.g., an image or a picture to be inserted as album art for the dedicated content, a natural language utterance that includes information to be inserted as voice-tags for the dedicated content, a non-speech input or data input that includes information to insert in tags for the dedicated content, etc.).
- the content dedication application 634 a may then prompt the user to identify a recipient 690 of the dedication, wherein the user may provide any suitable multi-modal input that includes an e-mail address, a telephone number, an address book entry, or other information identifying the recipient 690 of the dedication.
- the content dedication application 634 a may then route the request, including the dedication utterance, the additional tags (if any), and the information identifying the recipient 690 to the content dedication system 665 through the messaging interface 650 .
- the dedication utterance may be converted into encoded audio that can be communicated through the messaging interface 650 , whereby the content dedication system 665 can insert the encoded audio corresponding to the dedication utterance within the dedicated content and/or verbally annotate the dedicated content with the encoded audio.
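- a minimal sketch of such a dedication request message is shown below, assuming a JSON payload with base64-encoded audio so that the binary utterance can travel through the messaging interface; the schema and field names are hypothetical, as the disclosure only requires that the dedication utterance, any tags, and the recipient information be carried in the request.

```python
import base64
import json
from typing import Optional

# Hypothetical sketch of the dedication request routed through the
# messaging interface 650. All field names are illustrative assumptions.

def build_dedication_request(encoded_audio: bytes, content_id: str,
                             recipient: dict,
                             tags: Optional[dict] = None) -> str:
    return json.dumps({
        "type": "content_dedication",
        "content_id": content_id,                     # the identified song
        "dedication_audio": base64.b64encode(encoded_audio).decode("ascii"),
        "audio_format": "mp3",                        # lossy, to save bandwidth
        "tags": tags or {},                           # e.g. album art, voice-tags
        "recipient": recipient,                       # e-mail, phone, etc.
    })

request = build_dedication_request(
    encoded_audio=b"\x00\x01",                        # stand-in for MP3 bytes
    content_id="superstylin-groove-armada",
    recipient={"email": "charlene@example.com"},
    tags={"album_art": "beach.jpg"},
)
print(json.loads(request)["recipient"])
```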
- the dedication utterance may be interpreted and parsed to transcribe one or more words or phrases from the dedication utterance, wherein the transcribed words or phrases may provide a textual annotation for the dedicated content (e.g., the textual annotation may be inserted within metadata for the dedicated content, such as an ID3 Comments tag).
- any utterances to be inserted as voice-tags for the dedicated content may provide further verbal annotations for the dedicated content, or such utterances may be transcribed to provide further textual annotations for the dedicated content.
- verbal annotations, textual annotations, and other types of annotations may be created and associated with the dedicated content using techniques described in co-pending U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005, the contents of which are hereby incorporated by reference in their entirety.
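- for example, a textual annotation could be written into an ID3 Comments (COMM) frame with the third-party mutagen library, as sketched below; the file path and dedication text are placeholders, and the sketch assumes the dedicated MP3 file already exists on disk.

```python
# Sketch: inserting a transcribed dedication as an ID3 "Comments" (COMM)
# frame, using the third-party mutagen library (pip install mutagen).
from mutagen.id3 import COMM, ID3, ID3NoHeaderError

def annotate_with_dedication(mp3_path: str, dedication_text: str) -> None:
    try:
        tags = ID3(mp3_path)
    except ID3NoHeaderError:
        tags = ID3()                      # the file had no ID3 header yet
    tags.add(COMM(encoding=3,             # 3 = UTF-8
                  lang="eng",
                  desc="dedication",
                  text=dedication_text))
    tags.save(mp3_path)

annotate_with_dedication("dedicated_song.mp3",
                         "Dear Charlene, I was listening to this song "
                         "and I thought of you. Enjoy!")
```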
- the content dedication system 665 may invoke a similar content dedication application 634 b on the voice-enabled server 640 to process the request.
- the content dedication application 634 b on the voice-enabled server 640 may identify the content requested for dedication and initiate a transaction for the content requested for dedication.
- the content dedication request received at the content dedication system 665 may include an identification of the content to be dedicated.
- the content dedication system 665 may use shared knowledge relating to the dedication request to identify the content to be dedicated. For example, the content dedication system 665 may invoke one or more local natural language processing components (e.g., ASR 620 b , conversational language processor 630 b or 630 c , etc.), search one or more local data repositories 636 b , interact with one or more content providers 680 over a network 670 , pull data from a satellite radio system that played the requested content at the voice-enabled client device 610 or another device in the hybrid processing environment, or otherwise consult available resources that can be used to identify the content to be dedicated.
- the content dedication system 665 may communicate with a billing system 638 to identify one or more purchase options for the dedication request and process an appropriate transaction for the dedication request.
- the content dedication system 665 may generally support various different purchase options to provide users with flexibility in the manner of requesting content dedications, including a buy-to-own purchase option, a pay-to-play purchase option, a paid subscription purchase option, or other appropriate options.
- the purchase options may be modeled on techniques for providing natural language services and subscriptions described in U.S. patent application Ser. No. 10/452,147, entitled “Systems and Methods for Responding to Natural Language Speech Utterance,” which issued as U.S. Pat. No. 7,398,209 on Jul. 8, 2008, the contents of which are hereby incorporated by reference in their entirety.
- the content dedication system 665 may purchase full rights to the content from an appropriate content provider 680 , wherein the billing system 638 may then charge the user of the voice-enabled client device 610 a particular amount that encompasses the cost for purchasing the content from the content provider 680 plus a service charge for dedicating the content, tagging the dedicated content, delivering the dedicated content to the recipient 690 , or any other appropriate services rendered for the content dedication request.
- the billing system 638 may charge the user in a similar manner under the pay-to-play purchase option, except that the rights purchased from the content provider 680 may be limited to a single play, such that the cost for purchasing the content from the content provider 680 may be somewhat less under the pay-to-play purchase option.
- under the paid subscription purchase option, the user may pay a periodic service charge to the content dedication system 665 that permits the user to make a predetermined number of content dedications or an unlimited number of content dedications in a subscription period, or the subscription purchase option may permit the user to make content dedications in any other suitable manner (e.g., different subscription levels having different content dedication options may be offered, such that the user may select a subscription level that meets the user's particular needs).
- in such implementations, the billing system 638 may only charge the user the costs for obtaining the rights to the content, which may be purchased under either the buy-to-own option or the pay-to-play option, or alternatively the user may be charged nothing if the user already owns the content to be dedicated.
- a first subscription level may cost a first amount to permit the user to make a particular number of content dedications in a subscription period, while a second subscription level may cost a higher amount to permit the user to make an unlimited number of content dedications in the subscription period, while still other subscription levels having different terms may be offered, as will be apparent.
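- the purchase options described above might be priced as in the following sketch, in which all amounts, option names, and the quota-based subscription model are illustrative assumptions rather than terms defined in this disclosure.

```python
# Hypothetical sketch of how the billing system 638 might price a
# dedication under the purchase options described above.

CONTENT_COST = {"buy_to_own": 0.99, "pay_to_play": 0.25}
SERVICE_CHARGE = 0.50            # dedication, tagging, and delivery fee

def price_dedication(option: str, dedications_used: int = 0,
                     subscription_quota: int = 0) -> float:
    if option in CONTENT_COST:
        return CONTENT_COST[option] + SERVICE_CHARGE
    if option == "subscription":
        # Within quota (or an unlimited quota of -1): no per-dedication
        # fee; content rights, if needed, would be billed separately.
        if subscription_quota == -1 or dedications_used < subscription_quota:
            return 0.0
        return SERVICE_CHARGE    # overage beyond the subscribed quota
    raise ValueError(f"unknown purchase option: {option}")

print(price_dedication("buy_to_own"))                            # 1.49
print(price_dedication("subscription", dedications_used=3,
                       subscription_quota=10))                   # 0.0
```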
- a service provider associated with the content dedication system 665 may negotiate an agreement with the content provider 680 to determine the manner in which revenues for content transactions will be shared between the content dedication system 665 and the content provider 680 .
- the agreement may provide that the content provider 680 may keep all of the revenue for transactions that include purchasing content from the content provider 680 and that the service provider associated with the content dedication system 665 may recoup costs for such transactions from users.
- the agreement may provide that the content provider 680 and the service provider associated with the content dedication system 665 may share the revenue for the transactions that include purchasing content from the content provider 680 .
- the agreement may generally include any suitable arrangement that defines the manner in which the content provider 680 and the service provider associated with the content dedication system 665 manage the revenue for the transactions that include purchasing content from the content provider 680 , while the service provider associated with the content dedication system 665 may manage billing users for the natural language aspects of the content dedication service according to the techniques described in further detail above.
- FIG. 7 illustrates a flow diagram of an exemplary method for providing a natural language content dedication service at a voice-enabled client device.
- the method shown in FIG. 7 may be used to provide the natural language content dedication service to any suitable voice-enabled client device having a suitable combination of input and output devices that can receive natural language multi-modal interactions and provide responses to the natural language multi-modal interactions, wherein the input and output devices may be further arranged to receive any other suitable type of input and provide any other suitable type of output.
- the voice-enabled client device may comprise a mobile phone that includes a keypad input device, a touch screen input device, or other input mechanisms in addition to any input microphones or other suitable input devices that can receive voice signals, and may further include an output display device in addition to any output microphones or other suitable output devices that can output audible signals.
- a user of the voice-enabled client device may be listening to music, watching video, or otherwise interacting with content through the input and output devices, wherein an operation 710 may include the voice-enabled client device receiving a multi-modal natural language interaction to engage in a transaction to dedicate the music, video, or other content.
- the voice-enabled client device may invoke an Automatic Speech Recognizer (ASR) to generate one or more preliminary interpretations of the utterance.
- the ASR may then provide the preliminary interpretations to a conversational language processor, which may attempt to determine an intent for the multi-modal interaction.
- the multi-modal interaction received in operation 710 may include the utterance “Buy that song and send it to Michael.”
- the ASR may generate a preliminary interpretation that includes words and/or phrases such as “Buy,” “that song,” “send it,” and “to Michael.”
- the ASR may then provide the preliminary interpretation to the conversational language processor, which may determine that the word and/or phrase combination of “Buy” and “send it” indicates that the most likely intent of the interaction includes a content dedication request, while the word and/or phrase combination of “that song” and “to Michael” provide criteria for the intended content and recipient for the dedication.
- the conversational language processor may establish a device context, a dedication context, a content or music context, an address book context, or other suitable contexts in an attempt to resolve the request.
- the device context may enable the conversational language processor to retrieve data from the voice-enabled client device or another suitable device that provides the user's intended meaning for the phrase “that song” (e.g., the user may be referring to a song playing on the user's satellite radio device, such that the conversational language processor can identify the song that was playing when the device interaction was received in operation 710 ).
- the address book context may enable the conversational language processor to identify “Michael,” the intended recipient of the dedication request.
- the conversational language processor may then receive appropriate results for resolving the intent of the request and present the results to the user through the output device (e.g., displaying information requesting that the user confirm that the content and recipient was correctly identified, playing a sample audio clip of the song, displaying options to purchase the song, recommending similar songs or similar artists, etc.). Accordingly, in response to detecting the content dedication request in operation 720 and identifying the relevant criteria identifying the content to be dedicated and the intended recipient, the content dedication application may be invoked to process the content dedication request.
- the content dedication application may then process the content dedication request in an operation 760 , which may include prompting the user to identify the recipient of the dedication.
- the content dedication application may request information identifying the recipient of the dedication in response to the user not having already identified the recipient, in response to the user identifying the recipient in a manner that includes ambiguity or other criteria that cannot be resolved without further information, to distinguish among different contact information known for the recipient, or in response to other appropriate circumstances.
- the user may provide any suitable multi-modal input that includes an e-mail address, a telephone number, an address book entry, or other criteria that can be used to uniquely identify the information for contacting the recipient of the dedication.
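- one simplified way to resolve a spoken recipient name against an address book, including the disambiguation prompt described above, is sketched below; the address book structure and matching rule are illustrative assumptions.

```python
# Hypothetical sketch: resolving a spoken name such as "Michael" against
# an address book and detecting when further disambiguation is needed.

ADDRESS_BOOK = [
    {"name": "Michael Smith", "email": "msmith@example.com"},
    {"name": "Michael Jones", "phone": "+15551234567"},
    {"name": "Charlene Doe", "email": "charlene@example.com"},
]

def resolve_recipient(spoken_name: str, address_book: list) -> dict:
    matches = [entry for entry in address_book
               if spoken_name.lower() in entry["name"].lower()]
    if len(matches) == 1:
        return {"status": "resolved", "recipient": matches[0]}
    if not matches:
        return {"status": "not_found", "prompt": f"Who is {spoken_name}?"}
    # Multiple entries: prompt the user to disambiguate, as described above.
    return {"status": "ambiguous",
            "prompt": f"Which {spoken_name}?",
            "candidates": [m["name"] for m in matches]}

print(resolve_recipient("Michael", ADDRESS_BOOK)["status"])   # ambiguous
```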
- processing the content dedication request in operation 760 may further include the content dedication application routing the request, including the dedication utterance, the additional tags (if any), and the information for contacting the recipient to the content dedication system through the messaging interface, wherein the content dedication system may then process a transaction for the content dedication request, as will be described in greater detail below.
- FIG. 8 illustrates a flow diagram of an exemplary method for providing a natural language content dedication service.
- the method for providing a natural language content dedication service may include an operation 810 in which a content dedication system may receive a natural language content dedication request through a messaging interface.
- the content dedication system may receive the natural language content dedication request from a content dedication application that executes on a voice-enabled client device.
- the natural language content dedication request received in operation 810 may generally include a natural language dedication utterance, information to be inserted within one or more tags for content to be dedicated, and information identifying a recipient of the dedicated content.
- the dedication utterance received in operation 810 may include encoded audio that the content dedication system can insert within the dedicated content or use to verbally annotate the dedicated content.
- the content dedication system may interpret and parse the dedication utterance to transcribe one or more words or phrases from the dedication utterance, wherein the transcribed words or phrases may provide a textual annotation for the dedicated content (e.g., the textual annotation may be inserted within metadata for the dedicated content, such as an ID3 Comments tag).
- any utterances to be inserted as voice-tags for the dedicated content may provide further verbal annotations for the dedicated content, or such utterances may be transcribed to provide further textual annotations for the dedicated content.
- verbal annotations, textual annotations, and other types of annotations may be created and associated with the dedicated content using techniques described in co-pending U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005, the contents of which are hereby incorporated by reference in their entirety.
- operation 820 may include the content dedication system invoking one or more natural language processing components (e.g., an ASR, a conversational language processor, etc.), searching one or more data repositories, interacting with one or more content providers, pulling data from a satellite radio system that played the requested content at the voice-enabled client device or another device in a hybrid processing environment, or otherwise consulting available resources that can be used to identify the content to be dedicated.
- the hybrid processing environment may include a device having an application that can identify played content (e.g., a Shazam® listening device that a user can hold near a speaker to identify content playing through the speaker).
- operation 820 may generally include the content dedication system communicating with any suitable device, system, application, or other resource to identify the content requested for dedication.
- an operation 830 may include the content dedication system identifying one or more purchase options for the dedication request.
- operation 830 may include the content dedication system communicating with a billing system that supports various purchase options to provide users with flexibility in the manner of requesting content dedications.
- the billing system may support content dedication purchase options that include a buy-to-own purchase option, a pay-to-play purchase option, a paid subscription purchase option, or other appropriate options.
- an operation 840 may include the content dedication system processing a transaction for the content identified in operation 820 .
- the transaction processed in operation 840 may include purchasing full rights to the requested content from an appropriate content provider, wherein the billing system may then charge the user of the voice-enabled client device an appropriate amount that includes costs for purchasing the content from the content provider, adding the natural language utterance dedicating the content, tagging the dedicated content, delivering the dedicated content to the recipient, or any other appropriate services rendered for the content dedication request.
- in response to determining that the requested content dedication includes a selection of the paid subscription purchase option, the user may pay a periodic service charge to the content dedication system that permits the user to make a predetermined number of content dedications in a subscription period, an unlimited number of content dedications in the subscription period, or otherwise make content dedications in accordance with terms defined in a subscription (e.g., different subscription levels having different content dedication options may be offered, such as a subscription level that only permits utterance dedications, another subscription level that further permits interpreting and parsing utterance dedications, etc.).
- the content transaction may be processed in operation 840 according to the purchase options identified in operation 830 , and the content dedication system may then further process the dedication request to customize the dedicated content according to criteria provided in the request previously received in operation 810 .
- an operation 850 may include the content dedication system inserting the natural language dedication utterance into the dedicated content, verbally annotating the dedicated content with the dedication utterance, or otherwise associating the dedicated content with the dedication utterance.
- an operation 860 may include the content dedication system determining whether any additional tags have been specified for the dedicated content.
- the dedication request may identify an image or picture to insert as album art for the dedicated content, one or more natural language utterances, non-voice, and/or data inputs to be transcribed into text that can be inserted in tags for the dedicated content, or any other suitable information that can be inserted within or associated with metadata for the dedicated content.
- an operation 870 may include inserting such additional tags into the dedicated content.
- the content dedication system may send a content dedication message to the recipient of the dedication in an operation 880 .
- the content dedication message may include a Short Message Service (SMS) text message, an electronic mail message, an automated telephone call managed by a text-to-speech engine, or any other appropriate message that can be appropriately delivered to the recipient (e.g., the message may include a link that the recipient can select to stream, download, or otherwise access the dedicated content, the dedication utterance, etc.).
- the content dedication message may generally notify the recipient that content has been dedicated to the recipient and provide various mechanisms for the recipient to access the content dedication, as will be apparent.
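- a minimal sketch of constructing such a content dedication message is shown below, assuming an SMS or e-mail channel and a hypothetical link format; the domain, access token, and message wording are placeholders, since the disclosure permits any appropriate delivery mechanism, including automated text-to-speech telephone calls.

```python
# Hypothetical sketch of the content dedication message sent to the
# recipient in operation 880. The link format, sender resolution, and
# channel selection rule are illustrative assumptions.

def build_notification(recipient: dict, sender_name: str,
                       content_title: str, access_token: str) -> dict:
    link = f"https://dedications.example.com/listen/{access_token}"
    body = (f"{sender_name} has dedicated \"{content_title}\" to you. "
            f"Stream or download it here: {link}")
    if "phone" in recipient:
        return {"channel": "sms", "to": recipient["phone"], "body": body}
    return {"channel": "email", "to": recipient["email"],
            "subject": f"A song dedication from {sender_name}",
            "body": body}

print(build_notification({"email": "charlene@example.com"}, "Dave",
                         "Superstylin", "abc123"))
```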
- Implementations of the invention may be made in hardware, firmware, software, or various combinations thereof.
- the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors.
- a machine-readable medium may include various mechanisms for storing or transmitting information in a form readable by a machine (e.g., a computing device).
- a machine-readable storage medium may include read only memory, random access memory, magnetic disk storage media, optical storage media, flash memory devices, or other storage media.
- machine-readable transmission media may include forms of propagated signals, such as carrier waves, infrared signals, digital signals, or other transmission media.
- firmware, software, routines, or instructions may be described in the above disclosure in terms of specific exemplary aspects and implementations of the invention, and as performing certain actions. However, it will be apparent that such descriptions are merely for convenience, and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, or instructions.
Description
- This application is a continuation of U.S. patent application Ser. No. 12/943,699, entitled “System and Method for Providing a Natural Language Content Dedication Service,” filed Nov. 10, 2010, which claims the benefit of U.S. Provisional Patent Application Ser. No. 61/259,820, entitled “System and Method for Providing a Natural Language Content Dedication Service,” filed Nov. 10, 2009, the contents of which are hereby incorporated by reference in their entirety.
- As technology has progressed in recent years, consumer electronic devices have emerged to become nearly ubiquitous in the everyday lives of many people. To meet the increasing demand that has resulted from growth in the functionality and mobility of mobile phones, navigation devices, embedded devices, and other such devices, many devices offer a wealth of features and functions in addition to core applications. Greater functionality also introduces trade-offs, however, including learning curves that often inhibit users from fully exploiting all of the capabilities of their electronic devices. For example, many existing electronic devices include complex human to machine interfaces that may not be particularly user-friendly, which can inhibit mass-market adoption for many technologies. Moreover, cumbersome interfaces often result in otherwise desirable features being difficult to find or use (e.g., because of menus that are complex or otherwise tedious to navigate). As such, many users tend not to use, or even know about, many of the potential capabilities of their devices.
- As such, the increased functionality of electronic devices often tends to be wasted, as market research suggests that many users use only a fraction of the features or applications available on a given device. Moreover, in a society where wireless networking and broadband access are increasingly prevalent, consumers tend to naturally desire seamless mobile capabilities from their electronic devices. Thus, as consumer demand intensifies for simpler mechanisms to interact with electronic devices, cumbersome interfaces that prevent quick and focused interaction become an important concern. Nevertheless, the ever-growing demand for mechanisms to use technology in intuitive ways remains largely unfulfilled.
- One approach towards simplifying human to machine interactions in electronic devices has included the use of voice recognition software, which has the potential to enable users to exploit features that would otherwise be unfamiliar, unknown, or difficult to use. For example, a recent survey conducted by the Navteq Corporation, which provides data used in a variety of applications such as automotive navigation and web-based applications, demonstrates that voice recognition often ranks among the features most desired by consumers of electronic devices. Even so, existing voice user interfaces, when they actually work, still require significant learning on the part of the user.
- For example, many existing voice user interfaces only support requests formulated according to specific command-and-control sequences or syntaxes. Furthermore, many existing voice user interfaces cause user frustration or dissatisfaction because of inaccurate speech recognition. Similarly, by forcing a user to provide pre-established commands or keywords to communicate requests in ways that a system can understand, existing voice user interfaces do not effectively engage the user in a productive, cooperative dialogue to resolve requests and advance a conversation towards a satisfactory goal (e.g., when users may be uncertain of particular needs, available information, device capabilities, etc.). As such, existing voice user interfaces tend to suffer from various drawbacks, including significant limitations on engaging users in a dialogue in a cooperative and conversational manner.
- Additionally, many existing voice user interfaces fall short in utilizing information distributed across different domains, devices, and applications in order to resolve natural language voice-based inputs. Thus, existing voice user interfaces suffer from being constrained to a finite set of applications for which they have been designed, or to devices on which they reside. Although technological advancement has resulted in users often having several devices to suit their various needs, existing voice user interfaces do not adequately free users from device constraints. For example, users may be interested in services associated with different applications and devices, but existing voice user interfaces tend to restrict users from accessing the applications and devices as they see fit. Moreover, users typically can only practicably carry a finite number of devices at any given time, yet content or services associated with users' devices other than those currently being used may be desired in various circumstances.
- Accordingly, although users tend to have varying needs, where content or services associated with different devices may be desired in various contexts or environments, existing voice technologies tend to fall short in providing an integrated environment in which users can request content or services associated with virtually any device or network. As such, constraints on information availability and device interaction mechanisms in existing voice services environments tend to prevent users from experiencing technology in an intuitive, natural, and efficient way. For instance, when a user wishes to perform a given function using a given electronic device, but does not necessarily know how to go about performing the function, the user typically cannot engage in cooperative multi-modal interactions with the device to simply utter words in natural language to request the function.
- Furthermore, relatively simple functions can often be tedious to perform using electronic devices that do not have voice recognition capabilities. For example, purchasing new ring-tones for a mobile phone tends to be a relatively straightforward process, but users must typically navigate several menus and press many different buttons in order to complete the process. In another example, users often listen to music or interact with other media in mobile environments, such that interest in purchasing music, media, or other content may be fleeting or often occur on an impulse basis. Whereas existing human to machine interfaces that lack voice recognition capabilities typically fall short in providing mechanisms that can readily meet this demand, adding voice recognition to an electronic device can substantially simplify human to machine interaction in a manner that can meet user needs, improve experience, and satisfy potentially transient consumer interests. As such, interaction with electronic devices could be made far more efficient if users were provided with the ability to use natural language in order to exploit buried or otherwise difficult to use functionality.
- According to one aspect of the invention, a system and method for providing a natural language content dedication service may generally operate in a voice services environment that includes one or more electronic devices that can receive multi-modal natural language device interactions. In particular, providing the natural language content dedication service may generally include detecting multi-modal device interactions that include requests to dedicate content, identifying the content requested for dedication from natural language utterances included in the multi-modal device interactions, processing transactions for the content requested for dedication, processing natural language to customize the content for recipients of the dedications, and delivering the customized content to the recipients of the dedications.
- According to one aspect of the invention, the natural language content dedication service may operate in a hybrid processing environment, which may generally include a plurality of multi-modal devices configured to cooperatively interpret and process natural language utterances included in the multi-modal device interactions. For example, a virtual router may receive messages that include encoded audio corresponding to natural language utterances contained in the multi-modal device interactions, which may be received at one or more of the plurality of multi-modal devices in the hybrid processing environment. The virtual router may then analyze the encoded audio to select a cleanest sample of the natural language utterances and communicate with one or more other devices in the hybrid processing environment to determine an intent of the multi-modal device interactions. The virtual router may further coordinate resolving the multi-modal device interactions based on the intent of the multi-modal device interactions.
- According to one aspect of the invention, a method for providing the natural language content dedication service may comprise detecting a multi-modal device interaction at an electronic device, wherein the multi-modal device interaction may include at least a natural language utterance. One or more messages containing information relating to the multi-modal device interaction may then be communicated to the virtual router through a messaging interface. The electronic device may then receive one or more messages from the virtual router through the messaging interface, wherein the messages may contain information relating to an intent of the multi-modal device interaction. As such, the multi-modal device interaction may be resolved at the electronic device based on the information contained in the one or more messages received from the virtual router.
- According to one aspect of the invention, a system for providing the natural language content dedication service may generally include a voice-enabled client device that can communicate with a content dedication system through the messaging interface. The content dedication system may include a voice-enabled server, which may be configured to communicate with the virtual router through another messaging interface, or the content dedication system may alternatively include the virtual router. In addition, the content dedication system may further include a billing system for processing transactions relating to the content dedication service. The natural language content dedication service may be provided on any suitable voice-enabled client device having a suitable combination of input and output devices that can receive and respond to multi-modal device interactions that include natural language utterances, and the input and output devices may be further arranged to receive and respond to any other suitable type of input and output.
- According to one aspect of the invention, operating the natural language content dedication service may generally include a user of the voice-enabled client device listening to music, watching video, or otherwise interacting with content and providing a multi-modal natural language request to engage in a transaction to dedicate the music, video, or other content. Furthermore, the voice-enabled client device may be included within the hybrid processing environment that includes the plurality of multi-modal devices, whereby the content dedication request may relate to content played on a different device from the voice-enabled client device, although the content dedication request may relate to any suitable content (i.e., the request need not necessarily relate to played content, as users may provide natural language to request content dedications for any suitable content, including a particular song or video that the user may be thinking about).
- According to one aspect of the invention, in response to the voice-enabled client device receiving a multi-modal interaction that includes a natural language utterance, the voice-enabled client device may invoke an Automatic Speech Recognizer (ASR) to generate a preliminary interpretation of the utterance. The ASR may then provide the preliminary interpretation of the utterance to a conversational language processor, which may attempt to determine an intent for the multi-modal interaction. For example, to determine the intent for the multi-modal interaction, the conversational language processor may determine a most likely context for the interaction from the preliminary interpretation of the utterance, any accompanying non-speech inputs in the multi-modal interaction that relate to the utterance, contexts associated with prior requests, short-term and long-term shared knowledge, or any other suitable information for interpreting the multi-modal interaction. Thus, in response to the conversational language processor determining that the intent for the multi-modal interaction relates to a content dedication request, a content dedication application may be invoked to resolve the content dedication request.
- According to one aspect of the invention, to resolve the intent of the content dedication request, the conversational language processor may search one or more data repositories that contain content information to identify content matching criteria contained in the content dedication request. Moreover, the conversational language processor may further cooperate with other devices in the hybrid processing environment to search for or otherwise identify the content requested for dedication (e.g., in response to a local data repository not yielding adequate results, another device in the hybrid processing environment having a larger content data repository than the client device may be invoked). The conversational language processor may then receive appropriate results identifying the content requested for dedication and present the results to the user through the output device (e.g., displaying information about the content, playing a sample clip of the content, displaying options to purchase the content, recommending similar content, etc.). The results presented through the output device may further include an option to confirm the content dedication, wherein the user may confirm the content dedication in a natural language utterance, a non-speech input, or any suitable combination thereof. The content dedication application may then be invoked to process the content dedication request.
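- As a hedged illustration of the search behavior described above, the following sketch resolves dedication criteria against a local content repository and falls back to a larger remote repository when the local search yields no adequate results. The repository layout and the search_repository and identify_content helpers are assumptions made for this example.

    # Illustrative sketch (not the patented implementation) of resolving a
    # content dedication request against a local repository, falling back to a
    # larger remote repository in the hybrid processing environment when needed.

    def search_repository(repository, criteria):
        """Return entries whose title or artist matches every criterion."""
        hits = []
        for entry in repository:
            text = (entry["title"] + " " + entry["artist"]).lower()
            if all(term.lower() in text for term in criteria):
                hits.append(entry)
        return hits

    def identify_content(criteria, local_repo, remote_repo):
        # Try the on-device repository first to keep processing on-board.
        results = search_repository(local_repo, criteria)
        if not results:
            # Local search yielded nothing adequate; invoke the device with
            # the larger content data repository.
            results = search_repository(remote_repo, criteria)
        return results

    local = [{"title": "Our Song", "artist": "Example Band"}]
    remote = local + [{"title": "Another Song", "artist": "Other Band"}]
    print(identify_content(["another"], local, remote))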
- According to one aspect of the invention, to process the content dedication request, the content dedication application may capture a natural language utterance that contains the dedication to accompany the content. The user may then provide the dedication utterance through the voice-enabled input device, and the dedication utterance may then be converted into an electronic signal that the content dedication application captures for the dedication. In addition, the content dedication application may prompt the user to provide any additional tags for the dedicated content (e.g., an image to insert as album art in the dedicated content, an utterance to insert or transcribe into metadata tags for the dedicated content, a non-speech or data input to insert in the metadata tags for the dedicated content, etc.). The content dedication application may further prompt the user to identify a recipient of the dedication, wherein the user may provide any suitable multi-modal input that includes information identifying the recipient of the content dedication. The content dedication application may then route the request to the content dedication system, which may process a transaction for the content dedication.
- According to one aspect of the invention, processing the transaction for the content dedication may include the content dedication system receiving encoded audio corresponding to the dedication utterance through the messaging interface. The content dedication system may then insert the encoded audio corresponding to the dedication utterance within the dedicated content, verbally annotate the dedicated content with the encoded audio, and/or transcribe the dedication utterance into a textual annotation for the dedicated content. Similarly, any utterances to insert into the metadata tags for the dedicated content may provide further verbal annotations for the dedicated content, and any such utterances may also be transcribed into text to provide further textual annotations for the dedicated content. The content dedication system may then invoke a content dedication application hosted on the voice-enabled server to process the request. In particular, the content dedication application hosted on the voice-enabled server may identify the content requested for dedication (e.g., if the content dedication application on the voice-enabled client device was unable to suitably identify the requested content; alternatively, if the client-side application was able to do so, the information communicated from the voice-enabled client device may itself identify the requested content). In response to identifying the content to be dedicated, the content dedication system may then communicate with the billing system to process an appropriate transaction for the dedication request based on a selected purchase option for the dedication request.
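- The following is a minimal sketch, under assumed data structures, of how the dedicated content might be customized with the verbal and textual annotations described above. The annotate_content helper and the metadata field names are hypothetical; no particular tag format is implied.

    # A minimal sketch of customizing purchased content with a dedication,
    # assuming the encoded dedication audio and its transcription have already
    # been produced; the metadata layout is hypothetical, not a specific format.

    import base64

    def annotate_content(content, dedication_audio, transcription, extra_tags=None):
        """Attach verbal and textual annotations to the dedicated content."""
        metadata = dict(content.get("metadata", {}))
        # Verbal annotation: embed the encoded dedication utterance itself.
        metadata["dedication_audio_b64"] = base64.b64encode(dedication_audio).decode()
        # Textual annotation: the dedication utterance transcribed to text.
        metadata["dedication_text"] = transcription
        # Any additional tags (e.g., album art, utterances transcribed into tags).
        metadata.update(extra_tags or {})
        return {**content, "metadata": metadata}

    song = {"title": "Our Song", "audio": b"...encoded song audio..."}
    customized = annotate_content(song, b"...encoded utterance...",
                                  "Happy birthday! This one is for you.",
                                  {"album_art": "cover.jpg"})
    print(customized["metadata"]["dedication_text"])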
- According to one aspect of the invention, the content dedication system may support various purchase options to provide users with flexibility in requesting content dedications. For example, a buy-to-own purchase option may include the content dedication system purchasing full rights to the content from an appropriate content provider. The billing system may then charge the user of the voice-enabled client device an appropriate amount that encompasses the cost for purchasing the rights to the content from the content provider and a service charge for customizing the content with the dedication and any additional tags and subsequently delivering the customized content to the dedication recipient. The user may be charged in a similar manner under a pay-to-play purchase option, except that the rights purchased from the content provider may be limited (e.g., to a predetermined number of plays). Thus, the cost for purchasing the content from the content provider may be somewhat less under the pay-to-play purchase option, such that the billing system may charge the user somewhat less under the pay-to-play purchase option. Under a paid subscription purchase option, the user may pay a periodic service charge to the content dedication system that permits the user to make content dedications based on terms of the subscription (e.g., a predetermined number or an unlimited number of content dedications may be made in a subscription period depending on the particular terms of the user's subscription). Furthermore, other purchase options may be suitably employed, as will be apparent.
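- As a purely illustrative sketch of the billing logic described above, the following example computes a charge under each purchase option. All prices, discounts, and rules are invented for the example and are not terms of the disclosure.

    # Hypothetical illustration of how a billing system might compute the
    # charge for a dedication under the purchase options described above.

    def compute_charge(option, content_cost, service_fee=0.99,
                       play_limit=None, subscriber=False):
        if option == "buy-to-own":
            # Full rights purchased from the content provider, plus the
            # service charge for customization and delivery.
            return content_cost + service_fee
        if option == "pay-to-play":
            # Limited rights (e.g., a predetermined number of plays) cost less.
            assert play_limit is not None, "pay-to-play requires a play limit"
            return round(content_cost * 0.5, 2) + service_fee
        if option == "subscription":
            # Dedications are covered by the periodic subscription charge.
            return 0.0 if subscriber else service_fee
        raise ValueError(f"unknown purchase option: {option}")

    print(compute_charge("buy-to-own", 1.29))                 # 2.28
    print(compute_charge("pay-to-play", 1.29, play_limit=5))  # discounted rights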
- According to one aspect of the invention, a service provider associated with the content dedication system may negotiate agreements with content providers to determine the manner in which revenues for content transactions will be shared between the content dedication system and the content providers. For example, an agreement may permit a particular content provider to keep all of the revenue for transactions that include purchasing content from the content provider, while the service provider associated with the content dedication system agrees to recoup any such costs from users. In another example, an agreement may share revenue for content transactions between a content provider and the service provider. Thus, the agreements may generally include any suitable arrangements that define the manner in which content providers and the service provider agree to divide revenue for transactions in which the content dedication system purchases content from the content provider, while the service provider associated with the content dedication system may be responsible for billing users for any natural language aspects of the content dedication service (e.g., for inserting dedication utterances, transcribing utterances into metadata tags, delivering the content to recipients, etc.).
- According to one aspect of the invention, in response to purchasing the requested content from an appropriate content provider and determining the appropriate billing options for the content dedication, the content dedication system may insert the natural language dedication utterance into the dedicated content, verbally annotate the dedicated content with the dedication utterance, or otherwise associate the content with the dedication utterance. Furthermore, the content dedication system may determine whether any additional tags have been specified for the dedicated content and insert such additional tags into the dedicated content, as appropriate (e.g., inserting an image or picture into metadata tags corresponding to album art for the dedicated content, transcribing natural language utterances, non-voice inputs, and/or data inputs into text and inserting such text into the metadata tags for the dedicated content, etc.). The content dedication system may then send a content dedication message to the recipient of the dedication, wherein the message may include a link that the recipient can select to stream, download, or otherwise access the dedicated content, the dedication utterance, etc. Thus, the content dedication message may generally notify the recipient that content has been dedicated to the recipient and provide various mechanisms for the recipient to access the content dedication, as will be apparent. Furthermore, if the purchase option selected for the content dedication includes the buy-to-own purchase option or another purchase option that confers full rights to the dedicated content, the recipient may then own the full rights to the dedicated content; otherwise, the recipient may own rights with respect to the dedicated content based on whatever terms the selected purchase option provides.
- Other objects and advantages of the invention will be apparent based on the following drawings and detailed description.
- FIG. 1 illustrates a block diagram of an exemplary voice-enabled device that can be used for hybrid processing in a natural language voice services environment, according to one aspect of the invention.
- FIG. 2 illustrates a block diagram of an exemplary system for hybrid processing in a natural language voice services environment, according to one aspect of the invention.
- FIG. 3 illustrates a flow diagram of an exemplary method for initializing various devices that cooperate to perform hybrid processing in a natural language voice services environment, according to one aspect of the invention.
- FIGS. 4-5 illustrate flow diagrams of exemplary methods for hybrid processing in a natural language voice services environment, according to one aspect of the invention.
- FIG. 6 illustrates a block diagram of an exemplary system for providing a natural language content dedication service, according to one aspect of the invention.
- FIGS. 7-8 illustrate flow diagrams of exemplary methods for providing a natural language content dedication service, according to one aspect of the invention.
- According to one aspect of the invention, FIG. 1 illustrates a block diagram of an exemplary voice-enabled device 100 that can be used for hybrid processing in a natural language voice services environment. As will be apparent from the further description to be provided herein, the voice-enabled device 100 illustrated in FIG. 1 may generally include an input device 112, or a combination of input devices 112, which may enable a user to interact with the voice-enabled device 100 in a multi-modal manner. In particular, the input devices 112 may generally include any suitable combination of at least one voice input device 112 (e.g., a microphone) and at least one non-voice input device 112 (e.g., a mouse, touch-screen display, wheel selector, etc.). As such, the input devices 112 may include any suitable combination of electronic devices having mechanisms for receiving both voice-based and non-voice-based inputs (e.g., a microphone coupled to one or more of a telematics device, personal navigation device, mobile phone, VoIP node, personal computer, media device, embedded device, server, or other electronic device).
- In one implementation, the voice-enabled device 100 may enable the user to engage in various multi-modal conversational interactions, which the voice-enabled device 100 may process in a free-form and cooperative manner to execute various tasks, resolve various queries, or otherwise resolve various natural language requests included in the multi-modal interactions. For example, in one implementation, the voice-enabled device 100 may include various natural language processing components, including at least a voice-click module coupled to the one or more input devices 112, as described in further detail in co-pending U.S. patent application Ser. No. 12/389,678, entitled “System and Method for Processing Multi-Modal Device Interactions in a Natural Language Voice Services Environment,” filed Feb. 20, 2009, the contents of which are hereby incorporated by reference in their entirety. Thus, as will be described in further detail herein, the one or more input devices 112 and the voice-click module may be collectively configured to process various multi-modal interactions between the user and the voice-enabled device 100.
- For example, in one implementation, the multi-modal interactions may include at least one natural language utterance, wherein the natural language utterance may be converted into an electronic signal. The electronic signal may then be provided to an Automatic Speech Recognizer (ASR) 120, which may also be referred to as a speech recognition engine 120 and/or a multi-pass speech recognition engine 120. In response to receiving the electronic signal corresponding to the utterance, the ASR 120 may generate one or more preliminary interpretations of the utterance and provide the preliminary interpretations to a conversational language processor 130. Additionally, in one implementation, the multi-modal interactions may include one or more non-voice interactions with the one or more input devices 112 (e.g., button pushes, multi-touch gestures, point of focus or attention focus selections, etc.). As such, the voice-click module may extract context from the non-voice interactions and provide the context to the conversational language processor 130 for use in generating an interpretation of the utterance (i.e., via the dashed line illustrated in FIG. 1). As such, as described in greater detail below, the conversational language processor 130 may analyze the utterance and any accompanying non-voice interactions to determine an intent of the multi-modal interactions with the voice-enabled device 100.
- In one implementation, as noted above, the voice-enabled device 100 may include various natural language processing components that can support free-form utterances and/or other forms of non-voice device interactions, which may liberate the user from restrictions relating to the manner of formulating commands, queries, or other requests. As such, the user may provide the utterance to the voice input device 112 using any manner of speaking, and may further provide other non-voice interactions to the non-voice input device 112 to request any content or service available through the voice-enabled device 100. For instance, in one implementation, in response to receiving the utterance at the voice input device 112, the utterance may be processed using techniques described in U.S. patent application Ser. No. 10/452,147, entitled “Systems and Methods for Responding to Natural Language Speech Utterance,” which issued as U.S. Pat. No. 7,398,209 on Jul. 8, 2008, and co-pending U.S. patent application Ser. No. 10/618,633, entitled “Mobile Systems and Methods for Responding to Natural Language Speech Utterance,” filed Jun. 15, 2003, the contents of which are hereby incorporated by reference in their entirety. In addition, the user may interact with one or more of the non-voice input devices 112 to provide button pushes, multi-touch gestures, point of focus or attention focus selections, or other non-voice device interactions, which may provide further context or other information relating to the natural language utterances and/or the requested content or service.
- In one implementation, the voice-enabled device 100 may be coupled to one or more additional systems that may be configured to cooperate with the voice-enabled device 100 to interpret or otherwise process the multi-modal interactions that include combinations of natural language utterances and/or non-voice device interactions. For example, as will be described in greater detail below in connection with FIG. 2, the one or more additional systems may include one or more multi-modal voice-enabled devices having similar natural language processing capabilities to the voice-enabled device 100, one or more non-voice devices having data retrieval and/or task execution capabilities, and a virtual router that coordinates interaction among the voice-enabled device 100 and the additional systems. As such, the voice-enabled device 100 may include an interface to an integrated natural language voice services environment that includes a plurality of multi-modal devices, wherein the user may request content or services available through any of the multi-modal devices.
- For example, in one implementation, the conversational language processor 130 may include a constellation model 132 b that provides knowledge relating to content, services, applications, intent determination capabilities, and other features available in the voice services environment, as described in co-pending U.S. patent application Ser. No. 12/127,343, entitled “System and Method for an Integrated, Multi-Modal, Multi-Device Natural Language Voice Services Environment,” filed May 27, 2008, the contents of which are hereby incorporated by reference in their entirety. As such, the voice-enabled device 100 may have access to shared knowledge relating to natural language processing capabilities, context, prior interactions, domain knowledge, short-term knowledge, long-term knowledge, and cognitive models for the various systems and multi-modal devices, providing a cooperative environment for resolving the multi-modal interactions received at the voice-enabled device 100.
- In one implementation, the input devices 112 and the voice-click module coupled thereto may be configured to continually monitor for one or more multi-modal interactions received at the voice-enabled device 100. In particular, the input devices 112 and the voice-click module may continually monitor for one or more natural language utterances and/or one or more distinguishable non-voice device interactions, which may collectively provide the relevant context for retrieving content, executing tasks, invoking services or commands, or processing any other suitable requests. Thus, in response to detecting one or more multi-modal interactions, the input devices 112 and/or the voice-click module may signal the voice-enabled device 100 that an utterance and/or a non-voice interaction have been received. For example, in one implementation, the non-voice interaction may provide context for sharpening recognition, interpretation, and understanding of an accompanying utterance, and moreover, the utterance may provide further context for enhancing interpretation of the accompanying non-voice interaction. Accordingly, the utterance and the non-voice interaction may collectively provide relevant context that various natural language processing components may use to determine an intent of the multi-modal interaction that includes the utterance and the non-voice interaction.
- In one implementation, as noted above, processing the utterance included in the multi-modal interaction may be initiated at the ASR 120, wherein the ASR 120 may generate one or more preliminary interpretations of the utterance. In one implementation, to generate the preliminary interpretations of the utterance, the ASR 120 may be configured to recognize one or more syllables, words, phrases, or other acoustic characteristics from the utterance using one or more dynamic recognition grammars and/or acoustic models. For example, in one implementation, the ASR 120 may use the dynamic recognition grammars and/or the acoustic models to recognize a stream of phonemes from the utterance based on phonetic dictation techniques, as described in U.S. patent application Ser. No. 11/513,269, entitled “Dynamic Speech Sharpening,” which issued as U.S. Pat. No. 7,634,409 on Dec. 15, 2009, the contents of which are hereby incorporated by reference in their entirety. In addition, the dynamic recognition grammars and/or the acoustic models may include unstressed central vowels (e.g., “schwa”), which may reduce a search space for recognizing the stream of phonemes for the utterance.
- Furthermore, in one implementation, the ASR 120 may be configured as a multi-pass speech recognition engine 120, as described in U.S. patent application Ser. No. 11/197,504, entitled “Systems and Methods for Responding to Natural Language Speech Utterance,” which issued as U.S. Pat. No. 7,640,160 on Dec. 29, 2009, the contents of which are hereby incorporated by reference in their entirety. The multi-pass speech recognition engine 120 may be configured to initially invoke a primary speech recognition engine to generate a first transcription of the utterance, and further to optionally subsequently invoke one or more secondary speech recognition engines to generate one or more secondary transcriptions of the utterance. In one implementation, the first transcription may be generated using a large list dictation grammar, while the secondary transcriptions may be generated using virtual dictation grammars having decoy words for out-of-vocabulary words, reduced vocabularies derived from a conversation history, or other dynamic recognition grammars. For example, in one implementation, if a confidence level for the first transcription does not meet or exceed a threshold, the secondary speech recognition engines may be invoked to sharpen the interpretation of the primary speech recognition engine. It will be apparent, however, that the multi-pass speech recognition engine 120 may interpret the utterance using any suitable combination of techniques that results in a preliminary interpretation derived from a plurality of transcription passes for the utterance (e.g., the secondary speech recognition engines may be invoked regardless of the confidence level for the first transcription, or the primary speech recognition engine and/or the secondary speech recognition engines may employ recognition grammars that are identical or optimized for a particular interpretation context, etc.).
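- A simplified sketch of this multi-pass strategy appears below: a primary pass produces a transcription and confidence level, and secondary passes are invoked when the confidence falls below a threshold. The recognizer callables are stand-ins rather than a real speech engine.

    # Simplified sketch of confidence-gated multi-pass recognition; the engine
    # callables are stubs invented for this example.

    def multi_pass_recognize(audio, primary, secondaries, threshold=0.8):
        transcription, confidence = primary(audio)
        passes = [(transcription, confidence)]
        if confidence < threshold:
            # Sharpen the interpretation with secondary engines, e.g. virtual
            # dictation grammars with decoy words or reduced vocabularies
            # derived from a conversation history.
            for engine in secondaries:
                passes.append(engine(audio))
        # Keep the transcription from the most confident pass.
        return max(passes, key=lambda result: result[1])

    # Stub engines standing in for large-list and virtual dictation grammars.
    primary = lambda audio: ("play our song", 0.55)
    secondary = lambda audio: ("play 'Our Song' by Example Band", 0.91)
    print(multi_pass_recognize(b"...", primary, [secondary]))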
- Accordingly, in one implementation, the dynamic recognition grammars used in the ASR 120 may be optimized for different languages, contexts, domains, memory constraints, and/or other suitable criteria. For example, in one implementation, the voice-enabled device 100 may include one or more applications 134 that provide content or services for a particular context or domain, such as a navigation application 134. As such, in response to the ASR 120 determining navigation as the most likely context for the utterance, the dynamic recognition grammars may be optimized for various physical, temporal, directional, or other geographical characteristics (e.g., as described in co-pending U.S. patent application Ser. No. 11/954,064, entitled “System and Method for Providing a Natural Language Voice User Interface in an Integrated Voice Navigation Services Environment,” filed Dec. 11, 2007, the contents of which are hereby incorporated by reference in their entirety). In another example, an utterance containing the word “traffic” may be subject to different interpretations depending on whether the user intended a navigation context (i.e., traffic on roads), a music context (i.e., the 1960's rock band), or a movie context (i.e., the Steven Soderbergh film). Accordingly, the recognition grammars used in the ASR 120 may be dynamically adapted to optimize accurate recognition for any given utterance (e.g., in response to incorrectly interpreting an utterance to contain a particular word or phrase, the incorrect interpretation may be removed from the recognition grammar to prevent repeating the incorrect interpretation).
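- The following sketch illustrates, under a deliberately simple grammar model, how a dynamic recognition grammar might be adapted by pruning a word that produced a confirmed misinterpretation and by weighting the vocabulary toward a likely context. The DynamicGrammar class is an assumption made for this example.

    # Hedged sketch of dynamic recognition grammar adaptation; not the
    # patented grammar representation.

    class DynamicGrammar:
        def __init__(self, words):
            self.words = set(words)

        def remove_incorrect(self, word):
            # Drop a word that produced a confirmed misinterpretation so the
            # same incorrect interpretation is not repeated.
            self.words.discard(word)

        def optimize_for_context(self, context, context_vocabularies):
            # Extend the active vocabulary toward the most likely context,
            # e.g. navigation terms when a navigation application is in use.
            self.words |= context_vocabularies.get(context, set())

    grammar = DynamicGrammar({"traffic", "route", "band"})
    grammar.optimize_for_context("navigation", {"navigation": {"detour", "highway"}})
    grammar.remove_incorrect("band")   # "traffic" meant the road report, not the band
    print(sorted(grammar.words))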
- In one implementation, in response to the ASR 120 generating the preliminary interpretations of the utterance included in the multi-modal interaction using one or more of the techniques described above, the ASR 120 may provide the preliminary interpretations to the conversational language processor 130. The conversational language processor 130 may generally include various natural language processing components, which may be configured to model human-to-human conversations or interactions. Thus, the conversational language processor 130 may invoke one or more of the natural language processing components to further analyze the preliminary interpretations of the utterance and any accompanying non-voice interactions to determine the intent of the multi-modal interactions received at the voice-enabled device 100.
- In one implementation, the conversational language processor 130 may invoke an intent determination engine 132 a configured to determine the intent of the multi-modal interactions received at the voice-enabled device 100. In one implementation, the intent determination engine 132 a may invoke a knowledge-enhanced speech recognition engine that provides long-term and short-term semantic knowledge for determining the intent, as described in co-pending U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005, the contents of which are hereby incorporated by reference in their entirety. For example, in one implementation, the semantic knowledge may be based on a personalized cognitive model derived from one or more prior interactions with the user, a general cognitive model derived from one or more prior interactions with various different users, and/or an environmental cognitive model derived from an environment associated with the user, the voice-enabled device 100, and/or the voice services environment (e.g., ambient noise characteristics, location sensitive information, etc.).
- Furthermore, the intent determination engine 132 a may invoke a context tracking engine 132 d to determine the context for the multi-modal interactions. For example, any context derived from the natural language utterance and/or the non-voice interactions in the multi-modal interactions may be pushed to a context stack associated with the context tracking engine 132 d, wherein the context stack may include various entries that may be weighted or otherwise ranked according to one or more contexts identified from the cognitive models and the context for the current multi-modal interactions. As such, the context tracking engine 132 d may determine one or more entries in the context stack that match information associated with the current multi-modal interactions to determine a most likely context for the current multi-modal interactions. The context tracking engine 132 d may then provide the most likely context to the intent determination engine 132 a, which may determine the intent of the multi-modal interactions in view of the most likely context.
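- The following sketch models, with an assumed weighting scheme, how the context tracking engine 132 d might rank context stack entries against the current interaction to determine a most likely context. The scoring function is invented for this illustration.

    # Illustrative model of a weighted context stack; the scoring combines the
    # entry's weight, a recency bonus, and overlap with the interaction terms.

    def most_likely_context(context_stack, interaction_terms):
        """context_stack: list of (context_name, weight), most recent first."""
        best, best_score = None, float("-inf")
        for depth, (context, weight) in enumerate(context_stack):
            # Overlap counts how many interaction-derived terms name this context.
            overlap = sum(term in context for term in interaction_terms)
            score = weight + overlap * 2.0 - depth * 0.5
            if score > best_score:
                best, best_score = context, score
        return best

    stack = [("music", 1.0), ("navigation", 1.5), ("movies", 0.5)]
    print(most_likely_context(stack, ["traffic", "navigation"]))  # -> navigation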
- In addition, based on the most likely context, the intent determination engine 132 a may reference the constellation model 132 b to determine whether to invoke any of the various systems or multi-modal devices in the voice services environment. For example, as noted above, the constellation model 132 b may provide intent determination capabilities, domain knowledge, semantic knowledge, cognitive models, and other information available through the various systems and multi-modal devices. As such, the intent determination engine 132 a may reference the constellation model 132 b to determine whether one or more of the other systems and/or multi-modal devices should be engaged to participate in determining the intent of the multi-modal interactions. For example, in response to the constellation model 132 b indicating that one or more of the other systems and/or multi-modal devices have natural language processing capabilities optimized for the most likely context, the intent determination engine 132 a may forward information relating to the multi-modal interactions to such systems and/or multi-modal devices, which may then determine the intent of the multi-modal interactions and return the intent determination to the voice-enabled device 100.
- In one implementation, the conversational language processor 130 may be configured to engage the user in one or more cooperative conversations to resolve the intent or otherwise process the multi-modal interactions, as described in co-pending U.S. patent application Ser. No. 11/580,926, entitled “System and Method for a Cooperative Conversational Voice User Interface,” filed Oct. 16, 2006, the contents of which are hereby incorporated by reference in their entirety. In particular, the conversational language processor 130 may generally identify a conversational goal for the multi-modal interactions, wherein the conversational goal may be identified from analyzing the utterance, the non-voice interactions, the most likely context, and/or the determined intent. As such, the conversational goal identified for the multi-modal interactions may generally control the cooperative conversation between the conversational language processor 130 and the user. For example, the conversational language processor 130 may generally engage the user in one or more query conversations, didactic conversations, and/or exploratory conversations to resolve or otherwise process the multi-modal interactions.
- In particular, the conversational language processor 130 may engage the user in a query conversation in response to identifying that the conversational goal relates to retrieving discrete information or performing a particular function. Thus, in a cooperative query conversation, the user may lead the conversation towards achieving the particular conversational goal, while the conversational language processor 130 may initiate one or more queries, tasks, commands, or other requests to achieve the goal and thereby support the user in the conversation. In response to ambiguity or uncertainty in the intent of the multi-modal interaction, the conversational language processor 130 may engage the user in a didactic conversation to resolve the ambiguity or uncertainty (e.g., where noise or malapropisms interfere with interpreting the utterance, multiple likely contexts cannot be disambiguated, etc.). As such, in a cooperative didactic conversation, the conversational language processor 130 may lead the conversation to clarify the intent of the multi-modal interaction (e.g., generating feedback provided through an output device 114), while the user may regulate the conversation and provide additional multi-modal interactions to clarify the intent. In response to determining the intent of the multi-modal interactions with suitable confidence, wherein the intent indicates an ambiguous or uncertain goal, the conversational language processor 130 may engage the user in an exploratory conversation to resolve the goal. In a cooperative exploratory conversation, the conversational language processor 130 and the user may share leader and supporter roles, wherein the ambiguous or uncertain goal may be improvised or refined over a course of the conversation.
- Thus, the conversational language processor 130 may generally engage in one or more cooperative conversations to determine the intent and resolve a particular goal for the multi-modal interactions received at the voice-enabled device 100. The conversational language processor 130 may then initiate one or more queries, tasks, commands, or other requests in furtherance of the intent and the goal determined for the multi-modal interactions. For example, in one implementation, the conversational language processor 130 may invoke one or more agents 132 c having capabilities for processing requests in a particular domain or application 134, a voice search engine 132 f having capabilities for retrieving information requested in the multi-modal interactions (e.g., from one or more data repositories 136, networks, or other information sources coupled to the voice-enabled device 100), or one or more other systems or multi-modal devices having suitable processing capabilities for furthering the intent and the goal for the multi-modal interactions (e.g., as determined from the constellation model 132 b).
- Additionally, in one implementation, the conversational language processor 130 may invoke an advertising application 134 in relation to the queries, tasks, commands, or other requests initiated to process the multi-modal interactions, wherein the advertising application 134 may be configured to select one or more advertisements that may be relevant to the intent and/or the goal for the multi-modal interactions, as described in co-pending U.S. patent application Ser. No. 11/671,526, entitled “System and Method for Selecting and Presenting Advertisements Based on Natural Language Processing of Voice-Based Input,” filed Feb. 6, 2007, the contents of which are hereby incorporated by reference in their entirety.
- In one implementation, in response to receiving results from any suitable combination of queries, tasks, commands, or other requests processed for the multi-modal interactions, the conversational language processor 130 may format the results for presentation to the user through the output device 114. For example, the results may be formatted into a natural language utterance that can be converted into an electronic signal and provided to the user through a speaker coupled to the output device 114, or the results may be visually presented on a display coupled to the output device 114, or in any other suitable manner (e.g., the results may indicate whether a particular task or command was successfully performed, or the results may include information retrieved in response to one or more queries, or the results may include a request to frame a subsequent multi-modal interaction if the results are ambiguous or otherwise incomplete, etc.).
- Furthermore, in one implementation, the conversational language processor 130 may include a misrecognition engine 132 e configured to determine whether the conversational language processor 130 incorrectly determined the intent for the multi-modal interactions. In one implementation, the misrecognition engine 132 e may determine that the conversational language processor 130 incorrectly determined the intent in response to one or more subsequent multi-modal interactions provided proximately in time to the prior multi-modal interactions, as described in U.S. patent application Ser. No. 11/200,164, entitled “System and Method of Supporting Adaptive Misrecognition in Conversational Speech,” which issued as U.S. Pat. No. 7,620,549 on Nov. 17, 2009, the contents of which are hereby incorporated by reference in their entirety. For example, the misrecognition engine 132 e may monitor for one or more subsequent multi-modal interactions that include a stop word, override a current request, or otherwise indicate an unrecognized or misrecognized event. The misrecognition engine 132 e may then determine one or more tuning parameters for various components associated with the ASR 120 and/or the conversational language processor 130 to improve subsequent interpretations.
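- As a rough sketch of the misrecognition monitoring described above, the following example treats a subsequent interaction that arrives shortly after a request and contains stop words, or that overrides the request, as evidence of misrecognition. The time window and stop word list are illustrative assumptions.

    # Hedged sketch of detecting a misrecognized event from a proximate
    # follow-up interaction; thresholds and vocabulary are invented here.

    STOP_WORDS = {"no", "stop", "cancel", "wrong"}

    def indicates_misrecognition(prior_time, interaction, window_seconds=10.0):
        """interaction: (timestamp, utterance_text, overrides_current_request)."""
        timestamp, text, overrides = interaction
        if timestamp - prior_time > window_seconds:
            return False            # not proximate in time to the prior request
        words = set(text.lower().split())
        return overrides or bool(words & STOP_WORDS)

    # The user immediately says "No, stop" after a misinterpreted request.
    print(indicates_misrecognition(100.0, (103.2, "No, stop", False)))  # True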
- Accordingly, as described in further detail above, the voice-enabled device 100 may generally include various natural language processing components and capabilities that may be used for hybrid processing in the natural language voice services environment. In particular, the voice-enabled device 100 may be configured to determine the intent for various multi-modal interactions that include any suitable combination of natural language utterances and/or non-voice interactions and process one or more queries, tasks, commands, or other requests based on the determined intent. Furthermore, as noted above and as will be described in greater detail below, one or more other systems and/or multi-modal devices may participate in determining the intent and processing the queries, tasks, commands, or other requests for the multi-modal interactions to provide a hybrid processing methodology, wherein the voice-enabled device 100 and the various other systems and multi-modal devices may each perform partial processing to determine the intent and otherwise process the multi-modal interactions in a cooperative manner. For example, in one implementation, hybrid processing in the natural language voice services environment may include one or more techniques described in U.S. Provisional Patent Application Ser. No. 61/259,827, entitled “System and Method for Hybrid Processing in a Natural Language Voice Services Environment,” filed on Nov. 10, 2009, the contents of which are hereby incorporated by reference in their entirety.
- According to one aspect of the invention, FIG. 2 illustrates a block diagram of an exemplary system for hybrid processing in a natural language voice services environment. In particular, the system illustrated in FIG. 2 may generally include a voice-enabled client device 210 similar to the voice-enabled device described above in relation to FIG. 1. For example, the voice-enabled client device 210 may include any suitable combination of input and output devices 215 a respectively arranged to receive natural language multi-modal interactions and provide responses to the natural language multi-modal interactions. In addition, the voice-enabled client device 210 may include an Automatic Speech Recognizer (ASR) 220 a configured to generate one or more preliminary interpretations of natural language utterances received at the input device 215 a, and further configured to provide the preliminary interpretations to a conversational language processor 230 a.
- In one implementation, the conversational language processor 230 a on the voice-enabled client device 210 may include one or more natural language processing components, which may be invoked to determine an intent for the multi-modal interactions received at the voice-enabled client device 210. The conversational language processor 230 a may then initiate one or more queries, tasks, commands, or other requests to resolve the determined intent. For example, the conversational language processor 230 a may invoke one or more applications 234 a to process requests in a particular domain, query one or more data repositories 236 a to retrieve information requested in the multi-modal interactions, or otherwise engage in one or more cooperative conversations with a user of the voice-enabled client device 210 to resolve the determined intent. Furthermore, as noted above in connection with FIG. 1, the voice-enabled client device 210 may also cooperate with one or more other systems or multi-modal devices having suitable processing capabilities for initiating queries, tasks, commands, or other requests to resolve the intent of the multi-modal interactions.
- In particular, to cooperate with the other systems or multi-modal devices in the hybrid processing environment, the voice-enabled client device 210 may use a messaging interface 250 a to communicate with a virtual router 260, wherein the messaging interface 250 a may generally include a light client (or thin client) that provides a mechanism for the voice-enabled client device 210 to transmit input to and receive output from the virtual router 260. In addition, the virtual router 260 may further include a messaging interface 250 b providing a mechanism for communicating with one or more additional voice-enabled devices 270 a-n, one or more non-voice devices 280 a-n, and a voice-enabled server 240. Furthermore, although FIG. 2 illustrates messaging interface 250 a and messaging interface 250 b as components that are distinct from the devices to which they are communicatively coupled, it will be apparent that such illustration is for ease of description only, as the messaging interfaces 250 a-b may be provided as on-board components that execute on the various devices illustrated in FIG. 2 to facilitate communication among the various devices in the hybrid processing environment.
- For example, in one implementation, the messaging interface 250 a that executes on the voice-enabled client device 210 may transmit input from the voice-enabled client device 210 to the virtual router 260 within one or more XML messages, wherein the input may include encoded audio corresponding to natural language utterances, preliminary interpretations of the natural language utterances, data corresponding to multi-touch gestures, point of focus or attention focus selections, and/or other multi-modal interactions. In one implementation, the virtual router 260 may then further process the input using a conversational language processor 230 c having capabilities for speech recognition, intent determination, adaptive misrecognition, and/or other natural language processing. Furthermore, the conversational language processor 230 c may include knowledge relating to content, services, applications, natural language processing capabilities, and other features available through the various devices in the hybrid processing environment.
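- The disclosure does not specify a message schema, so the following sketch merely suggests the kind of XML message the messaging interface 250 a might transmit to the virtual router 260; the element names and structure are invented for illustration.

    # Hypothetical XML message carrying encoded audio, a preliminary
    # interpretation, and gesture data to the virtual router.

    import base64
    import xml.etree.ElementTree as ET

    def build_router_message(audio_bytes, preliminary_interpretation, gestures):
        msg = ET.Element("interaction", device="voice-enabled-client-210")
        # Encoded audio corresponding to the natural language utterance.
        ET.SubElement(msg, "audio", encoding="base64").text = (
            base64.b64encode(audio_bytes).decode())
        # Preliminary interpretation produced by the on-board ASR.
        ET.SubElement(msg, "preliminary").text = preliminary_interpretation
        # Data corresponding to multi-touch gestures or focus selections.
        for gesture in gestures:
            ET.SubElement(msg, "gesture", kind=gesture)
        return ET.tostring(msg, encoding="unicode")

    print(build_router_message(b"...pcm...", "dedicate this song",
                               ["point-of-focus"]))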
- As such, in one implementation, the virtual router 260 may further communicate with the voice-enabled devices 270, the non-voice devices 280, and/or the voice-enabled server 240 through the messaging interface 250 b to coordinate processing for the input received from the voice-enabled client device 210. For example, based on the knowledge relating to the features and capabilities of the various devices in the hybrid processing environment, the virtual router 260 may identify one or more of the devices that have suitable features and/or capabilities for resolving the intent of the input received from the voice-enabled client device 210. The virtual router 260 may then forward one or more components of the input to the identified devices through respective messaging interfaces 250 b, wherein the identified devices may be invoked to perform any suitable processing for the components of the input forwarded from the virtual router 260. In one implementation, the identified devices may then return any results of the processing to the virtual router 260 through the respective messaging interfaces 250 b, wherein the virtual router 260 may collate the results of the processing and return the results to the voice-enabled client device 210 through the messaging interface 250 a.
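- The following sketch illustrates the coordination role described above: devices advertise capabilities, the router forwards components of the input to suitable devices, and the results are collated for return to the client. The capability registry and device API are assumptions made for this example.

    # Illustrative sketch of virtual-router coordination; device handlers are
    # stand-ins for the messaging interfaces 250 b.

    def route_and_collate(input_components, device_registry):
        """input_components: {capability_needed: payload};
        device_registry: {device_name: (capabilities, handler_callable)}."""
        results = {}
        for capability, payload in input_components.items():
            for name, (capabilities, handler) in device_registry.items():
                if capability in capabilities:
                    # Forward this component to the first suitable device.
                    results[capability] = (name, handler(payload))
                    break
            else:
                results[capability] = (None, "no suitable device")
        return results  # collated results returned to the client device

    registry = {
        "server-240": ({"speech", "search"}, lambda p: f"processed {p!r}"),
        "non-voice-280a": ({"data-retrieval"}, lambda p: f"retrieved {p!r}"),
    }
    print(route_and_collate({"search": "our song", "data-retrieval": "album art"},
                            registry))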
- Accordingly, the virtual router 260 may communicate with any of the devices available in the hybrid processing environment through messaging interfaces 250 a-b to coordinate cooperative hybrid processing for multi-modal interactions or other natural language inputs received from the voice-enabled client device 210. For example, in one implementation, the cooperative hybrid processing may be used to enhance performance in embedded processing architectures in which the voice-enabled client device 210 includes a constrained amount of resources (e.g., the voice-enabled client device 210 may be a mobile device having a limited amount of internal memory or other dedicated resources for natural language processing). As such, when the voice-enabled client device 210 has an embedded processing architecture, one or more components of the voice-enabled client device 210 may be configured to optimize efficiency of on-board natural language processing to reduce or eliminate bottlenecks, lengthy response times, or degradations in performance.
- For example, in one implementation, optimizing the efficiency of the on-board natural language processing may include configuring the ASR 220 a to use a virtual dictation grammar having decoy words for out-of-vocabulary words, reduced vocabularies derived from a conversation history, or other dynamic recognition grammars (e.g., grammars optimized for particular languages, contexts, domains, memory constraints, and/or other suitable criteria). In another example, the on-board applications 234 a and/or data repositories 236 a may be associated with an embedded application suite providing particular features and capabilities for the voice-enabled client device 210. For example, the voice-enabled client device 210 may be embedded within an automotive telematics system, a personal navigation device, a global positioning system, a mobile phone, or another device in which users often request location-based services. Thus, in such circumstances, the on-board applications 234 a and the data repositories 236 a in the embedded application suite may be optimized to provide certain location-based services that can be efficiently processed on-board (e.g., destination entry, navigation, map control, music search, hands-free dialing, etc.).
- Furthermore, although the components of the voice-enabled client device 210 may be optimized for efficiency in embedded architectures, a user may nonetheless request any suitable content, services, applications, and/or other features available in the hybrid processing environment, and the other devices in the hybrid processing environment may collectively provide natural language processing capabilities to supplement the embedded natural language processing capabilities for the voice-enabled client device 210. For example, the voice-enabled client device 210 may perform preliminary processing for a particular multi-modal interaction using the embedded natural language processing capabilities (e.g., the on-board ASR 220 a may perform advanced virtual dictation to partially transcribe an utterance in the multi-modal interaction, the on-board conversational language processor 230 a may determine a preliminary intent of the multi-modal interaction, etc.), wherein results of the preliminary processing may be provided to the virtual router 260 for further processing.
- In one implementation, the voice-enabled
client device 210 may also communicate input corresponding to the multi-modal interaction to the virtual router 260 in response to determining that on-board capabilities cannot suitably interpret the interaction (e.g., if a confidence level for a partial transcription does not satisfy a particular threshold), or in response to determining that the interaction should be processed off-board (e.g., if a preliminary interpretation indicates that the interaction relates to a local search request requiring large computations to be performed on the voice-enabled server 240). As such, the virtual router 260 may capture the input received from the voice-enabled client device 210 and coordinate further processing among the voice-enabled devices 270 and the voice-enabled server 240 that provide natural language processing capabilities, in addition to the non-voice devices 280 that provide capabilities for retrieving data or executing tasks. Furthermore, in response to the virtual router 260 invoking one or more of the voice-enabled devices 270, the input provided to the voice-enabled devices 270 may be optimized to suit the processing requested from the invoked voice-enabled devices 270 (e.g., to avoid over-taxing processing resources, a particular voice-enabled device 270 may be provided a partial transcription or a preliminary interpretation and resolve the intent for a given context or domain).
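The off-board decision described in this paragraph might be sketched as follows, with an illustrative threshold and intent set (neither is specified in this disclosure): the client ships a partial transcription and preliminary interpretation to the virtual router when on-board confidence is low or the preliminary intent is known to need server-side resources.

```python
# Sketch of a confidence-gated off-board handoff. The threshold, the
# intent names, and the router stub are assumptions for illustration.
CONFIDENCE_THRESHOLD = 0.75             # assumed value
SERVER_SIDE_INTENTS = {"local_search"}  # e.g., large-computation requests

def handle_interaction(asr_result, preliminary_intent, router):
    off_board = (
        asr_result["confidence"] < CONFIDENCE_THRESHOLD
        or preliminary_intent in SERVER_SIDE_INTENTS
    )
    if off_board:
        # Send the partial transcription and preliminary intent so invoked
        # devices are not over-taxed with raw audio they do not need.
        # router.submit() stands in for the messaging interface.
        return router.submit({
            "partial_transcription": asr_result["text"],
            "preliminary_intent": preliminary_intent,
        })
    return {"resolved_on_board": True, "intent": preliminary_intent}
```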
- Alternatively, in response to the virtual router 260 invoking the voice-enabled server 240, the input provided to the voice-enabled server 240 may further include encoded audio corresponding to natural language utterances and any other data associated with the multi-modal interaction. In particular, as shown in FIG. 2, the voice-enabled server 240 may have a natural language processing architecture similar to the voice-enabled client device 210, except that the voice-enabled server 240 may include substantial processing resources that obviate constraints that the voice-enabled client device 210 may be subject to. Thus, when the voice-enabled server 240 cooperates in the hybrid processing for the multi-modal interaction, the encoded audio corresponding to the natural language utterances and the other data associated with the multi-modal interaction may be provided to the voice-enabled server 240 to maximize a likelihood of the voice-enabled server 240 correctly determining the intent of the multi-modal interaction (e.g., the ASR 220 b may perform multi-pass speech recognition to generate an accurate transcription for the natural language utterance, the conversational language processor 230 b may arbitrate among intent determinations performed in any number of different contexts or domains, etc.). Accordingly, in summary, the hybrid processing techniques performed in the environment illustrated in FIG. 2 may generally include various different devices, which may or may not include natural language capabilities, cooperatively determining the intent of a particular multi-modal interaction and taking action to resolve the intent.
- Although the cooperative hybrid processing techniques described above have been particularly described in the context of an embedded processing architecture, such techniques are not necessarily limited to embedded processing architectures. In particular, the same techniques may be applied in any suitable voice services environment having various devices that can cooperate to initiate queries, tasks, commands, or other requests to resolve the intent of multi-modal interactions. Furthermore, in one implementation, the voice-enabled
client device 210 may include a suitable amount of memory or other resources that can be dedicated to natural language processing (e.g., the voice-enabled client device 210 may be a desktop computer or other device that can process natural language without substantially degraded performance). In such circumstances, one or more of the components of the voice-enabled client device 210 may be configured to optimize the on-board natural language processing in a manner that could otherwise cause bottlenecks, lengthy response times, or degradations in performance in an embedded architecture. For example, in one implementation, optimizing the on-board natural language processing may include configuring the ASR 220 a to use a large list dictation grammar in addition to and/or instead of the virtual dictation grammar used in embedded processing architectures.
- Nonetheless, as will be described in greater detail below in connection with
FIGS. 3-5, the cooperative hybrid processing techniques may be substantially similar regardless of whether the voice-enabled client device 210 has an embedded or non-embedded architecture. In particular, regardless of the architecture for the voice-enabled client device 210, cooperative hybrid processing may include the voice-enabled client device 210 optionally performing preliminary processing for a natural language multi-modal interaction and communicating input corresponding to the multi-modal interaction to the virtual router 260 for further processing through the messaging interface 250 a. Alternatively (or additionally), the cooperative hybrid processing may include the virtual router 260 coordinating the further processing for the input among the various devices in the hybrid environment through the messaging interface 250 b, and subsequently returning any results of the processing to the voice-enabled client device 210 through the messaging interface 250 a.
- According to various aspects of the invention,
FIG. 3 illustrates a flow diagram of an exemplary method for initializing various devices that cooperate to perform hybrid processing in a natural language voice services environment. In particular, as noted above, the hybrid processing environment may generally include communication among various different devices that may cooperatively process natural language multi-modal interactions. For example, in one implementation, the various devices in the hybrid processing environment may include a virtual router having one or more messaging interfaces for communicating with one or more voice-enabled devices, one or more non-voice devices, and/or a voice-enabled server. As such, in one implementation, the method illustrated in FIG. 3 may be used to initialize communication in the hybrid processing environment to enable subsequent cooperative processing for one or more natural language multi-modal interactions received at any particular device in the hybrid processing environment.
- In one implementation, the various devices in the hybrid processing environment may be configured to continually listen or otherwise monitor respective input devices to determine whether a natural language multi-modal interaction has occurred. As such, the method illustrated in
FIG. 3 may be used to calibrate, synchronize, or otherwise initialize the various devices that continually listen for the natural language multi-modal interactions. For example, as described above in connection with FIG. 2, the virtual router, the voice-enabled devices, the non-voice devices, the voice-enabled server, and/or other devices in the hybrid processing environment may be configured to provide various different capabilities or services, wherein the initialization method illustrated in FIG. 3 may be used to ensure that the hybrid processing environment obtains a suitable signal to process any particular natural language multi-modal interaction and appropriately invokes one or more of the devices to cooperatively process the natural language multi-modal interaction. Furthermore, the method illustrated in FIG. 3 and described herein may be invoked to register the various devices in the hybrid processing environment, register new devices added to the hybrid processing environment, publish domains, services, intent determination capabilities, and/or other features supported on the registered devices, synchronize local timing for the registered devices, and/or initialize any other suitable aspect of the devices in the hybrid processing environment.
- In one implementation, initializing the various devices in the hybrid processing environment may include an
operation 310, wherein a device listener may be established for each of the devices in the hybrid processing environment. The device listeners established in operation 310 may generally include any suitable combination of instructions, firmware, or other routines that can be executed on the various devices to determine capabilities, features, supported domains, or other information associated with the devices. For example, in one implementation, the device listeners established in operation 310 may be configured to communicate with the respective devices using the Universal Plug and Play protocol designed for ancillary computer devices, although it will be apparent that any appropriate mechanism for communicating with the various devices may be suitably substituted.
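As one concrete possibility for the Universal Plug and Play communication mentioned above, a device listener could discover peers with an SSDP M-SEARCH, the discovery step of UPnP. The sketch below sends the standard multicast query and collects raw responses; parsing the responses and exchanging capability information are omitted, and the surrounding listener logic is an assumption for illustration.

```python
# Minimal sketch of UPnP/SSDP discovery from a device listener.
import socket

def discover_devices(timeout=2.0):
    message = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        "ST: ssdp:all",
        "", ""]).encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(message, ("239.255.255.250", 1900))
    responses = []
    try:
        while True:  # collect replies until the timeout elapses
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data.decode("utf-8", "replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses
```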
- In response to establishing the device listeners for each device registered in the hybrid processing environment (or in response to establishing device listeners for any device newly registered in the hybrid processing environment), the device listeners may then be synchronized in an operation 320. In particular, each of the registered devices may have an internal clock or other timing mechanism that indicates local timing for an incoming natural language multi-modal interaction, wherein operation 320 may be used to synchronize the device listeners established in operation 310 according to the internal clocks or timing mechanisms for the respective devices. Thus, in one implementation, synchronizing the device listeners in operation 320 may include each device listener publishing information relating to the internal clock or local timing for the respective device. For example, the device listeners may publish the information relating to the internal clock or local timing to the virtual router, whereby the virtual router may subsequently coordinate cooperative hybrid processing for natural language multi-modal interactions received at one or more of the devices in the hybrid processing environment. It will be apparent, however, that the information relating to the internal clock or local timing for the various devices in the hybrid processing environment may be further published to the other voice-enabled devices, the other non-voice devices, the voice-enabled server, and/or any other suitable device that may participate in cooperative processing for natural language multi-modal interactions provided to the hybrid processing environment.
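One simple way to realize the publishing step above is for each device listener to report its local clock to the virtual router, which records a per-device offset so later timestamps can be mapped onto a single timeline. The sketch ignores network latency, which a production implementation would need to estimate; all names are illustrative.

```python
# Illustrative sketch of the timing synchronization in operation 320.
import time

class TimingRegistry:
    def __init__(self):
        self.offsets = {}  # device_id -> (device clock - router clock)

    def publish_local_time(self, device_id, device_timestamp):
        # Called when a listener publishes its internal clock; network
        # latency is ignored here for simplicity.
        self.offsets[device_id] = device_timestamp - time.time()

    def to_router_time(self, device_id, device_timestamp):
        """Convert a device-local timestamp to the router's timeline."""
        return device_timestamp - self.offsets.get(device_id, 0.0)
```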
- In one implementation, in response to establishing and synchronizing the device listeners for the various devices registered in the hybrid processing environment, the device listeners may continually listen or otherwise monitor respective input devices on the registered devices in an operation 330 to detect information relating to one or more natural language multi-modal interactions. For example, the device listeners may be configured to detect occurrences of the natural language multi-modal interactions in response to detecting an incoming natural language utterance, a point of focus or attention focus selection associated with an incoming natural language utterance, and/or another interaction or sequence of interactions that relates to an incoming natural language multi-modal interaction. In addition, operation 330 may further include the appropriate device listeners capturing the natural language utterance and/or related non-voice device interactions that relate to the natural language utterance.
- In one implementation, the captured natural language utterance and related non-voice device interactions may then be analyzed in an
operation 340 to manage subsequent cooperative processing in the hybrid processing environment. In one implementation, for example, operation 340 may determine whether one device listener or multiple device listeners captured information relating to the natural language multi-modal interaction detected in operation 330. In particular, as noted above, the hybrid processing environment may generally include various different devices that cooperate to process natural language multi-modal interactions, whereby the information relating to the natural language multi-modal interaction may be provided to one or a plurality of the devices in the hybrid processing environment. As such, operation 340 may determine whether one device listener or multiple device listeners captured the information relating to the natural language multi-modal interaction in order to determine whether the hybrid processing environment needs to synchronize signals among the various device listeners that captured information relating to the multi-modal interaction.
- For example, a user interacting with the hybrid processing environment may view a web page presented on a non-voice display device and provide a natural language multi-modal interaction that requests more information about purchasing a product displayed on the web page. The user may then select text on the web page containing the product name using a mouse, keyboard, or other non-voice input device and provide a natural language utterance to a microphone or other voice-enabled device, such as “Is this available on Amazon.com?” In this example, a device listener associated with the non-voice display device may detect the text selection for the product name in
operation 330, and a device listener associated with the voice-enabled device may further detect the natural language utterance inquiring about the availability of the product in operation 330. Furthermore, in one implementation, the user may be within a suitable range of multiple voice-enabled devices, which may result in multiple device listeners capturing different signals corresponding to the natural language utterance (e.g., the interaction may occur within range of a voice-enabled mobile phone, a voice-enabled telematics device, and/or other voice-enabled devices, depending on the arrangement and configuration of the various devices in the hybrid processing environment).
- Accordingly, as will be described in greater detail herein, a sequence of operations that synchronizes the different signals relating to the multi-modal interaction received at the multiple device listeners may be initiated in response to
operation 340 determining that multiple device listeners captured information relating to the natural language multi-modal interaction. On the other hand, in response to operation 340 determining that only one device listener captured information relating to the natural language multi-modal interaction, the natural language multi-modal interaction may be processed in an operation 390 without executing the sequence of operations that synchronizes different signals (i.e., the one device listener provides all of the input information relating to the multi-modal interaction, such that hybrid processing for the interaction may be initiated in operation 390 without synchronizing different input signals). However, in one implementation, the sequence of synchronization operations may also be initiated in response to one device listener capturing a natural language utterance and one or more non-voice interactions, in order to align the different signals relating to the natural language multi-modal interaction, as described in greater detail herein.
- As described above, each device listener that receives an input relating to the natural language multi-modal interaction detected in
operation 330 may have an internal clock or other local timing mechanism. As such, in response to determining that one or more device listeners captured different signals relating to the natural language multi-modal interaction in operation 340, the sequence of synchronization operations for the different signals may be initiated in an operation 350. In particular, operation 350 may include the one or more device listeners determining local timing information for the respective signals based on the internal clock or other local timing mechanism associated with the respective device listeners, wherein the local timing information determined for the respective signals may then be synchronized.
- For example, in one implementation, synchronizing the local timing information for the respective signals may be initiated in an
operation 360. In particular, operation 360 may generally include notifying each device listener that received an input relating to the multi-modal interaction of the local timing information determined for each respective signal. For example, in one implementation, each device listener may provide local timing information for a respective signal to the virtual router, and the virtual router may then provide the local timing information for all of the signals to each device listener. As such, in one implementation, operation 360 may result in each device listener receiving a notification that includes local timing information for each of the different signals that relate to the natural language multi-modal interaction detected in operation 330. Alternatively (or additionally), the virtual router may collect the local timing information for each of the different signals from each of the device listeners and further synchronize the local timing information for the different signals to enable hybrid processing for the natural language multi-modal interaction.
- In one implementation, any particular natural language multi-modal interaction may include at least a natural language utterance, and may further include one or more additional device interactions relating to the natural language utterance. As noted above, the utterance may generally be received prior to, contemporaneously with, or subsequent to the additional device interactions. As such, the local timing information for the different signals may be synchronized in an
operation 370 to enable hybrid processing for the natural language multi-modal interaction. In particular, operation 370 may include aligning the local timing information for one or more signals corresponding to the natural language utterance and/or one or more signals corresponding to any additional device interactions that relate to the natural language utterance. In addition, operation 370 may further include aligning the local timing information for the natural language utterance signals with the signals corresponding to the additional device interactions.
- Thus, in matching the utterance signals and the non-voice device interaction signals, any devices that participate in the hybrid processing for the natural language multi-modal interaction may be provided with voice components and/or non-voice components that have been aligned with one another.
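A hypothetical sketch of the alignment in operation 370, reusing the timing registry idea from the earlier sketch: every signal's device-local timestamp is converted to the router's timeline, and each utterance is paired with the non-voice interactions that fall within an assumed pairing window. The window size and the signal record layout are assumptions.

```python
PAIRING_WINDOW_SECONDS = 5.0  # assumed value, not from this disclosure

def align_signals(signals, registry):
    # Each signal: {"device_id", "timestamp", "kind", "payload"}.
    for s in signals:
        s["router_time"] = registry.to_router_time(s["device_id"],
                                                   s["timestamp"])
    utterances = [s for s in signals if s["kind"] == "utterance"]
    non_voice = [s for s in signals if s["kind"] != "utterance"]
    aligned = []
    for u in utterances:
        # Pair the utterance with non-voice interactions close in time.
        related = [n for n in non_voice
                   if abs(n["router_time"] - u["router_time"])
                   <= PAIRING_WINDOW_SECONDS]
        aligned.append({"utterance": u, "related": related})
    return aligned
```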
- For example, in one implementation, operation 370 may be executed on the virtual router, which may then provide the aligned timing information to any other device that may be invoked in the hybrid processing. Alternatively (or additionally), one or more of the other devices that participate in the hybrid processing may locally align the timing information (e.g., in response to the virtual router invoking the voice-enabled server in the hybrid processing, resources associated with the voice-enabled server may be employed to align the timing information and preserve communication bandwidth at the virtual router).
- Furthermore, in one implementation, the virtual router and/or other devices in the hybrid processing environment may analyze the signals corresponding to the natural language utterance in an
operation 380 to select the cleanest sample for further processing. In particular, as noted above, the virtual router may include a messaging interface for receiving an encoded audio sample corresponding to the natural language utterance from one or more of the voice-enabled devices. For example, the audio sample received at the virtual router may include the natural language utterance encoded in the MPEG-1 Audio Layer 3 (MP3) format or another lossy format to preserve communication bandwidth in the hybrid processing environment. However, it will be apparent that the audio sample may alternatively (or additionally) be encoded using the Free Lossless Audio Codec (FLAC) format or another lossless format in response to the hybrid processing environment having sufficient communication bandwidth for processing lossless audio that may provide a better sample of the natural language utterance.
- Regardless of whether the audio sample has been encoded in a lossy or lossless format, the signal corresponding to the natural language utterance that provides the cleanest sample may be selected in
operation 380. For example, one voice-enabled device may be in a noisy environment or otherwise associated with conditions that interfere with generating a clean audio sample, while another voice-enabled device may include a microphone array or be configured to employ techniques that maximize the fidelity of encoded speech. As such, in response to multiple signals corresponding to the natural language utterance being received in operation 330, the cleanest signal may be selected in operation 380 and hybrid processing for the natural language utterance may then be initiated in an operation 390.
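Selecting the cleanest signal could be approximated with a crude signal-to-noise estimate over the decoded audio, as in the sketch below. The heuristic is an assumption, and a real system might also weigh microphone-array metadata or whether the sample was encoded losslessly. (The audioop module used here is in the Python standard library through 3.12 but removed in 3.13.)

```python
import audioop  # stdlib PCM helpers through Python 3.12

def estimate_snr(pcm_bytes, sample_width=2):
    """Crude SNR proxy: overall RMS over the quietest chunk's RMS."""
    rms = audioop.rms(pcm_bytes, sample_width)
    # Use the quietest 16th of the clip as a rough noise-floor estimate.
    step = max((len(pcm_bytes) // 16 // sample_width) * sample_width,
               sample_width)
    chunks = [pcm_bytes[i:i + step] for i in range(0, len(pcm_bytes), step)]
    noise = min(audioop.rms(c, sample_width) for c in chunks if c)
    return rms / max(noise, 1)

def select_cleanest(samples):
    """samples: list of (device_id, decoded PCM bytes); keep the best."""
    return max(samples, key=lambda s: estimate_snr(s[1]))
```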
- Accordingly, the synchronization and initialization techniques illustrated in FIG. 3 and described herein may ensure that the hybrid processing environment synchronizes each of the signals corresponding to the natural language multi-modal interaction and generates an input for the further processing in operation 390 that is most likely to result in a correct intent determination. Furthermore, in synchronizing the signals and selecting the cleanest audio sample for the further processing in operation 390, the techniques illustrated in FIG. 3 and described herein may ensure that none of the devices in the hybrid processing environment takes action on a natural language multi-modal interaction until the appropriate signals to be used in operation 390 have been identified. As such, hybrid processing for the natural language multi-modal interaction may be initiated in operation 390, as described in further detail herein.
- According to one aspect of the invention,
FIG. 4 illustrates a flow diagram of an exemplary method for performing hybrid processing at one or more client devices in a natural language voice services environment. In particular, as will be described in greater detail below with reference to FIG. 5, the one or more client devices may perform the hybrid processing in cooperation with a virtual router through a messaging interface that communicatively couples the client devices and the virtual router. For example, in one implementation, the messaging interface may generally include a light client (or thin client) that provides a mechanism for the client devices to transmit input relating to a natural language multi-modal interaction to the virtual router, and that further provides a mechanism for the client devices to receive output relating to the natural language multi-modal interaction from the virtual router.
- For example, in one implementation, the hybrid processing at the client devices may be initiated in response to one or more of the client devices receiving a natural language multi-modal interaction in an
operation 410. In particular, the natural language multi-modal interaction may generally include a natural language utterance received at a microphone or other voice-enabled input device coupled to the client device that received the natural language multi-modal interaction, and may further include one or more other additional input modalities that relate to the natural language utterance (e.g., text selections, button presses, multi-touch gestures, etc.). As such, the natural language multi-modal interaction received in operation 410 may include one or more queries, commands, or other requests provided to the client device, wherein the hybrid processing for the natural language multi-modal interaction may then be initiated in an operation 420.
- As described in greater detail above, the natural language voice services environment may generally include one or more voice-enabled client devices, one or more non-voice devices, a voice-enabled server, and a virtual router arranged to communicate with each of the voice-enabled client devices, the non-voice devices, and the voice-enabled server. In one implementation, the virtual router may therefore coordinate the hybrid processing for the natural language multi-modal interaction among the voice-enabled client devices, the non-voice devices, and the voice-enabled server. As such, the hybrid processing techniques described herein may generally refer to the virtual router coordinating cooperative processing for the natural language multi-modal interaction in a manner that involves resolving an intent of the natural language multi-modal interaction in multiple stages.
- In particular, as described above in connection with
FIG. 3, the various devices that cooperate to perform the hybrid processing may be initialized to enable the cooperative processing for the natural language multi-modal interaction. As such, in one implementation, in response to initializing the various devices, each of the client devices that received an input relating to the natural language multi-modal interaction may perform initial processing for the respective input in an operation 420. For example, in one implementation, a client device that received the natural language utterance included in the multi-modal interaction may perform initial processing in operation 420 that includes encoding an audio sample corresponding to the utterance, partially or completely transcribing the utterance, determining a preliminary intent for the utterance, or performing any other suitable preliminary processing for the utterance. In addition, the initial processing in operation 420 may also be performed at a client device that received one or more of the additional input modalities relating to the utterance. For example, the initial processing performed in operation 420 for the additional input modalities may include identifying selected text, selected points of focus or attention focus, or generating any other suitable data that can be used to further interpret the utterance. In one implementation, an operation 430 may then include determining whether the hybrid processing environment has been configured to automatically route inputs relating to the natural language multi-modal interaction to the virtual router.
- For example, in one implementation, operation 430 may determine that automatic routing has been configured to occur in response to multiple client devices receiving the natural language utterance included in the multi-modal interaction in operation 410. In this example, the initial processing performed in operation 420 may include the multiple client devices encoding respective audio samples corresponding to the utterance, wherein messages that include the encoded audio samples may then be sent to the virtual router in an operation 460. The virtual router may then select the encoded audio sample that provides the cleanest signal and coordinate subsequent hybrid processing for the natural language multi-modal interaction, as will be described in greater detail below with reference to FIG. 5. In another example, operation 430 may determine that automatic routing has been configured to occur in response to the initial processing resulting in a determination that the multi-modal interaction relates to a request that may be best suited for processing on the voice-enabled server (e.g., the request may relate to a location-based search query or another command or task that requires resources managed on the voice-enabled server, or to content, applications, domains, or other information that resides on one or more devices other than the client device that received the request, etc.). However, it will be apparent that the hybrid processing environment may be configured for automatic routing in response to other conditions and/or regardless of whether any attendant conditions exist, as appropriate.
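The operation 430 check might reduce to a small predicate like the following, where the condition names and the set of server-side intents are illustrative assumptions: route automatically when several devices captured the utterance, when the preliminary intent needs off-board resources, or when the environment is configured to always route.

```python
# Sketch of an automatic-routing predicate for operation 430.
OFF_BOARD_INTENTS = {"location_based_search"}  # assumed intent names

def should_auto_route(capture_count, preliminary_intent, forced=False):
    if forced:  # environment configured to always route
        return True
    if capture_count > 1:
        # Let the virtual router pick the cleanest encoded sample.
        return True
    return preliminary_intent in OFF_BOARD_INTENTS
```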
- In one implementation, in response to the virtual router coordinating the hybrid processing for the natural language multi-modal interaction, the virtual router may provide results of the hybrid processing to the client device in an operation 470. For example, the results provided to the client device in operation 470 may include a final intent determination for the natural language multi-modal interaction, information requested in the interaction, data generated in response to executing a command or task requested in the interaction, and/or other results that enable the client device to complete processing for the natural language request in operation 480. For example, in one implementation, operation 480 may include the client device executing a query, command, task, or other request based on the final intent determination returned from the virtual router, presenting the requested information returned from the virtual router, confirming that the requested command or task has been executed, and/or performing any additional processing to resolve the natural language request.
- Referring back to
operation 430, in response to determining that the conditions that trigger automatic routing have not been satisfied or that automatic routing has otherwise not been configured, the client device may further process the natural language multi-modal interaction in an operation 440. In one implementation, the further processing in operation 440 may include the client device attempting to determine an intent for the natural language multi-modal interaction using local natural language processing capabilities. For example, the client device may merge any non-voice input modalities included in the multi-modal interaction with a transcription for the utterance included in the multi-modal interaction. The conversational language processor on the client device may then determine the intent for the multi-modal interaction utilizing local information relating to context, domains, shared knowledge, criteria values, or other information. The client device may then generate one or more interpretations for the utterance to determine the intent for the multi-modal interaction (e.g., identifying a conversation type, one or more requests contained in the interaction, etc.).
- In one implementation,
operation 440 may further include determining a confidence level for the intent determination generated on the client device (e.g., the confidence level may be derived based on whether the client device includes a multi-pass speech recognition engine, whether the utterance contained any ambiguous words or phrases, whether the intent differs from one context to another, etc.). In one implementation, an operation 450 may then determine whether or not to invoke off-board processing depending on the confidence level determined in operation 440. For example, operation 450 may generally include determining whether the intent determined in operation 440 satisfies a particular threshold value that indicates an acceptable confidence level for taking action on the determined intent. As such, in response to the confidence level for the intent determination satisfying the threshold value, operation 450 may determine to not invoke off-board processing. In particular, the confidence level satisfying the threshold value may indicate that the client device has sufficient information to take action on the determined intent, whereby the client device may then process one or more queries, commands, tasks, or other requests to resolve the multi-modal interaction in operation 480.
- Alternatively, in response to the confidence level for the intent determination failing to satisfy the threshold value,
operation 450 may invoke off-board processing, which may include sending one or more messages to the virtual router in operation 460. The one or more messages may cause the virtual router to invoke additional hybrid processing for the multi-modal interaction in a similar manner as noted above, and as will be described in greater detail herein with reference to FIG. 5.
- According to one aspect of the invention,
FIG. 5 illustrates a flow diagram of an exemplary method for performing hybrid processing at a virtual router in a natural language voice services environment. In particular, the virtual router may coordinate the hybrid processing for natural language multi-modal interactions received at one or more client devices. In one implementation, in an operation 510, the virtual router may receive one or more messages relating to a natural language multi-modal interaction received at one or more of the client devices in the voice services environment. For example, the virtual router may include a messaging interface that communicatively couples the virtual router to the client devices and a voice-enabled server, wherein the messaging interface may generally include a light client (or thin client) that provides a mechanism for the virtual router to receive input from one or more of the client devices and/or the voice-enabled server, and further to transmit output to one or more of the client devices and/or the voice-enabled server. The messages received in operation 510 may generally include any suitable processing results for the multi-modal interactions, whereby the virtual router may coordinate the hybrid processing in a manner that includes multiple processing stages that may occur at the virtual router, one or more of the client devices, the voice-enabled server, or any suitable combination thereof.
- In one implementation, the virtual router may analyze the messages received in
operation 510 to determine whether to invoke the hybrid processing in a peer-to-peer mode. For example, one or more of the messages may include a preliminary intent determination that the virtual router can use to determine whether to invoke one or more of the client devices, the voice-enabled server, or various combinations thereof in order to execute one or more of the multiple processing stages for the multi-modal interaction. In another example, one or more of the messages may include an encoded audio sample that the virtual router forwards to one or more of the various devices in the hybrid processing environment. As such, in one implementation, the virtual router may analyze the messages received in operation 510 to determine whether or not to invoke the voice-enabled server to process the multi-modal interaction (e.g., the messages may include a preliminary intent determination that indicates that the multi-modal interaction includes a location-based request that requires resources residing on the server).
- In response to the virtual router determining to invoke the voice-enabled server, the virtual router may forward the messages to the server in an
operation 530. In particular, the messages forwarded to the server may generally include the encoded audio corresponding to the natural language utterance and any additional information relating to other input modalities relevant to the utterance. For example, as described in greater detail above with reference to FIG. 2, the voice-enabled server may include various natural language processing components that can suitably determine the intent of the multi-modal interaction, whereby the messages sent to the voice-enabled server may include the encoded audio in order to permit the voice-enabled server to determine the intent independently of any preliminary processing on the client devices that may be inaccurate or incomplete. In response to the voice-enabled server processing the messages received from the virtual router, results of the processing may then be returned to the virtual router in an operation 570. For example, the results may include the intent determination for the natural language multi-modal interaction, results of any queries, commands, tasks, or other requests performed in response to the determined intent, or any other suitable results, as will be apparent.
- Alternatively, in response to the virtual router determining to invoke the peer-to-peer mode in
operation 520, the virtual router may coordinate the hybrid processing among one or more of the client devices, the voice-enabled server, or any suitable combination thereof. For example, in one implementation, the virtual router may determine a context for the natural language multi-modal interaction in an operation 540 and select one or more peer devices based on the determined context in an operation 550. For example, one or more of the client devices may be configured to provide content and/or services in the determined context, whereby the virtual router may forward one or more messages to such devices in an operation 560 in order to request such content and/or services. In another example, the multi-modal interaction may include a compound request that relates to multiple contexts supported on different devices, whereby the virtual router may forward messages to each such device in operation 560 in order to request appropriate content and/or services in the different contexts.
- In still another example, the interaction may include a request to be processed on the voice-enabled server, yet the request may require content and/or services that reside on one or more of the client devices (e.g., a location-based query relating to an entry in an address book on one or more of the client devices). As such, the virtual router may generally forward various messages to the selected peer devices in
operation 560 to manage the multiple stages in the hybrid processing techniques described herein. For example, the virtual router may send messages to one or more voice-enabled client devices that have intent determination capabilities in a particular context, to one or more non-voice client devices that have access to content, services, and/or other resources needed to process the multi-modal interaction, or to any appropriate combination thereof. The virtual router may therefore send messages to the client devices and/or the voice-enabled server in operation 560 and receive responsive messages from the client devices and/or the voice-enabled server in operation 570 in any appropriate manner (e.g., in parallel, sequentially, iteratively, etc.). The virtual router may then collate the results received in the responsive messages in operation 580 and return the results to one or more of the client devices for any final processing and/or presentation of the results.
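A sketch of the peer-to-peer coordination in operations 540-580, assuming the capability-registry router from the earlier sketch: peers are selected by the determined context, messages are fanned out, and the responses are collated before being returned to the originating client device. Transport details are abstracted away, and all names are illustrative.

```python
# Illustrative peer-to-peer coordination over the assumed VirtualRouter.
def coordinate_peer_to_peer(router, interaction, contexts):
    results = []
    for context in contexts:  # compound requests may span several contexts
        # Devices published their supported contexts at registration.
        peers = [d for d, caps in router.capabilities.items()
                 if context in caps]
        for peer in peers:
            results.append(router.devices[peer].process({
                "context": context,
                "interaction": interaction,
            }))
    # Collate before returning to the originating client device.
    return {"interaction": interaction, "responses": results}
```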
- According to one aspect of the invention, FIG. 6 illustrates a block diagram of an exemplary system for providing a natural language content dedication service. In particular, as shown in FIG. 6, the system for providing the natural language content dedication service may generally include a voice-enabled client device 610 that can communicate with a content dedication system 665 through a messaging interface 650. In one implementation, the voice-enabled client device 610 may communicate with the content dedication system 665 in a similar manner as described above with reference to FIGS. 2 through 5. For example, the content dedication system 665 may include a virtual router 660 and a voice-enabled server 640 that can service multi-modal natural language requests provided to the voice-enabled client device 610 in a similar manner as described above, and may further include a billing system 638 that may be used to process transactions relating to the content dedication service. Further, although FIG. 6 illustrates the content dedication system 665 as including the virtual router 660, the voice-enabled server 640, and the billing system 638 within one component, such illustration will be understood to be for ease of description only; the virtual router 660, the voice-enabled server 640, and the billing system 638 may in fact be arranged within any number of components that can suitably communicate with one another to process multi-modal natural language requests relating to the content dedication service (e.g., the virtual router 660 may communicate with the voice-enabled server 640 through another messaging interface distinct from the messaging interface 650 for communicating with the voice-enabled client device 610, as shown in the exemplary system illustrated in FIG. 2).
- In one implementation, the system shown in
FIG. 6 may provide the natural language content dedication service to any suitable voice-enabled client device 610 having a suitable combination of input and output devices 615 a that can receive natural language multi-modal interactions and provide responses to the natural language multi-modal interactions, wherein the input and output devices 615 a may be further arranged to receive any other suitable type of input and provide any other suitable type of output. For example, in one implementation, the voice-enabled client device 610 may comprise a mobile phone that includes a keypad input device 615 a, a touchscreen input device 615 a, or other input mechanisms 615 a in addition to any input microphones or other suitable input devices 615 a that can receive voice signals. As such, in this example, the mobile phone may further include an output display device 615 a in addition to any output microphones or other suitable output devices 615 a that can output audible signals. Thus, a user of the voice-enabled client device 610 may be listening to music, watching video, or otherwise interacting with content through the input and output devices 615 a and provide a multi-modal natural language request to engage in a transaction to dedicate the music, video, or other content, as will be described in greater detail below. Furthermore, in one implementation, the voice-enabled client device 610 may be included within a hybrid processing environment that may include a plurality of different devices, whereby the content dedication request may relate to content played on a different device from the voice-enabled client device 610, although the content dedication request may relate to any suitable content (i.e., the request need not necessarily relate to played content, as users may provide natural language to request content dedications for any suitable content).
- Thus, in one implementation, in response to the voice-enabled
client device 610 receiving a multi-modal interaction that includes a natural language utterance, the voice-enabled client device 610 may invoke an Automatic Speech Recognizer (ASR) 620 a to generate one or more preliminary interpretations of the utterance. The ASR 620 a may then provide the preliminary interpretations to a conversational language processor 630 a, which may attempt to determine an intent for the multi-modal interaction. In one implementation, to determine the intent for the multi-modal interaction, the conversational language processor 630 a may determine a most likely context for the interaction from the preliminary interpretations of the utterance, any accompanying non-speech inputs in the multi-modal interaction that relate to the utterance, contexts associated with prior requests, short-term and long-term shared knowledge, or any other suitable information for interpreting the multi-modal interaction. Thus, in response to the conversational language processor 630 a determining that the intent for the multi-modal interaction relates to a content dedication request, a content dedication application 634 a may be invoked to resolve the content dedication request.
- For example, in one implementation, an initial multi-modal interaction may include the utterance “Find ‘Superstylin’ by Groove Armada.” In response to the initial multi-modal interaction, the
ASR 620 a may generate a preliminary interpretation that includes the words “Find” and “Superstylin” and the phrase “Groove Armada.” The ASR 620 a may then provide the preliminary interpretation to the conversational language processor 630 a, which may determine that the word “Find” indicates that the most likely intent of the interaction includes a search, while the word “Superstylin” and the phrase “Groove Armada” provide criteria for the search. Furthermore, in response to determining the most likely intent of the interaction, the conversational language processor 630 a may establish a music context for the interaction and attempt to resolve the search request. For example, the conversational language processor 630 a may search one or more data repositories 636 a that contain music information to identify music having a song title of “Superstylin” and an artist name of “Groove Armada,” and the conversational language processor 630 a may further cooperate with other devices in the hybrid processing environment to search for the song (e.g., in response to the local data repositories 636 a not yielding adequate results, another device in the environment having a larger music data repository than the client device 610 may be invoked). The conversational language processor 630 a may then receive appropriate results for the search and present the results to the user through the output device 615 a (e.g., displaying information about the song, playing a sample audio clip of the song, displaying options to purchase the song, recommending similar songs or similar artists, etc.).
- Continuing with the above example, a subsequent multi-modal interaction may include the utterance “Share this with my wife,” “Dedicate it to Charlene,” “That's the one, I want to pass along to some friends,” or another suitable utterance that reflects a request to dedicate the content. Alternatively (or additionally), in response to the intent of the initial interaction including a request to search for content, the
conversational language processor 630 a may invoke the content dedication application 634 a, which may present an option to dedicate the content through the output device 615 a together with the results of the search. As such, the request to dedicate the content may also be provided in a non-speech input, such as a button press or touch screen selection of the option to dedicate the content. Accordingly, in response to detecting a content dedication request (e.g., through the ASR 620 a and the conversational language processor 630 a processing a suitable utterance, the input device 615 a receiving a suitable non-speech input, or any suitable combination thereof), the content dedication application 634 a may be invoked to process the content dedication request.
- In one implementation, to process the content dedication request, the
content dedication application 634 a may capture a natural language utterance that contains the dedication. For example, the content dedication application 634 a may provide a prompt through the output device 615 a that instructs the user to provide the dedication utterance (e.g., a visual or audible prompt instructing the user to begin speaking, to speak after an audible beep, etc.). The user may then provide the dedication utterance through the voice-enabled input device 615 a, and the dedication utterance may then be converted into an electronic signal that the content dedication application 634 a captures for the dedication (e.g., “Dear Charlene, I was listening to this song and I thought of you. Enjoy!”). In addition, the content dedication application 634 a may further prompt the user to provide any additional tags for the dedicated content (e.g., an image or a picture to be inserted as album art for the dedicated content, a natural language utterance that includes information to be inserted as voice-tags for the dedicated content, a non-speech input or data input that includes information to insert in tags for the dedicated content, etc.). The content dedication application 634 a may then prompt the user to identify a recipient 690 of the dedication, wherein the user may provide any suitable multi-modal input that includes an e-mail address, a telephone number, an address book entry, or other information identifying the recipient 690 of the dedication.
- In one implementation, the
content dedication application 634 a may then route the request, including the dedication utterance, the additional tags (if any), and the information identifying the recipient 690, to the content dedication system 665 through the messaging interface 650. For example, the dedication utterance may be converted into encoded audio that can be communicated through the messaging interface 650, whereby the content dedication system 665 can insert the encoded audio corresponding to the dedication utterance within the dedicated content and/or verbally annotate the dedicated content with the encoded audio. Alternatively (or additionally), the dedication utterance may be interpreted and parsed to transcribe one or more words or phrases from the dedication utterance, wherein the transcribed words or phrases may provide a textual annotation for the dedicated content (e.g., the textual annotation may be inserted within metadata for the dedicated content, such as an ID3 Comments tag). Similarly, any utterances to be inserted as voice-tags for the dedicated content may provide further verbal annotations for the dedicated content, or such utterances may be transcribed to provide further textual annotations for the dedicated content. In one implementation, verbal annotations, textual annotations, and other types of annotations may be created and associated with the dedicated content using techniques described in co-pending U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005, the contents of which are hereby incorporated by reference in their entirety.
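The message routed through the messaging interface 650 might resemble the following sketch, in which the payload carries the encoded dedication audio, any tags, the recipient, and a content identifier. The JSON layout and every field name are assumptions made for illustration, not a format defined in this disclosure.

```python
# Hypothetical dedication-request payload for the messaging interface.
import base64
import json

def build_dedication_request(content_id, dedication_audio_bytes,
                             recipient, tags=None):
    return json.dumps({
        "type": "content_dedication",
        "content_id": content_id,
        # Encoded utterance travels as base64 text over the messaging link.
        "dedication_audio": base64.b64encode(dedication_audio_bytes).decode(),
        "tags": tags or {},      # e.g., album art, voice-tags, comments
        "recipient": recipient,  # e-mail, phone number, address book entry
    })

request = build_dedication_request(
    "superstylin-groove-armada", b"\x00\x01\x02\x03",
    "charlene@example.com",
    tags={"comment": "Dear Charlene, I thought of you. Enjoy!"})
```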
- In one implementation, in response to the content dedication system 665 receiving the content dedication request, the dedication utterance, and any additional tags from the content dedication application 634 a, the content dedication system 665 may invoke a similar content dedication application 634 b on the voice-enabled server 640 to process the request. In particular, the content dedication application 634 b on the voice-enabled server 640 may identify the content requested for dedication and initiate a transaction for the content requested for dedication. For example, in one implementation, if the content dedication application 634 a on the voice-enabled client device 610 was able to identify the requested content, the content dedication request received at the content dedication system 665 may include an identification of the content to be dedicated. Alternatively, because the content dedication system 665 can cooperate in resolving the multi-modal interactions involved in the dedication request, the content dedication system 665 may use shared knowledge relating to the dedication request to identify the content to be dedicated. For example, the content dedication system 665 may invoke one or more local natural language processing components (e.g., the ASR 620 b and/or the conversational language processor 630 b) to search local data repositories 636 b, interact with one or more content providers 680 over a network 670, pull data from a satellite radio system that played the requested content at the voice-enabled client device 610 or another device in the hybrid processing environment, or otherwise consult available resources that can be used to identify the content to be dedicated.
- In one implementation, in response to identifying the content to be dedicated, the
content dedication system 665 may communicate with a billing system 638 to identify one or more purchase options for the dedication request and process an appropriate transaction for the dedication request. In particular, the content dedication system 665 may generally support various different purchase options to provide users with flexibility in the manner of requesting content dedications, including a buy-to-own purchase option, a pay-to-play purchase option, a paid subscription purchase option, or other appropriate options. In one implementation, the purchase options may be modeled on techniques for providing natural language services and subscriptions described in U.S. patent application Ser. No. 10/452,147, entitled “Systems and Methods for Responding to Natural Language Speech Utterance,” which issued as U.S. Pat. No. 7,398,209 on Jul. 8, 2008, and co-pending U.S. patent application Ser. No. 10/618,633, entitled “Mobile Systems and Methods for Responding to Natural Language Speech Utterance,” filed Jun. 15, 2003, the contents of which are hereby incorporated by reference in their entirety. For example, in the buy-to-own purchase option, the content dedication system 665 may purchase full rights to the content from an appropriate content provider 680, wherein the billing system 638 may then charge the user of the voice-enabled client device 610 a particular amount that encompasses the cost for purchasing the content from the content provider 680 plus a service charge for dedicating the content, tagging the dedicated content, delivering the dedicated content to the recipient 690, or any other appropriate services rendered for the content dedication request. The billing system 638 may charge the user in a similar manner under the pay-to-play purchase option, except that the rights purchased from the content provider 680 may be limited to a single play, such that the cost for purchasing the content from the content provider 680 may be somewhat less under the pay-to-play purchase option.
- Under the paid subscription purchase option, however, the user may pay a periodic service charge to the
content dedication system 665 that permits the user to make a predetermined number of content dedications or an unlimited number of content dedications in a subscription period, or the subscription purchase option may permit the user to make content dedications in any other suitable manner (e.g., different subscription levels having different content dedication options may be offered, such that the user may select a subscription level that meets the user's particular needs). For example, under the paid subscription purchase option, the billing system 638 may only charge the user any costs for obtaining the rights to the content, which may be purchased under either the buy-to-own option or the pay-to-play option, or alternatively the user may be charged nothing if the user already owns the content to be dedicated. In another example, a first subscription level may cost a first amount to permit the user to make a particular number of content dedications in a subscription period, while a second subscription level may cost a higher amount to permit the user to make an unlimited number of content dedications in the subscription period, while still other subscription levels having different terms may be offered, as will be apparent.
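How a billing component might price a dedication under these purchase options is sketched below. All amounts, the pay-to-play discount, and the subscription rules are illustrative assumptions rather than terms from this disclosure.

```python
# Illustrative pricing of a dedication by purchase option.
def price_dedication(option, content_cost, service_fee=0.99,
                     subscription_active=False, dedications_remaining=0):
    if option == "buy_to_own":
        return content_cost + service_fee        # full rights plus service
    if option == "pay_to_play":
        return content_cost * 0.5 + service_fee  # assumed single-play rate
    if option == "subscription":
        if subscription_active and dedications_remaining > 0:
            return content_cost  # service covered by the periodic charge
        raise ValueError("no dedications left in this subscription period")
    raise ValueError(f"unknown purchase option: {option}")
```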
- Furthermore, in one implementation, a service provider associated with the content dedication system 665 may negotiate an agreement with the content provider 680 to determine the manner in which revenues for content transactions will be shared between the content dedication system 665 and the content provider 680. For example, the agreement may provide that the content provider 680 may keep all of the revenue for transactions that include purchasing content from the content provider 680 and that the service provider associated with the content dedication system 665 may recoup costs for such transactions from users. In another example, the agreement may provide that the content provider 680 and the service provider associated with the content dedication system 665 may share the revenue for the transactions that include purchasing content from the content provider 680. Thus, the agreement may generally include any suitable arrangement that defines the manner in which the content provider 680 and the service provider associated with the content dedication system 665 manage the revenue for the transactions that include purchasing content from the content provider 680, while the service provider associated with the content dedication system 665 may manage billing users for the natural language aspects of the content dedication service according to the techniques described in further detail above.
- According to one aspect of the invention,
- According to one aspect of the invention, FIG. 7 illustrates a flow diagram of an exemplary method for providing a natural language content dedication service. In particular, the method for providing a natural language content dedication service, as shown in FIG. 7, may be performed on a voice-enabled client device that can communicate with a content dedication system through a messaging interface, wherein the voice-enabled client device may communicate with the content dedication system in a similar manner as described above with reference to FIGS. 2 through 6. For example, the content dedication system may include a virtual router and a voice-enabled server that can service multi-modal natural language requests provided to the voice-enabled client device, and may further include a billing system for processing transactions relating to the content dedication service.
- In one implementation, the method shown in
FIG. 7 may be used to provide the natural language content dedication service to any suitable voice-enabled client device having a suitable combination of input and output devices that can receive natural language multi-modal interactions and provide responses to the natural language multi-modal interactions, wherein the input and output devices may be further arranged to receive any other suitable type of input and provide any other suitable type of output. For example, in one implementation, the voice-enabled client device may comprise a mobile phone that includes a keypad input device, a touch screen input device, or other input mechanisms in addition to any input microphones or other suitable input devices that can receive voice signals, and may further include an output display device in addition to any output speakers or other suitable output devices that can output audible signals. Thus, a user of the voice-enabled client device may be listening to music, watching video, or otherwise interacting with content through the input and output devices, wherein an operation 710 may include the voice-enabled client device receiving a multi-modal natural language interaction to engage in a transaction to dedicate the music, video, or other content.
- Thus, in one implementation, in response to the voice-enabled client device receiving the multi-modal interaction in
operation 710 that includes a natural language utterance, the voice-enabled client device may invoke an Automatic Speech Recognizer (ASR) to generate one or more preliminary interpretations of the utterance. The ASR may then provide the preliminary interpretations to a conversational language processor, which may attempt to determine an intent for the multi-modal interaction. In one implementation, to determine the intent for the multi-modal interaction, the conversational language processor may determine a most likely context for the interaction from the preliminary interpretations of the utterance, any accompanying non-speech inputs in the multi-modal interaction that relate to the utterance, contexts associated with prior requests, short-term and long-term shared knowledge, or any other suitable information for interpreting the multi-modal interaction. Thus, in one implementation, an operation 720 may include the conversational language processor detecting a content dedication request in response to determining that the intent for the multi-modal interaction relates to a content dedication request. The conversational language processor may then invoke a content dedication application to resolve the content dedication request.
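- Purely for illustration, the intent determination just described might be approximated by scoring keyword cues and shared knowledge from prior requests; the Interpretation type, cue list, and threshold below are invented for this sketch and are not the disclosed design.

```python
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    words: list[str]                                   # preliminary ASR hypotheses
    non_speech_inputs: dict = field(default_factory=dict)
    prior_contexts: list[str] = field(default_factory=list)

DEDICATION_CUES = {"buy", "send", "dedicate"}

def detect_intent(interp: Interpretation) -> str:
    """Score dedication cues plus shared knowledge; return the most likely intent."""
    tokens = {t for phrase in interp.words for t in phrase.lower().split()}
    score = len(tokens & DEDICATION_CUES)
    if "dedication" in interp.prior_contexts:  # prior requests as shared knowledge
        score += 1
    return "content_dedication" if score >= 2 else "unknown"

print(detect_intent(Interpretation(words=["Buy", "that song", "send it", "to Michael"])))
```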
- For example, in one implementation, the multi-modal interaction received in operation 720 may include the utterance “Buy that song and send it to Michael.” In response to the initial multi-modal interaction, the ASR may generate a preliminary interpretation that includes words and/or phrases such as “Buy,” “that song,” “send it,” and “to Michael.” The ASR may then provide the preliminary interpretation to the conversational language processor, which may determine that the word and/or phrase combination of “Buy” and “send it” indicates that the most likely intent of the interaction includes a content dedication request, while the word and/or phrase combination of “that song” and “to Michael” provides criteria for the intended content and recipient of the dedication. Furthermore, in response to determining the most likely intent of the interaction, the conversational language processor may establish a device context, a dedication context, a content or music context, an address book context, or other suitable contexts in an attempt to resolve the request.
- For example, the device context may enable the conversational language processor to retrieve data from the voice-enabled client device or another suitable device that provides the user's intended meaning for the phrase “that song” (e.g., the user may be referring to a song playing on the user's satellite radio device, such that the conversational language processor can identify the song that was playing when the device interaction was received in operation 710). Furthermore, the address book context may enable the conversational language processor to identify “Michael,” the intended recipient of the dedication request. The conversational language processor may then receive appropriate results for resolving the intent of the request and present the results to the user through the output device (e.g., displaying information requesting that the user confirm that the content and recipient were correctly identified, playing a sample audio clip of the song, displaying options to purchase the song, recommending similar songs or similar artists, etc.). Accordingly, in response to detecting the content dedication request in
operation 720 and identifying the relevant criteria that identify the content to be dedicated and the intended recipient, the content dedication application may be invoked to process the content dedication request.
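- Purely as an illustration of the context lookups just described, the following sketch resolves “that song” from a hypothetical device context and “Michael” from a hypothetical address book context; the dictionary shapes are assumptions of the sketch.

```python
def resolve_criteria(device_context: dict, address_book: dict) -> dict:
    """Resolve 'that song' from the device context and the recipient by name."""
    song = device_context.get("now_playing")   # e.g., set by a satellite radio app
    recipient = address_book.get("michael")    # address book context lookup
    return {"content": song, "recipient": recipient}

criteria = resolve_criteria(
    device_context={"now_playing": {"title": "Example Song", "artist": "Example Artist"}},
    address_book={"michael": {"name": "Michael", "email": "michael@example.com"}},
)
print(criteria)
```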
- For example, in one implementation, processing the content dedication request may include the content dedication application capturing a natural language utterance that contains the dedication for the requested content in an operation 730. For example, the content dedication application may provide a prompt through the output device that instructs the user to provide the dedication utterance (e.g., a visual or audible prompt instructing the user to begin speaking, to speak after an audible beep, etc.). The user may then provide the dedication utterance through the voice-enabled input device, and the dedication utterance may then be converted into an electronic signal that the content dedication application captures in operation 730. In addition, an operation 740 may include the content dedication application further prompting the user to provide any additional tags for the dedicated content (e.g., an image or a picture to be inserted as album art for the dedicated content, one or more natural language utterances to insert as voice-tags for the dedicated content, one or more natural language utterances to be transcribed into text that can be inserted in tags for the dedicated content, a non-speech input or data input that includes information to insert in tags for the dedicated content, etc.).
- Thus, in response to determining that the user has provided additional information to insert in tags for the dedicated content in
operation 740, the content dedication application may then capture the tags in an operation 750. For example, operation 750 may include the content dedication application capturing an image or picture that the user identifies as album art to be inserted in the dedicated content, capturing any natural language utterances to insert as voice-tags for the dedicated content, communicating with the ASR and/or conversational language processor to transcribe any natural language utterances to be inserted as text within tags for the dedicated content, capturing any non-speech inputs or data inputs that include information to insert as text within the tags for the dedicated content, or otherwise capturing information that relates to the additional tags.
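- One possible container for the material captured in operations 730 through 750 is sketched below; the field names are hypothetical, not a disclosed format.

```python
from dataclasses import dataclass, field

@dataclass
class DedicationPayload:
    dedication_audio: bytes                                   # utterance from operation 730
    album_art: bytes | None = None                            # optional image (operation 750)
    voice_tags: list[bytes] = field(default_factory=list)     # spoken tags kept as audio
    text_tags: dict[str, str] = field(default_factory=dict)   # transcribed or typed tags

    def add_transcribed_tag(self, name: str, utterance_text: str) -> None:
        """Store text produced by the ASR and/or conversational language processor."""
        self.text_tags[name] = utterance_text

payload = DedicationPayload(dedication_audio=b"\x00\x01")
payload.add_transcribed_tag("comment", "Happy birthday, Michael!")
```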
- In one implementation, the content dedication application may then process the content dedication request in an operation 760, which may include prompting the user to identify the recipient of the dedication. For example, the content dedication application may request information identifying the recipient of the dedication in response to the user not having already identified the recipient, in response to the user identifying the recipient in a manner that includes ambiguity or other criteria that cannot be resolved without further information, to distinguish among different contact information known for the recipient, or in response to other appropriate circumstances. Thus, the user may provide any suitable multi-modal input that includes an e-mail address, a telephone number, an address book entry, or other criteria that can be used to uniquely identify the information for contacting the recipient of the dedication. Furthermore, in one implementation, processing the content dedication request in operation 760 may further include the content dedication application routing the request, including the dedication utterance, the additional tags (if any), and the information for contacting the recipient, to the content dedication system through the messaging interface, wherein the content dedication system may then process a transaction for the content dedication request, as will be described in greater detail below.
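- A minimal sketch of the routing step follows, assuming a JSON envelope over the messaging interface; the envelope fields and the send_message callable are assumptions of this sketch, not a disclosed wire format.

```python
import base64
import json
from typing import Callable

def route_dedication_request(dedication_audio: bytes, text_tags: dict[str, str],
                             recipient_contact: str,
                             send_message: Callable[[str], None]) -> None:
    envelope = {
        "type": "content_dedication_request",
        "recipient": recipient_contact,  # e-mail address, phone number, etc.
        "dedication_audio": base64.b64encode(dedication_audio).decode("ascii"),
        "text_tags": text_tags,          # any additional tags captured earlier
    }
    send_message(json.dumps(envelope))   # stand-in for the messaging interface

route_dedication_request(b"\x00\x01", {"comment": "To Michael"},
                         "michael@example.com", send_message=print)
```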
- According to one aspect of the invention, FIG. 8 illustrates a flow diagram of an exemplary method for providing a natural language content dedication service. In particular, the method for providing a natural language content dedication service, as shown in FIG. 8, may include an operation 810 in which a content dedication system may receive a natural language content dedication request through a messaging interface. For example, in one implementation, the content dedication system may receive the natural language content dedication request from a content dedication application that executes on a voice-enabled client device. In one implementation, the natural language content dedication request received in operation 810 may generally include a natural language dedication utterance, information to be inserted within one or more tags for the content to be dedicated, and information identifying a recipient of the dedicated content.
- For example, in one implementation, the dedication utterance received in
operation 810 may include encoded audio that the content dedication system can insert within the dedicated content or use to verbally annotate the dedicated content. Alternatively (or additionally), the content dedication system may interpret and parse the dedication utterance to transcribe one or more words or phrases from the dedication utterance, wherein the transcribed words or phrases may provide a textual annotation for the dedicated content (e.g., the textual annotation may be inserted within metadata for the dedicated content, such as an ID3 Comments tag). Similarly, any utterances to be inserted as voice-tags for the dedicated content may provide further verbal annotations for the dedicated content, or such utterances may be transcribed to provide further textual annotations for the dedicated content. In one implementation, verbal annotations, textual annotations, and other types of annotations may be created and associated with the dedicated content using techniques described in co-pending U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005, the contents of which are hereby incorporated by reference in their entirety.
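- As one concrete possibility (not part of the disclosure), a transcribed dedication could be written into an ID3 Comments (COMM) frame with an off-the-shelf tagging library such as mutagen; the file name and comment text below are hypothetical, and the sketch assumes the file already carries an ID3 tag.

```python
from mutagen.id3 import ID3, COMM

def annotate_with_dedication(mp3_path: str, dedication_text: str) -> None:
    tags = ID3(mp3_path)                 # assumes an existing ID3 tag in the file
    tags.add(COMM(encoding=3,            # 3 = UTF-8
                  lang="eng",
                  desc="Dedication",
                  text=[dedication_text]))
    tags.save()

annotate_with_dedication("dedicated_song.mp3", "To Michael -- enjoy this one!")
```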
- In one implementation, in response to the content dedication system receiving the content dedication request, the dedication utterance, and any additional tags from the content dedication application in operation 810, the content dedication system may invoke a content dedication application on a voice-enabled server to process the content dedication request. In particular, an operation 820 may include the content dedication application on the voice-enabled server identifying the content requested for dedication. In one implementation, the request received from the content dedication application in operation 810 may include information identifying the requested content and/or information that the content dedication system can use to identify the requested content. For example, the request may include a multi-modal natural language input that includes a natural language utterance and/or a non-voice input, wherein the content dedication system may use shared knowledge relating to the dedication request to identify the content to be dedicated.
- Thus,
operation 820 may include the content dedication system invoking one or more natural language processing components (e.g., an ASR, a conversational language processor, etc.), searching one or more data repositories, interacting with one or more content providers, pulling data from a satellite radio system that played the requested content at the voice-enabled client device or another device in a hybrid processing environment, or otherwise consulting available resources that can be used to identify the content to be dedicated. For example, in one implementation, the hybrid processing environment may include a device having an application that can identify played content (e.g., a Shazam® listening device that a user can hold near a speaker to identify content playing through the speaker). Thus, operation 820 may generally include the content dedication system communicating with any suitable device, system, application, or other resource to identify the content requested for dedication.
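- For illustration, operation 820 could be modeled as trying a list of resolvers in order; the resolver callables below are hypothetical stand-ins for the repositories, providers, and devices just described.

```python
from typing import Callable, Optional

def identify_content(request: dict,
                     resolvers: list[Callable[[dict], Optional[dict]]]) -> Optional[dict]:
    """Return the first successful identification, or None if all resolvers fail."""
    for resolve in resolvers:        # e.g., data repository, then content provider,
        match = resolve(request)     # then a device-side listener in the hybrid
        if match is not None:        # processing environment
            return match
    return None

catalog = {"Example Song": {"title": "Example Song", "artist": "Example Artist"}}
print(identify_content({"title": "Example Song"},
                       [lambda req: catalog.get(req.get("title", ""))]))
```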
- In one implementation, in response to identifying the content to be dedicated, an operation 830 may include the content dedication system identifying one or more purchase options for the dedication request. In one implementation, operation 830 may include the content dedication system communicating with a billing system that supports various purchase options to provide users with flexibility in the manner of requesting content dedications. For example, the billing system may support content dedication purchase options that include a buy-to-own purchase option, a pay-to-play purchase option, a paid subscription purchase option, or other appropriate options. In response to identifying the purchase option for the content dedication request, an operation 840 may include the content dedication system processing a transaction for the content identified in operation 820.
- For example, in response to
operation 830 indicating that the user has selected the buy-to-own purchase option, the transaction processed in operation 840 may include purchasing full rights to the requested content from an appropriate content provider, wherein the billing system may then charge the user of the voice-enabled client device an appropriate amount that includes costs for purchasing the content from the content provider, adding the natural language utterance dedicating the content, tagging the dedicated content, delivering the dedicated content to the recipient, or any other appropriate services rendered for the content dedication request. Alternatively, in response to operation 830 indicating that the user has selected the pay-to-play purchase option, the transaction processed in operation 840 may include purchasing rights from the content provider that permit the purchased content to be played a predetermined number of times, wherein the billing system may charge the user in a similar manner as described above, except that the cost of purchasing limited rights to the content under the pay-to-play purchase option may be somewhat less than the cost of purchasing ownership rights to the content, as in the buy-to-own purchase option.
- In another alternative implementation, in response to determining that the requested content dedication includes a selection of the paid subscription purchase option, the user may pay a periodic service charge to the content dedication system that permits the user to make a predetermined number of content dedications in a subscription period, an unlimited number of content dedications in the subscription period, or otherwise make content dedications in accordance with terms defined in a subscription (e.g., different subscription levels having different content dedication options may be offered, such as a subscription level that only permits utterance dedications, another subscription level that further permits interpreting and parsing utterance dedications, etc.). Thus, the content transaction may be processed in
operation 840 according to the purchase option identified in operation 830, and the content dedication system may then further process the dedication request to customize the dedicated content according to criteria provided in the request previously received in operation 810.
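- A toy model of the rights obtained under each purchase option in operation 840 appears below; the names and the one-play default for pay-to-play are assumptions of the sketch.

```python
from dataclasses import dataclass

@dataclass
class PlayRights:
    plays_allowed: int | None  # None models unlimited plays

def rights_for(option: str, subscription_terms: dict | None = None) -> PlayRights:
    if option == "buy_to_own":
        return PlayRights(plays_allowed=None)   # full ownership rights
    if option == "pay_to_play":
        return PlayRights(plays_allowed=1)      # e.g., rights for a single play
    # Subscription: rights follow the terms of the selected subscription level.
    return PlayRights(plays_allowed=(subscription_terms or {}).get("plays", 1))

print(rights_for("pay_to_play"))
```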
- For example, in one implementation, an operation 850 may include the content dedication system inserting the natural language dedication utterance into the dedicated content, verbally annotating the dedicated content with the dedication utterance, or otherwise associating the dedicated content with the dedication utterance. Furthermore, an operation 860 may include the content dedication system determining whether any additional tags have been specified for the dedicated content. For example, as noted above, the dedication request may identify an image or picture to insert as album art for the dedicated content, one or more natural language utterances, non-voice inputs, and/or data inputs to be transcribed into text that can be inserted in tags for the dedicated content, or any other suitable information that can be inserted within or associated with metadata for the dedicated content. Thus, in response to determining that information for any additional tags has been provided, an operation 870 may include inserting such additional tags into the dedicated content.
- In one implementation, in response to having purchased the content requested for dedication, associating the dedication utterance with the dedicated content, and associating the additional tags (if any) with the dedicated content, the content dedication system may send a content dedication message to the recipient of the dedication in an
operation 880. For example, the content dedication message may include a Short Message Service (SMS) text message, an electronic mail message, an automated telephone call managed by a text-to-speech engine, or any other appropriate message that can be delivered to the recipient (e.g., the message may include a link that the recipient can select to stream, download, or otherwise access the dedicated content, the dedication utterance, etc.). Thus, the content dedication message may generally notify the recipient that content has been dedicated to the recipient and provide various mechanisms for the recipient to access the content dedication, as will be apparent.
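- As a purely illustrative example of such a message, with hypothetical sender, title, and access link:

```python
def compose_dedication_message(sender: str, content_title: str, access_url: str) -> str:
    return (f'{sender} has dedicated "{content_title}" to you! '
            f"Follow this link to hear the song and the spoken dedication: {access_url}")

sms_body = compose_dedication_message(
    sender="Alice", content_title="Example Song",
    access_url="https://example.com/dedication/123",
)
print(sms_body)  # the same body could go out as an SMS, an e-mail, or be read
                 # aloud by a text-to-speech engine in an automated telephone call
```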
- Implementations of the invention may be made in hardware, firmware, software, or various combinations thereof. The invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include various mechanisms for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable storage medium may include read only memory, random access memory, magnetic disk storage media, optical storage media, flash memory devices, or other storage media, and a machine-readable transmission medium may include forms of propagated signals, such as carrier waves, infrared signals, digital signals, or other transmission media. Further, firmware, software, routines, or instructions may be described in the above disclosure in terms of specific exemplary aspects and implementations of the invention and as performing certain actions. However, it will be apparent that such descriptions are merely for convenience, and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, or instructions.
- Accordingly, aspects and implementations of the invention may be described herein as including a particular feature, structure, or characteristic, but it will be apparent that not every aspect or implementation necessarily includes the particular feature, structure, or characteristic. In addition, when a particular feature, structure, or characteristic has been described in connection with a given aspect or implementation, it will be understood that such feature, structure, or characteristic may be included in connection with other aspects or implementations, whether or not explicitly described. Thus, various changes and modifications may be made to the preceding description without departing from the scope or spirit of the invention, and the specification and drawings should therefore be regarded as exemplary only, with the scope of the invention determined solely by the appended claims.
Claims (30)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/631,772 US20150170641A1 (en) | 2009-11-10 | 2015-02-25 | System and method for providing a natural language content dedication service |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US25982009P | 2009-11-10 | 2009-11-10 | |
US12/943,699 US9502025B2 (en) | 2009-11-10 | 2010-11-10 | System and method for providing a natural language content dedication service |
US14/631,772 US20150170641A1 (en) | 2009-11-10 | 2015-02-25 | System and method for providing a natural language content dedication service |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/943,699 Continuation US9502025B2 (en) | 2009-11-10 | 2010-11-10 | System and method for providing a natural language content dedication service |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150170641A1 true US20150170641A1 (en) | 2015-06-18 |
Family
ID=43974883
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/943,699 Active 2034-07-17 US9502025B2 (en) | 2009-11-10 | 2010-11-10 | System and method for providing a natural language content dedication service |
US14/631,772 Abandoned US20150170641A1 (en) | 2009-11-10 | 2015-02-25 | System and method for providing a natural language content dedication service |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/943,699 Active 2034-07-17 US9502025B2 (en) | 2009-11-10 | 2010-11-10 | System and method for providing a natural language content dedication service |
Country Status (2)
Country | Link |
---|---|
US (2) | US9502025B2 (en) |
WO (1) | WO2011059997A1 (en) |
Families Citing this family (302)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001013255A2 (en) | 1999-08-13 | 2001-02-22 | Pixo, Inc. | Displaying and traversing links in character array |
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
ITFI20010199A1 (en) | 2001-10-22 | 2003-04-22 | Riccardo Vieri | SYSTEM AND METHOD TO TRANSFORM TEXTUAL COMMUNICATIONS INTO VOICE AND SEND THEM WITH AN INTERNET CONNECTION TO ANY TELEPHONE SYSTEM |
US7398209B2 (en) | 2002-06-03 | 2008-07-08 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7693720B2 (en) | 2002-07-15 | 2010-04-06 | Voicebox Technologies, Inc. | Mobile systems and methods for responding to natural language speech utterance |
US7669134B1 (en) | 2003-05-02 | 2010-02-23 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
US20060271520A1 (en) * | 2005-05-27 | 2006-11-30 | Ragan Gene Z | Content-based implicit search query |
US7640160B2 (en) | 2005-08-05 | 2009-12-29 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7620549B2 (en) | 2005-08-10 | 2009-11-17 | Voicebox Technologies, Inc. | System and method of supporting adaptive misrecognition in conversational speech |
US7949529B2 (en) | 2005-08-29 | 2011-05-24 | Voicebox Technologies, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
US7634409B2 (en) * | 2005-08-31 | 2009-12-15 | Voicebox Technologies, Inc. | Dynamic speech sharpening |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US7633076B2 (en) | 2005-09-30 | 2009-12-15 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
ITFI20070177A1 (en) | 2007-07-26 | 2009-01-27 | Riccardo Vieri | SYSTEM FOR THE CREATION AND SETTING OF AN ADVERTISING CAMPAIGN DERIVING FROM THE INSERTION OF ADVERTISING MESSAGES WITHIN AN EXCHANGE OF MESSAGES AND METHOD FOR ITS FUNCTIONING. |
US9053089B2 (en) | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US8165886B1 (en) | 2007-10-04 | 2012-04-24 | Great Northern Research LLC | Speech interface system and method for control and interaction with applications on a computing system |
US8595642B1 (en) | 2007-10-04 | 2013-11-26 | Great Northern Research, LLC | Multiple shell multi faceted graphical user interface |
US8364694B2 (en) | 2007-10-26 | 2013-01-29 | Apple Inc. | Search assistant for digital media assets |
US8620662B2 (en) * | 2007-11-20 | 2013-12-31 | Apple Inc. | Context-aware unit selection |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US8327272B2 (en) | 2008-01-06 | 2012-12-04 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
US8065143B2 (en) | 2008-02-22 | 2011-11-22 | Apple Inc. | Providing text input using speech data and non-speech data |
US8289283B2 (en) | 2008-03-04 | 2012-10-16 | Apple Inc. | Language input interface on a device |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8589161B2 (en) | 2008-05-27 | 2013-11-19 | Voicebox Technologies, Inc. | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US8464150B2 (en) | 2008-06-07 | 2013-06-11 | Apple Inc. | Automatic language identification for dynamic text processing |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8768702B2 (en) | 2008-09-05 | 2014-07-01 | Apple Inc. | Multi-tiered voice feedback in an electronic device |
US8898568B2 (en) | 2008-09-09 | 2014-11-25 | Apple Inc. | Audio user interface |
US8352268B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for selective rate of speech and speech preferences for text to speech synthesis |
US8396714B2 (en) | 2008-09-29 | 2013-03-12 | Apple Inc. | Systems and methods for concatenation of words in text to speech synthesis |
US8355919B2 (en) | 2008-09-29 | 2013-01-15 | Apple Inc. | Systems and methods for text normalization for text to speech synthesis |
US8352272B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for text to speech synthesis |
US8712776B2 (en) | 2008-09-29 | 2014-04-29 | Apple Inc. | Systems and methods for selective text to speech synthesis |
US8583418B2 (en) | 2008-09-29 | 2013-11-12 | Apple Inc. | Systems and methods of detecting language and natural language strings for text to speech synthesis |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US8862252B2 (en) | 2009-01-30 | 2014-10-14 | Apple Inc. | Audio user interface for displayless electronic device |
US8380507B2 (en) | 2009-03-09 | 2013-02-19 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10540976B2 (en) | 2009-06-05 | 2020-01-21 | Apple Inc. | Contextual voice commands |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US20120311585A1 (en) | 2011-06-03 | 2012-12-06 | Apple Inc. | Organizing task items that represent tasks to perform |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US9171541B2 (en) | 2009-11-10 | 2015-10-27 | Voicebox Technologies Corporation | System and method for hybrid processing in a natural language voice services environment |
US20110110534A1 (en) * | 2009-11-12 | 2011-05-12 | Apple Inc. | Adjustable voice output based on device status |
US8682649B2 (en) | 2009-11-12 | 2014-03-25 | Apple Inc. | Sentiment prediction from textual data |
US8914401B2 (en) * | 2009-12-30 | 2014-12-16 | At&T Intellectual Property I, L.P. | System and method for an N-best list interface |
US8600743B2 (en) | 2010-01-06 | 2013-12-03 | Apple Inc. | Noise profile determination for voice-related feature |
US8381107B2 (en) | 2010-01-13 | 2013-02-19 | Apple Inc. | Adaptive audio feedback system and method |
US8311838B2 (en) | 2010-01-13 | 2012-11-13 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US8639516B2 (en) | 2010-06-04 | 2014-01-28 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US8713021B2 (en) | 2010-07-07 | 2014-04-29 | Apple Inc. | Unsupervised document clustering using latent semantic density analysis |
US9104670B2 (en) | 2010-07-21 | 2015-08-11 | Apple Inc. | Customized search or acquisition of digital media assets |
US8719006B2 (en) | 2010-08-27 | 2014-05-06 | Apple Inc. | Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis |
US8719014B2 (en) | 2010-09-27 | 2014-05-06 | Apple Inc. | Electronic device with text error correction based on voice recognition data |
US10515147B2 (en) | 2010-12-22 | 2019-12-24 | Apple Inc. | Using statistical language models for contextual lookup |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US8781836B2 (en) | 2011-02-22 | 2014-07-15 | Apple Inc. | Hearing assistance system for providing consistent human speech |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10672399B2 (en) | 2011-06-03 | 2020-06-02 | Apple Inc. | Switching between text data and audio data based on a mapping |
US8812294B2 (en) | 2011-06-21 | 2014-08-19 | Apple Inc. | Translating phrases from one language into another using an order-based set of declarative rules |
US8706472B2 (en) | 2011-08-11 | 2014-04-22 | Apple Inc. | Method for disambiguating multiple readings in language conversion |
US8994660B2 (en) | 2011-08-29 | 2015-03-31 | Apple Inc. | Text correction processing |
US20130054450A1 (en) * | 2011-08-31 | 2013-02-28 | Richard Lang | Monetization of Atomized Content |
US8762156B2 (en) | 2011-09-28 | 2014-06-24 | Apple Inc. | Speech recognition repair using contextual information |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
CN103365836B (en) * | 2012-04-01 | 2016-05-11 | 郭佳 | A kind of mutual implementation method and system thereof of distributed intelligence that adopts natural language |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US8775442B2 (en) | 2012-05-15 | 2014-07-08 | Apple Inc. | Semantic search using a single-source semantic model |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
WO2013185109A2 (en) | 2012-06-08 | 2013-12-12 | Apple Inc. | Systems and methods for recognizing textual identifiers within a plurality of words |
US20130346068A1 (en) * | 2012-06-25 | 2013-12-26 | Apple Inc. | Voice-Based Image Tagging and Searching |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9424233B2 (en) * | 2012-07-20 | 2016-08-23 | Veveo, Inc. | Method of and system for inferring user intent in search input in a conversational interaction system |
US9465833B2 (en) | 2012-07-31 | 2016-10-11 | Veveo, Inc. | Disambiguating user intent in conversational interaction system for large corpus information retrieval |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US8935167B2 (en) | 2012-09-25 | 2015-01-13 | Apple Inc. | Exemplar-based latent perceptual modeling for automatic speech recognition |
US9070366B1 (en) * | 2012-12-19 | 2015-06-30 | Amazon Technologies, Inc. | Architecture for multi-domain utterance processing |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US10795528B2 (en) | 2013-03-06 | 2020-10-06 | Nuance Communications, Inc. | Task assistant having multiple visual displays |
US10223411B2 (en) * | 2013-03-06 | 2019-03-05 | Nuance Communications, Inc. | Task assistant utilizing context for improved interaction |
US10783139B2 (en) | 2013-03-06 | 2020-09-22 | Nuance Communications, Inc. | Task assistant |
US9361884B2 (en) | 2013-03-11 | 2016-06-07 | Nuance Communications, Inc. | Communicating context across different components of multi-modal dialog applications |
US9171542B2 (en) | 2013-03-11 | 2015-10-27 | Nuance Communications, Inc. | Anaphora resolution using linguisitic cues, dialogue context, and general knowledge |
US9269354B2 (en) | 2013-03-11 | 2016-02-23 | Nuance Communications, Inc. | Semantic re-ranking of NLU results in conversational dialogue applications |
US9761225B2 (en) | 2013-03-11 | 2017-09-12 | Nuance Communications, Inc. | Semantic re-ranking of NLU results in conversational dialogue applications |
US10572476B2 (en) | 2013-03-14 | 2020-02-25 | Apple Inc. | Refining a search based on schedule items |
US9733821B2 (en) | 2013-03-14 | 2017-08-15 | Apple Inc. | Voice control to diagnose inadvertent activation of accessibility features |
US10642574B2 (en) | 2013-03-14 | 2020-05-05 | Apple Inc. | Device, method, and graphical user interface for outputting captions |
US9977779B2 (en) | 2013-03-14 | 2018-05-22 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
WO2014144579A1 (en) | 2013-03-15 | 2014-09-18 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
CN110096712B (en) | 2013-03-15 | 2023-06-20 | 苹果公司 | User training through intelligent digital assistant |
CN105027197B (en) | 2013-03-15 | 2018-12-14 | 苹果公司 | Training at least partly voice command system |
KR102057795B1 (en) | 2013-03-15 | 2019-12-19 | 애플 인크. | Context-sensitive handling of interruptions |
US8768687B1 (en) * | 2013-04-29 | 2014-07-01 | Google Inc. | Machine translation of indirect speech |
EP2994908B1 (en) | 2013-05-07 | 2019-08-28 | Veveo, Inc. | Incremental speech input interface with real time feedback |
US9953630B1 (en) * | 2013-05-31 | 2018-04-24 | Amazon Technologies, Inc. | Language recognition for device settings |
WO2014197336A1 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
CN110442699A (en) | 2013-06-09 | 2019-11-12 | 苹果公司 | Operate method, computer-readable medium, electronic equipment and the system of digital assistants |
KR101809808B1 (en) | 2013-06-13 | 2017-12-15 | 애플 인크. | System and method for emergency calls initiated by voice command |
US9594542B2 (en) | 2013-06-20 | 2017-03-14 | Viv Labs, Inc. | Dynamically evolving cognitive architecture system based on training by third-party developers |
US20140379336A1 (en) * | 2013-06-20 | 2014-12-25 | Atul Bhatnagar | Ear-based wearable networking device, system, and method |
US9633317B2 (en) | 2013-06-20 | 2017-04-25 | Viv Labs, Inc. | Dynamically evolving cognitive architecture system based on a natural language intent interpreter |
US10474961B2 (en) | 2013-06-20 | 2019-11-12 | Viv Labs, Inc. | Dynamically evolving cognitive architecture system based on prompting for additional user input |
US9519461B2 (en) | 2013-06-20 | 2016-12-13 | Viv Labs, Inc. | Dynamically evolving cognitive architecture system based on third-party developers |
KR102053820B1 (en) * | 2013-07-02 | 2019-12-09 | 삼성전자주식회사 | Server and control method thereof, and image processing apparatus and control method thereof |
DE112014003653B4 (en) | 2013-08-06 | 2024-04-18 | Apple Inc. | Automatically activate intelligent responses based on activities from remote devices |
US11199906B1 (en) | 2013-09-04 | 2021-12-14 | Amazon Technologies, Inc. | Global user input management |
US10885918B2 (en) | 2013-09-19 | 2021-01-05 | Microsoft Technology Licensing, Llc | Speech recognition using phoneme matching |
US10008205B2 (en) * | 2013-11-20 | 2018-06-26 | General Motors Llc | In-vehicle nametag choice using speech recognition |
US9507849B2 (en) * | 2013-11-28 | 2016-11-29 | Soundhound, Inc. | Method for combining a query and a communication command in a natural language computer system |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9601108B2 (en) | 2014-01-17 | 2017-03-21 | Microsoft Technology Licensing, Llc | Incorporating an exogenous large-vocabulary model into rule-based speech recognition |
US10749989B2 (en) * | 2014-04-01 | 2020-08-18 | Microsoft Technology Licensing Llc | Hybrid client/server architecture for parallel processing |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
EP3480811A1 (en) * | 2014-05-30 | 2019-05-08 | Apple Inc. | Multi-command single utterance input method |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
EP2980733A1 (en) * | 2014-07-31 | 2016-02-03 | Samsung Electronics Co., Ltd | Message service providing device and method of providing content via the same |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9852136B2 (en) | 2014-12-23 | 2017-12-26 | Rovi Guides, Inc. | Systems and methods for determining whether a negation statement applies to a current or past query |
US9854049B2 (en) | 2015-01-30 | 2017-12-26 | Rovi Guides, Inc. | Systems and methods for resolving ambiguous terms in social chatter based on a user profile |
US10417313B2 (en) | 2015-02-20 | 2019-09-17 | International Business Machines Corporation | Inserting links that aid action completion |
US10152299B2 (en) | 2015-03-06 | 2018-12-11 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US20160283876A1 (en) * | 2015-03-24 | 2016-09-29 | Tata Consultancy Services Limited | System and method for providing automomous contextual information life cycle management |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
EP3089159B1 (en) * | 2015-04-28 | 2019-08-28 | Google LLC | Correcting voice recognition using selective re-speak |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US9870196B2 (en) * | 2015-05-27 | 2018-01-16 | Google Llc | Selective aborting of online processing of voice inputs in a voice-enabled electronic device |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US9966073B2 (en) * | 2015-05-27 | 2018-05-08 | Google Llc | Context-sensitive dynamic update of voice to text model in a voice-enabled electronic device |
US10083697B2 (en) * | 2015-05-27 | 2018-09-25 | Google Llc | Local persisting of data for selectively offline capable voice action in a voice-enabled electronic device |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10742700B2 (en) * | 2015-06-24 | 2020-08-11 | Leo T. ABBE | User assembled content delivered in a media stream |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
CN105070288B (en) * | 2015-07-02 | 2018-08-07 | 百度在线网络技术(北京)有限公司 | Vehicle-mounted voice instruction identification method and device |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US10171403B2 (en) * | 2015-11-30 | 2019-01-01 | International Business Machines Corporation | Determining intended electronic message recipients via linguistic profiles |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US9779735B2 (en) * | 2016-02-24 | 2017-10-03 | Google Inc. | Methods and systems for detecting and processing speech signals |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179309B1 (en) | 2016-06-09 | 2018-04-23 | Apple Inc | Intelligent automated assistant in a home environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
US10606952B2 (en) | 2016-06-24 | 2020-03-31 | Elemental Cognition Llc | Architecture and processes for computer learning and understanding |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10540513B2 (en) | 2016-09-13 | 2020-01-21 | Microsoft Technology Licensing, Llc | Natural language processor extension transmission data protection |
US10503767B2 (en) * | 2016-09-13 | 2019-12-10 | Microsoft Technology Licensing, Llc | Computerized natural language query intent dispatching |
US10382440B2 (en) * | 2016-09-22 | 2019-08-13 | International Business Machines Corporation | Method to allow for question and answer system to dynamically return different responses based on roles |
US10754969B2 (en) | 2016-09-22 | 2020-08-25 | International Business Machines Corporation | Method to allow for question and answer system to dynamically return different responses based on roles |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
KR102389625B1 (en) * | 2017-04-30 | 2022-04-25 | 삼성전자주식회사 | Electronic apparatus for processing user utterance and controlling method thereof |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
DK180048B1 (en) | 2017-05-11 | 2020-02-04 | Apple Inc. | MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION |
DK201770428A1 (en) | 2017-05-12 | 2019-02-18 | Apple Inc. | Low-latency intelligent automated assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US20180336275A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
DK179549B1 (en) * | 2017-05-16 | 2019-02-12 | Apple Inc. | Far-field extension for digital assistant services |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10657965B2 (en) * | 2017-07-31 | 2020-05-19 | Bose Corporation | Conversational audio assistant |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10504513B1 (en) | 2017-09-26 | 2019-12-10 | Amazon Technologies, Inc. | Natural language understanding with affiliated devices |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
EP4273696A3 (en) | 2017-10-03 | 2024-01-03 | Google LLC | Multiple digital assistant coordination in vehicular environments |
US20190146491A1 (en) * | 2017-11-10 | 2019-05-16 | GM Global Technology Operations LLC | In-vehicle system to communicate with passengers |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | Dismissal of attention-aware virtual assistant
US11076039B2 (en) | 2018-06-03 | 2021-07-27 | Apple Inc. | Accelerated task performance |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US10831442B2 (en) * | 2018-10-19 | 2020-11-10 | International Business Machines Corporation | Digital assistant user interface amalgamation |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
DK201970510A1 (en) | 2019-05-31 | 2021-02-11 | Apple Inc | Voice identification in digital assistant systems |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | User activity shortcut suggestions |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11538468B2 (en) * | 2019-09-12 | 2022-12-27 | Oracle International Corporation | Using semantic frames for intent classification |
WO2021056255A1 (en) | 2019-09-25 | 2021-04-01 | Apple Inc. | Text detection using global geometry estimators |
US11195534B1 (en) * | 2020-03-30 | 2021-12-07 | Amazon Technologies, Inc. | Permissioning for natural language processing systems |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US11043220B1 (en) | 2020-05-11 | 2021-06-22 | Apple Inc. | Digital assistant hardware abstraction |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US20220044676A1 (en) * | 2020-08-04 | 2022-02-10 | Bank Of America Corporation | Determination of user intent using contextual analysis |
US11829720B2 (en) | 2020-09-01 | 2023-11-28 | Apple Inc. | Analysis and validation of language models |
US11533283B1 (en) * | 2020-11-16 | 2022-12-20 | Amazon Technologies, Inc. | Voice user interface sharing of content |
CN117252730B (en) * | 2023-11-17 | 2024-03-19 | Zhejiang Koubei Network Technology Co., Ltd. | Service subscription processing system, service subscription information processing method and device
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
US5892900A (en) * | 1996-08-30 | 1999-04-06 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US20020032752A1 (en) * | 2000-06-09 | 2002-03-14 | Gold Elliot M. | Method and system for electronic song dedication |
US6704576B1 (en) * | 2000-09-27 | 2004-03-09 | At&T Corp. | Method and system for communicating multimedia content in a unicast, multicast, simulcast or broadcast environment |
US20040133793A1 (en) * | 1995-02-13 | 2004-07-08 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US20040193420A1 (en) * | 2002-07-15 | 2004-09-30 | Kennewick Robert A. | Mobile systems and methods for responding to natural language speech utterance |
US20040199387A1 (en) * | 2000-07-31 | 2004-10-07 | Wang Avery Li-Chun | Method and system for purchasing pre-recorded music |
US20070174258A1 (en) * | 2006-01-23 | 2007-07-26 | Jones Scott A | Targeted mobile device advertisements |
US20070266257A1 (en) * | 2004-07-15 | 2007-11-15 | Allan Camaisa | System and method for blocking unauthorized network log in using stolen password |
US20070276651A1 (en) * | 2006-05-23 | 2007-11-29 | Motorola, Inc. | Grammar adaptation through cooperative client and server based speech recognition |
US20080010135A1 (en) * | 2006-07-10 | 2008-01-10 | Realnetworks, Inc. | Digital media content device incentive and provisioning method |
US20080032622A1 (en) * | 2004-04-07 | 2008-02-07 | Nokia Corporation | Mobile station and interface adapted for feature extraction from an input media sample |
US20080140385A1 (en) * | 2006-12-07 | 2008-06-12 | Microsoft Corporation | Using automated content analysis for audio/video content consumption |
US20080189110A1 (en) * | 2007-02-06 | 2008-08-07 | Tom Freeman | System and method for selecting and presenting advertisements based on natural language processing of voice-based input |
US20100064025A1 (en) * | 2008-09-10 | 2010-03-11 | Nokia Corporation | Method and Apparatus for Providing Media Service |
US20100076778A1 (en) * | 2008-09-25 | 2010-03-25 | Kondrk Robert H | Method and System for Providing and Maintaining Limited-Subscriptions to Digital Media Assets |
US7706616B2 (en) * | 2004-02-27 | 2010-04-27 | International Business Machines Corporation | System and method for recognizing word patterns in a very large vocabulary based on a virtual keyboard layout |
Family Cites Families (451)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
US4430669A (en) | 1981-05-29 | 1984-02-07 | Payview Limited | Transmitting and receiving apparatus for permitting the transmission and reception of multi-tier subscription programs |
US5208748A (en) | 1985-11-18 | 1993-05-04 | Action Technologies, Inc. | Method and apparatus for structuring and managing human communications by explicitly defining the types of communications permitted between participants |
US5027406A (en) | 1988-12-06 | 1991-06-25 | Dragon Systems, Inc. | Method for interactive speech recognition and training |
SE466029B (en) | 1989-03-06 | 1991-12-02 | Ibm Svenska Ab | Device and procedure for analysis of natural languages in a computer-based information processing system
JPH03129469A (en) | 1989-10-14 | 1991-06-03 | Canon Inc | Natural language processor |
JP3266246B2 (en) | 1990-06-15 | 2002-03-18 | International Business Machines Corporation | Natural language analysis apparatus and method, and knowledge base construction method for natural language analysis
US5722084A (en) | 1990-09-28 | 1998-02-24 | At&T Corp. | Cellular/PCS handset NAM download capability using a wide-area paging system |
DE69116167D1 (en) | 1990-11-27 | 1996-02-15 | Gordon M Jacobs | Digital data converter
US5274560A (en) | 1990-12-03 | 1993-12-28 | Audio Navigation Systems, Inc. | Sensor free vehicle navigation system utilizing a voice input/output interface for routing a driver from his source point to his destination point |
DE69232407T2 (en) | 1991-11-18 | 2002-09-12 | Toshiba Kawasaki Kk | Speech dialogue system to facilitate computer-human interaction |
US5608635A (en) | 1992-04-14 | 1997-03-04 | Zexel Corporation | Navigation system for a vehicle with route recalculation between multiple locations |
CA2102077C (en) | 1992-12-21 | 1997-09-16 | Steven Lloyd Greenspan | Call billing and measurement methods for redirected calls |
US5471318A (en) | 1993-04-22 | 1995-11-28 | At&T Corp. | Multimedia communications network |
US5377350A (en) | 1993-04-30 | 1994-12-27 | International Business Machines Corporation | System for cooperative communication between local object managers to provide verification for the performance of remote calls by object messages |
US5537436A (en) | 1993-06-14 | 1996-07-16 | At&T Corp. | Simultaneous analog and digital communication applications |
US5983161A (en) | 1993-08-11 | 1999-11-09 | Lemelson; Jerome H. | GPS vehicle collision avoidance warning and control system and method |
DE69423838T2 (en) | 1993-09-23 | 2000-08-03 | Xerox Corp | Semantic match event filtering for speech recognition and signal translation applications |
US5475733A (en) | 1993-11-04 | 1995-12-12 | At&T Corp. | Language accommodated message relaying for hearing impaired callers |
CA2118278C (en) | 1993-12-21 | 1999-09-07 | J. David Garland | Multimedia system |
US5748841A (en) | 1994-02-25 | 1998-05-05 | Morin; Philippe | Supervised contextual language acquisition system |
US5533108A (en) | 1994-03-18 | 1996-07-02 | At&T Corp. | Method and system for routing phone calls based on voice and data transport capability |
US5488652A (en) | 1994-04-14 | 1996-01-30 | Northern Telecom Limited | Method and apparatus for training speech recognition algorithms for directory assistance applications |
US5752052A (en) | 1994-06-24 | 1998-05-12 | Microsoft Corporation | Method and system for bootstrapping statistical processing into a rule-based natural language parser |
JP2674521B2 (en) | 1994-09-21 | 1997-11-12 | NEC Corporation | Mobile object guidance device
US5539744A (en) | 1994-10-17 | 1996-07-23 | At&T Corp. | Hand-off management for cellular telephony |
US5696965A (en) | 1994-11-03 | 1997-12-09 | Intel Corporation | Electronic information appraisal agent |
JP2855409B2 (en) | 1994-11-17 | 1999-02-10 | IBM Japan, Ltd. | Natural language processing method and system
US6571279B1 (en) | 1997-12-05 | 2003-05-27 | Pinpoint Incorporated | Location enhanced information delivery system |
US5499289A (en) | 1994-12-06 | 1996-03-12 | At&T Corp. | Systems, methods and articles of manufacture for performing distributed telecommunications |
US5748974A (en) | 1994-12-13 | 1998-05-05 | International Business Machines Corporation | Multimodal natural language interface for cross-application tasks |
US5774859A (en) | 1995-01-03 | 1998-06-30 | Scientific-Atlanta, Inc. | Information system having a speech interface |
US5794050A (en) | 1995-01-04 | 1998-08-11 | Intelligent Text Processing, Inc. | Natural language understanding system |
US5918222A (en) | 1995-03-17 | 1999-06-29 | Kabushiki Kaisha Toshiba | Information disclosing apparatus and multi-modal information input/output system |
US6965864B1 (en) | 1995-04-10 | 2005-11-15 | Texas Instruments Incorporated | Voice activated hypermedia systems using grammatical metadata |
WO1996037881A2 (en) | 1995-05-26 | 1996-11-28 | Applied Language Technologies | Method and apparatus for dynamic adaptation of a large vocabulary speech recognition system and for use of constraints from a database in a large vocabulary speech recognition system |
US5708422A (en) | 1995-05-31 | 1998-01-13 | At&T | Transaction authorization and alert system |
JP3716870B2 (en) | 1995-05-31 | 2005-11-16 | Sony Corporation | Speech recognition apparatus and speech recognition method
US5721938A (en) | 1995-06-07 | 1998-02-24 | Stuckey; Barbara K. | Method and device for parsing and analyzing natural language sentences and text |
US5617407A (en) | 1995-06-21 | 1997-04-01 | Bareis; Monica M. | Optical disk having speech recognition templates for information access |
US5794196A (en) | 1995-06-30 | 1998-08-11 | Kurzweil Applied Intelligence, Inc. | Speech recognition system distinguishing dictation from commands by arbitration between continuous speech and isolated word modules |
US6292767B1 (en) | 1995-07-18 | 2001-09-18 | Nuance Communications | Method and system for building and running natural language understanding systems |
US5963940A (en) | 1995-08-16 | 1999-10-05 | Syracuse University | Natural language information retrieval system and method |
US5911120A (en) | 1995-09-08 | 1999-06-08 | At&T Wireless Services | Wireless communication system having mobile stations establish a communication link through the base station without using a landline or regional cellular network and without a call in progress |
US5675629A (en) | 1995-09-08 | 1997-10-07 | At&T | Cordless cellular system base station |
US5855000A (en) | 1995-09-08 | 1998-12-29 | Carnegie Mellon University | Method and apparatus for correcting and repairing machine-transcribed input using independent or cross-modal secondary input |
US6192110B1 (en) | 1995-09-15 | 2001-02-20 | At&T Corp. | Method and apparatus for generating semantically consistent inputs to a dialog manager
US5799276A (en) | 1995-11-07 | 1998-08-25 | Accent Incorporated | Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals |
US5960447A (en) | 1995-11-13 | 1999-09-28 | Holt; Douglas | Word tagging and editing system for speech recognition |
DE69631955T2 (en) | 1995-12-15 | 2005-01-05 | Koninklijke Philips Electronics N.V. | Method and circuit for adaptive noise reduction and transmitter receiver
US6567778B1 (en) | 1995-12-21 | 2003-05-20 | Nuance Communications | Natural language speech recognition using slot semantic confidence scores related to their word recognition confidence scores |
US5832221A (en) | 1995-12-29 | 1998-11-03 | At&T Corp | Universal message storage system |
US5742763A (en) | 1995-12-29 | 1998-04-21 | At&T Corp. | Universal message delivery system for handles identifying network presences |
US5633922A (en) | 1995-12-29 | 1997-05-27 | At&T | Process and apparatus for restarting call routing in a telephone network |
US5802510A (en) | 1995-12-29 | 1998-09-01 | At&T Corp | Universal directory service |
US5987404A (en) | 1996-01-29 | 1999-11-16 | International Business Machines Corporation | Statistical natural language understanding using hidden clumpings |
US6314420B1 (en) | 1996-04-04 | 2001-11-06 | Lycos, Inc. | Collaborative/adaptive search engine |
US5848396A (en) | 1996-04-26 | 1998-12-08 | Freedom Of Information, Inc. | Method and apparatus for determining behavioral profile of a computer user |
US5878386A (en) | 1996-06-28 | 1999-03-02 | Microsoft Corporation | Natural language parser with dictionary-based part-of-speech probabilities |
US5953393A (en) | 1996-07-15 | 1999-09-14 | At&T Corp. | Personal telephone agent |
US6009382A (en) | 1996-08-19 | 1999-12-28 | International Business Machines Corporation | Word storage table for natural language determination |
US5867817A (en) | 1996-08-19 | 1999-02-02 | Virtual Vision, Inc. | Speech recognition manager |
US6385646B1 (en) | 1996-08-23 | 2002-05-07 | At&T Corp. | Method and system for establishing voice communications in an internet environment |
US6470315B1 (en) | 1996-09-11 | 2002-10-22 | Texas Instruments Incorporated | Enrollment and modeling method and apparatus for robust speaker dependent speech models |
US5878385A (en) | 1996-09-16 | 1999-03-02 | Ergo Linguistic Technologies | Method and apparatus for universal parsing of language |
US6085186A (en) | 1996-09-20 | 2000-07-04 | Netbot, Inc. | Method and system using information written in a wrapper description language to execute query on a network |
EP0863466A4 (en) | 1996-09-26 | 2005-07-20 | Mitsubishi Electric Corp | Interactive processor |
US5892813A (en) | 1996-09-30 | 1999-04-06 | Matsushita Electric Industrial Co., Ltd. | Multimodal voice dialing digital key telephone with dialog manager |
US5995928A (en) | 1996-10-02 | 1999-11-30 | Speechworks International, Inc. | Method and apparatus for continuous spelling speech recognition with early identification |
US5902347A (en) | 1996-11-19 | 1999-05-11 | American Navigation Systems, Inc. | Hand-held GPS-mapping device |
US5839107A (en) | 1996-11-29 | 1998-11-17 | Northern Telecom Limited | Method and apparatus for automatically generating a speech recognition vocabulary from a white pages listing |
US6154526A (en) | 1996-12-04 | 2000-11-28 | Intellivoice Communications, Inc. | Data acquisition and error correcting speech recognition system |
US5960399A (en) | 1996-12-24 | 1999-09-28 | Gte Internetworking Incorporated | Client/server speech processor/recognizer |
US6456974B1 (en) | 1997-01-06 | 2002-09-24 | Texas Instruments Incorporated | System and method for adding speech recognition capabilities to Java
US6122613A (en) * | 1997-01-30 | 2000-09-19 | Dragon Systems, Inc. | Speech recognition using multiple recognizers (selectively) applied to the same input sample |
JPH10254486A (en) | 1997-03-13 | 1998-09-25 | Canon Inc | Speech recognition device and method therefor |
GB2323693B (en) | 1997-03-27 | 2001-09-26 | Forum Technology Ltd | Speech to text conversion |
US6167377A (en) | 1997-03-28 | 2000-12-26 | Dragon Systems, Inc. | Speech recognition language models |
FR2761837B1 (en) | 1997-04-08 | 1999-06-11 | Sophie Sommelet | Navigation aid device having a distributed Internet-based architecture
US6014559A (en) | 1997-04-10 | 2000-01-11 | At&T Wireless Services, Inc. | Method and system for delivering a voice mail notification to a private base station using cellular phone network |
US6078886A (en) | 1997-04-14 | 2000-06-20 | At&T Corporation | System and method for providing remote automatic speech recognition services via a packet network |
US6058187A (en) | 1997-04-17 | 2000-05-02 | At&T Corp. | Secure telecommunications data transmission |
US5895464A (en) | 1997-04-30 | 1999-04-20 | Eastman Kodak Company | Computer program product and a method for using natural language for the description, search and retrieval of multi-media objects |
US6173266B1 (en) | 1997-05-06 | 2001-01-09 | Speechworks International, Inc. | System and method for developing interactive speech applications |
US6128369A (en) | 1997-05-14 | 2000-10-03 | A.T.&T. Corp. | Employing customer premises equipment in communications network maintenance |
US5960397A (en) | 1997-05-27 | 1999-09-28 | At&T Corp | System and method of recognizing an acoustic environment to adapt a set of based recognition models to the current acoustic environment for subsequent speech recognition |
US5995119A (en) | 1997-06-06 | 1999-11-30 | At&T Corp. | Method for generating photo-realistic animated characters |
US6199043B1 (en) | 1997-06-24 | 2001-03-06 | International Business Machines Corporation | Conversation management in speech recognition interfaces |
FI972723A0 (en) | 1997-06-24 | 1997-06-24 | Nokia Mobile Phones Ltd | Mobile communications services |
US6101241A (en) | 1997-07-16 | 2000-08-08 | At&T Corp. | Telephone-based speech recognition for data collection |
US5926784A (en) | 1997-07-17 | 1999-07-20 | Microsoft Corporation | Method and system for natural language parsing using podding |
US5933822A (en) | 1997-07-22 | 1999-08-03 | Microsoft Corporation | Apparatus and methods for an information retrieval system that employs natural language processing of search results to improve overall precision |
US6275231B1 (en) | 1997-08-01 | 2001-08-14 | American Calcar Inc. | Centralized control and management system for automobiles |
US6044347A (en) | 1997-08-05 | 2000-03-28 | Lucent Technologies Inc. | Methods and apparatus for object-oriented rule-based dialogue management
US6144667A (en) | 1997-08-07 | 2000-11-07 | At&T Corp. | Network-based method and apparatus for initiating and completing a telephone call via the internet |
US6192338B1 (en) | 1997-08-12 | 2001-02-20 | At&T Corp. | Natural language knowledge servers as network resources |
US6360234B2 (en) | 1997-08-14 | 2002-03-19 | Virage, Inc. | Video cataloger system with synchronized encoders |
US5895466A (en) | 1997-08-19 | 1999-04-20 | At&T Corp | Automated natural language understanding customer service system |
US6081774A (en) | 1997-08-22 | 2000-06-27 | Novell, Inc. | Natural language information retrieval system and method |
US6018708A (en) | 1997-08-26 | 2000-01-25 | Nortel Networks Corporation | Method and apparatus for performing speech recognition utilizing a supplementary lexicon of frequently used orthographies |
US6076059A (en) | 1997-08-29 | 2000-06-13 | Digital Equipment Corporation | Method for aligning text with audio signals |
US6049602A (en) | 1997-09-18 | 2000-04-11 | At&T Corp | Virtual call center |
US6650747B1 (en) | 1997-09-18 | 2003-11-18 | At&T Corp. | Control of merchant application by system monitor in virtual contact center |
DE19742054A1 (en) | 1997-09-24 | 1999-04-01 | Philips Patentverwaltung | Input system at least for place and/or street names
US5897613A (en) | 1997-10-08 | 1999-04-27 | Lucent Technologies Inc. | Efficient transmission of voice silence intervals |
US6134235A (en) | 1997-10-08 | 2000-10-17 | At&T Corp. | Pots/packet bridge |
US6272455B1 (en) | 1997-10-22 | 2001-08-07 | Lucent Technologies, Inc. | Method and apparatus for understanding natural language |
JPH11126090A (en) | 1997-10-23 | 1999-05-11 | Pioneer Electron Corp | Method and device for recognizing voice, and recording medium recorded with program for operating voice recognition device |
US6021384A (en) | 1997-10-29 | 2000-02-01 | At&T Corp. | Automatic generation of superwords |
US6498797B1 (en) | 1997-11-14 | 2002-12-24 | At&T Corp. | Method and apparatus for communication services on a network |
US6188982B1 (en) | 1997-12-01 | 2001-02-13 | Industrial Technology Research Institute | On-line background noise adaptation of parallel model combination HMM with discriminative learning using weighted HMM for noisy speech recognition |
US6614773B1 (en) | 1997-12-02 | 2003-09-02 | At&T Corp. | Packet transmissions over cellular radio |
US6219346B1 (en) | 1997-12-02 | 2001-04-17 | At&T Corp. | Packet switching architecture in cellular radio |
US5970412A (en) | 1997-12-02 | 1999-10-19 | Maxemchuk; Nicholas Frank | Overload control in a packet-switching cellular environment |
US6195634B1 (en) | 1997-12-24 | 2001-02-27 | Nortel Networks Corporation | Selection of decoys for non-vocabulary utterances rejection |
US6301560B1 (en) | 1998-01-05 | 2001-10-09 | Microsoft Corporation | Discrete speech recognition system with ballooning active grammar |
US6278377B1 (en) | 1999-08-25 | 2001-08-21 | Donnelly Corporation | Indicator for vehicle accessory |
US6226612B1 (en) | 1998-01-30 | 2001-05-01 | Motorola, Inc. | Method of evaluating an utterance in a speech recognition system |
US6385596B1 (en) | 1998-02-06 | 2002-05-07 | Liquid Audio, Inc. | Secure online music distribution system |
US6160883A (en) | 1998-03-04 | 2000-12-12 | At&T Corporation | Telecommunications network system and method |
JP2002507010A (en) | 1998-03-09 | 2002-03-05 | Lernout & Hauspie Speech Products N.V. | Apparatus and method for simultaneous multi-mode dictation
US6119087A (en) | 1998-03-13 | 2000-09-12 | Nuance Communications | System architecture for and method of voice processing |
US6233559B1 (en) | 1998-04-01 | 2001-05-15 | Motorola, Inc. | Speech control of multiple applications using applets |
US6420975B1 (en) | 1999-08-25 | 2002-07-16 | Donnelly Corporation | Interior rearview mirror sound processing system |
US6173279B1 (en) | 1998-04-09 | 2001-01-09 | At&T Corp. | Method of using a natural language interface to retrieve information from one or more data resources |
US6144938A (en) | 1998-05-01 | 2000-11-07 | Sun Microsystems, Inc. | Voice user interface with personality |
US6574597B1 (en) | 1998-05-08 | 2003-06-03 | At&T Corp. | Fully expanded context-dependent networks for speech recognition |
US6236968B1 (en) | 1998-05-14 | 2001-05-22 | International Business Machines Corporation | Sleep prevention dialog based car system |
US20070094223A1 (en) | 1998-05-28 | 2007-04-26 | Lawrence Au | Method and system for using contextual meaning in voice to text conversion |
WO1999063456A1 (en) | 1998-06-04 | 1999-12-09 | Matsushita Electric Industrial Co., Ltd. | Language conversion rule preparing device, language conversion device and program recording medium |
US6219643B1 (en) | 1998-06-26 | 2001-04-17 | Nuance Communications, Inc. | Method of analyzing dialogs in a natural language speech recognition system |
US6175858B1 (en) | 1998-07-13 | 2001-01-16 | At&T Corp. | Intelligent network messaging agent and method |
US6553372B1 (en) | 1998-07-13 | 2003-04-22 | Microsoft Corporation | Natural language information retrieval system |
US6393428B1 (en) | 1998-07-13 | 2002-05-21 | Microsoft Corporation | Natural language information retrieval system |
US6269336B1 (en) | 1998-07-24 | 2001-07-31 | Motorola, Inc. | Voice browser for interactive services and methods thereof |
US6539348B1 (en) | 1998-08-24 | 2003-03-25 | Virtual Research Associates, Inc. | Systems and methods for parsing a natural language sentence |
US6208964B1 (en) | 1998-08-31 | 2001-03-27 | Nortel Networks Limited | Method and apparatus for providing unsupervised adaptation of transcriptions |
US6499013B1 (en) | 1998-09-09 | 2002-12-24 | One Voice Technologies, Inc. | Interactive user interface using speech recognition and natural language processing |
US6434524B1 (en) | 1998-09-09 | 2002-08-13 | One Voice Technologies, Inc. | Object interactive user interface using speech recognition and natural language processing |
US6049607A (en) | 1998-09-18 | 2000-04-11 | Lamar Signal Processing | Interference canceling method and apparatus |
US6405170B1 (en) | 1998-09-22 | 2002-06-11 | Speechworks International, Inc. | Method and system of reviewing the behavior of an interactive speech recognition application |
US6606598B1 (en) | 1998-09-22 | 2003-08-12 | Speechworks International, Inc. | Statistical computing and reporting for interactive speech applications |
IL142363A0 (en) | 1998-10-02 | 2002-03-10 | Ibm | System and method for providing network coordinated conversational services |
US7003463B1 (en) | 1998-10-02 | 2006-02-21 | International Business Machines Corporation | System and method for providing network coordinated conversational services |
US6185535B1 (en) | 1998-10-16 | 2001-02-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Voice control of a user interface to service applications |
CA2347760A1 (en) | 1998-10-21 | 2000-04-27 | American Calcar Inc. | Positional camera and GPS data interchange device
US6453292B2 (en) | 1998-10-28 | 2002-09-17 | International Business Machines Corporation | Command boundary identifier for conversational natural language |
US6477200B1 (en) | 1998-11-09 | 2002-11-05 | Broadcom Corporation | Multi-pair gigabit ethernet transceiver |
US8121891B2 (en) | 1998-11-12 | 2012-02-21 | Accenture Global Services Gmbh | Personalized product report |
US6208972B1 (en) | 1998-12-23 | 2001-03-27 | Richard Grant | Method for integrating computer processes with an interface controlled by voice actuated grammars |
US6195651B1 (en) | 1998-11-19 | 2001-02-27 | Andersen Consulting Properties Bv | System, method and article of manufacture for a tuned user application experience |
US6246981B1 (en) | 1998-11-25 | 2001-06-12 | International Business Machines Corporation | Natural language task-oriented dialog manager and method |
US7881936B2 (en) | 1998-12-04 | 2011-02-01 | Tegic Communications, Inc. | Multimodal disambiguation of speech recognition |
US6430285B1 (en) | 1998-12-15 | 2002-08-06 | At&T Corp. | Method and apparatus for an automated caller interaction system |
US6721001B1 (en) | 1998-12-16 | 2004-04-13 | International Business Machines Corporation | Digital camera with voice recognition annotation |
US6233556B1 (en) | 1998-12-16 | 2001-05-15 | Nuance Communications | Voice processing and verification system |
US6754485B1 (en) | 1998-12-23 | 2004-06-22 | American Calcar Inc. | Technique for effectively providing maintenance and information to vehicles |
US6570555B1 (en) | 1998-12-30 | 2003-05-27 | Fuji Xerox Co., Ltd. | Method and apparatus for embodied conversational characters with multimodal input/output in an interface device |
US6757718B1 (en) | 1999-01-05 | 2004-06-29 | Sri International | Mobile navigation of network-based electronic information using spoken input |
US6851115B1 (en) | 1999-01-05 | 2005-02-01 | Sri International | Software-based architecture for communication and cooperation among distributed electronic agents |
US6742021B1 (en) | 1999-01-05 | 2004-05-25 | Sri International, Inc. | Navigating network-based electronic information using spoken input with multimodal error feedback |
US6523061B1 (en) | 1999-01-05 | 2003-02-18 | Sri International, Inc. | System, method, and article of manufacture for agent-based navigation in a speech-based data navigation system |
US6429813B2 (en) | 1999-01-14 | 2002-08-06 | Navigation Technologies Corp. | Method and system for providing end-user preferences with a navigation system |
US6567797B1 (en) | 1999-01-26 | 2003-05-20 | Xerox Corporation | System and method for providing recommendations based on multi-modal user clusters |
GB2361339B (en) | 1999-01-27 | 2003-08-06 | Kent Ridge Digital Labs | Method and apparatus for voice annotation and retrieval of multimedia data |
US6556970B1 (en) | 1999-01-28 | 2003-04-29 | Denso Corporation | Apparatus for determining appropriate series of words carrying information to be recognized |
US6278968B1 (en) | 1999-01-29 | 2001-08-21 | Sony Corporation | Method and apparatus for adaptive speech recognition hypothesis construction and selection in a spoken language translation system |
US6430531B1 (en) | 1999-02-04 | 2002-08-06 | Soliloquy, Inc. | Bilateral speech system |
US6643620B1 (en) | 1999-03-15 | 2003-11-04 | Matsushita Electric Industrial Co., Ltd. | Voice activated controller for recording and retrieving audio/video programs |
JP4176228B2 (en) | 1999-03-15 | 2008-11-05 | Toshiba Corporation | Natural language dialogue apparatus and natural language dialogue method
US6631346B1 (en) | 1999-04-07 | 2003-10-07 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for natural language parsing using multiple passes and tags |
US6233561B1 (en) | 1999-04-12 | 2001-05-15 | Matsushita Electric Industrial Co., Ltd. | Method for goal-oriented speech translation in hand-held devices using meaning extraction and dialogue |
US6408272B1 (en) | 1999-04-12 | 2002-06-18 | General Magic, Inc. | Distributed voice user interface |
US6570964B1 (en) | 1999-04-16 | 2003-05-27 | Nuance Communications | Technique for recognizing telephone numbers and other spoken information embedded in voice messages stored in a voice messaging system |
US6434523B1 (en) | 1999-04-23 | 2002-08-13 | Nuance Communications | Creating and editing grammars for speech recognition graphically |
US6314402B1 (en) | 1999-04-23 | 2001-11-06 | Nuance Communications | Method and apparatus for creating modifiable and combinable speech objects for acquiring information from a speaker in an interactive voice response system |
US6804638B2 (en) | 1999-04-30 | 2004-10-12 | Recent Memory Incorporated | Device and method for selective recall and preservation of events prior to decision to record the events |
US6356869B1 (en) * | 1999-04-30 | 2002-03-12 | Nortel Networks Limited | Method and apparatus for discourse management |
US6505155B1 (en) * | 1999-05-06 | 2003-01-07 | International Business Machines Corporation | Method and system for automatically adjusting prompt feedback based on predicted recognition accuracy |
US6308151B1 (en) | 1999-05-14 | 2001-10-23 | International Business Machines Corp. | Method and system using a speech recognition system to dictate a body of text in response to an available body of text |
US6604075B1 (en) | 1999-05-20 | 2003-08-05 | Lucent Technologies Inc. | Web-based voice dialog interface |
US6584439B1 (en) | 1999-05-21 | 2003-06-24 | Winbond Electronics Corporation | Method and apparatus for controlling voice controlled devices |
GB9911971D0 (en) | 1999-05-21 | 1999-07-21 | Canon Kk | A system, a server for a system and a machine for use in a system |
US20020032564A1 (en) | 2000-04-19 | 2002-03-14 | Farzad Ehsani | Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface |
US6374214B1 (en) | 1999-06-24 | 2002-04-16 | International Business Machines Corp. | Method and apparatus for excluding text phrases during re-dictation in a speech recognition system |
DE60026637T2 (en) | 1999-06-30 | 2006-10-05 | International Business Machines Corp. | Method for expanding the vocabulary of a speech recognition system |
US7069220B2 (en) | 1999-08-13 | 2006-06-27 | International Business Machines Corporation | Method for determining and maintaining dialog focus in a conversational speech system |
US6377913B1 (en) | 1999-08-13 | 2002-04-23 | International Business Machines Corporation | Method and system for multi-client access to a dialog system |
US6513006B2 (en) | 1999-08-26 | 2003-01-28 | Matsushita Electronic Industrial Co., Ltd. | Automatic control of household activity using speech recognition and natural language |
US6415257B1 (en) | 1999-08-26 | 2002-07-02 | Matsushita Electric Industrial Co., Ltd. | System for identifying and adapting a TV-user profile by means of speech technology |
US6901366B1 (en) | 1999-08-26 | 2005-05-31 | Matsushita Electric Industrial Co., Ltd. | System and method for assessing TV-related information over the internet |
EP1083545A3 (en) | 1999-09-09 | 2001-09-26 | Xanavi Informatics Corporation | Voice recognition of proper names in a navigation apparatus |
US6658388B1 (en) | 1999-09-10 | 2003-12-02 | International Business Machines Corporation | Personality generator for conversational systems |
US7340040B1 (en) | 1999-09-13 | 2008-03-04 | Microstrategy, Incorporated | System and method for real-time, personalized, dynamic, interactive voice services for corporate-analysis related information |
US6850603B1 (en) | 1999-09-13 | 2005-02-01 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized dynamic and interactive voice services |
US6631351B1 (en) | 1999-09-14 | 2003-10-07 | Aidentity Matrix | Smart toys |
US6601026B2 (en) | 1999-09-17 | 2003-07-29 | Discern Communications, Inc. | Information retrieval by natural language querying |
US6587858B1 (en) | 1999-09-30 | 2003-07-01 | Steven Paul Strazza | Systems and methods for the control of dynamic data and request criteria in a data repository |
US6937977B2 (en) | 1999-10-05 | 2005-08-30 | Fastmobile, Inc. | Method and apparatus for processing an input speech signal during presentation of an output audio signal |
US6442522B1 (en) | 1999-10-12 | 2002-08-27 | International Business Machines Corporation | Bi-directional natural language system for interfacing with multiple back-end applications |
US6721697B1 (en) | 1999-10-18 | 2004-04-13 | Sony Corporation | Method and system for reducing lexical ambiguity |
EP1222655A1 (en) | 1999-10-19 | 2002-07-17 | Sony Electronics Inc. | Natural language interface control system |
US6581103B1 (en) | 1999-10-22 | 2003-06-17 | Dedicated Radio, Llc | Method for internet radio broadcasting including listener requests of audio and/or video files with input dedications |
US6594367B1 (en) | 1999-10-25 | 2003-07-15 | Andrea Electronics Corporation | Super directional beamforming design and implementation |
US7107218B1 (en) | 1999-10-29 | 2006-09-12 | British Telecommunications Public Limited Company | Method and apparatus for processing queries |
US6622119B1 (en) | 1999-10-30 | 2003-09-16 | International Business Machines Corporation | Adaptive command predictor and method for a natural language dialog system |
US6526140B1 (en) | 1999-11-03 | 2003-02-25 | Tellabs Operations, Inc. | Consolidated voice activity detection and noise estimation |
US6681206B1 (en) | 1999-11-05 | 2004-01-20 | At&T Corporation | Method for generating morphemes |
US8482535B2 (en) | 1999-11-08 | 2013-07-09 | Apple Inc. | Programmable tactile touch screen displays and man-machine interfaces for improved vehicle instrumentation and telematics |
US7392185B2 (en) | 1999-11-12 | 2008-06-24 | Phoenix Solutions, Inc. | Speech based learning/training system using semantic decoding |
US9076448B2 (en) | 1999-11-12 | 2015-07-07 | Nuance Communications, Inc. | Distributed real time speech recognition system |
US6633846B1 (en) | 1999-11-12 | 2003-10-14 | Phoenix Solutions, Inc. | Distributed realtime speech recognition system |
US6615172B1 (en) | 1999-11-12 | 2003-09-02 | Phoenix Solutions, Inc. | Intelligent query engine for processing voice based queries |
US6751612B1 (en) | 1999-11-29 | 2004-06-15 | Xerox Corporation | User query generate search results that rank set of servers where ranking is based on comparing content on each server with user query, frequency at which content on each server is altered using web crawler in a search engine |
US6418210B1 (en) | 1999-11-29 | 2002-07-09 | At&T Corp | Method and apparatus for providing information between a calling network and a called network |
GB9928420D0 (en) | 1999-12-02 | 2000-01-26 | Ibm | Interactive voice response system |
US6288319B1 (en) | 1999-12-02 | 2001-09-11 | Gary Catona | Electronic greeting card with a custom audio mix |
US6591239B1 (en) | 1999-12-09 | 2003-07-08 | Steris Inc. | Voice controlled surgical suite |
US6598018B1 (en) | 1999-12-15 | 2003-07-22 | Matsushita Electric Industrial Co., Ltd. | Method for natural dialog interface to car devices |
US6976229B1 (en) | 1999-12-16 | 2005-12-13 | Ricoh Co., Ltd. | Method and apparatus for storytelling with digital photographs |
US6832230B1 (en) | 1999-12-22 | 2004-12-14 | Nokia Corporation | Apparatus and associated method for downloading an application with a variable lifetime to a mobile terminal |
US6920421B2 (en) | 1999-12-28 | 2005-07-19 | Sony Corporation | Model adaptive apparatus for performing adaptation of a model used in pattern recognition considering recentness of a received pattern data |
US6678680B1 (en) | 2000-01-06 | 2004-01-13 | Mark Woo | Music search engine |
US6701294B1 (en) | 2000-01-19 | 2004-03-02 | Lucent Technologies, Inc. | User interface for translating natural language inquiries into database queries and data presentations |
US6829603B1 (en) | 2000-02-02 | 2004-12-07 | International Business Machines Corp. | System, method and program product for interactive natural dialog |
US6560590B1 (en) | 2000-02-14 | 2003-05-06 | Kana Software, Inc. | Method and apparatus for multiple tiered matching of natural language queries to positions in a text corpus |
US6434529B1 (en) | 2000-02-16 | 2002-08-13 | Sun Microsystems, Inc. | System and method for referencing object instances and invoking methods on those object instances from within a speech recognition grammar |
JP2003524259A (en) | 2000-02-22 | 2003-08-12 | MetaCarta, Inc. | Spatial coding and display of information
US7110951B1 (en) | 2000-03-03 | 2006-09-19 | Dorothy Lemelson, legal representative | System and method for enhancing speech intelligibility for the hearing impaired |
US6466654B1 (en) | 2000-03-06 | 2002-10-15 | Avaya Technology Corp. | Personal virtual assistant with semantic tagging |
US6510417B1 (en) | 2000-03-21 | 2003-01-21 | America Online, Inc. | System and method for voice access to internet-based information |
US7974875B1 (en) | 2000-03-21 | 2011-07-05 | Aol Inc. | System and method for using voice over a telephone to access, process, and carry out transactions over the internet |
ATE494610T1 (en) | 2000-03-24 | 2011-01-15 | Eliza Corp | Voice recognition
US6868380B2 (en) | 2000-03-24 | 2005-03-15 | Eliza Corporation | Speech recognition system and method for generating phonetic estimates
AU2001249768A1 (en) | 2000-04-02 | 2001-10-15 | Tangis Corporation | Soliciting information based on a computer user's context |
EP1273004A1 (en) | 2000-04-06 | 2003-01-08 | One Voice Technologies Inc. | Natural language and dialogue generation processing |
US6980092B2 (en) | 2000-04-06 | 2005-12-27 | Gentex Corporation | Vehicle rearview mirror assembly incorporating a communication system |
US6578022B1 (en) | 2000-04-18 | 2003-06-10 | Icplanet Corporation | Interactive intelligent searching with executable suggestions |
US6556973B1 (en) | 2000-04-19 | 2003-04-29 | Voxi Ab | Conversion between data representation formats |
US6560576B1 (en) | 2000-04-25 | 2003-05-06 | Nuance Communications | Method and apparatus for providing active help to a user of a voice-enabled application |
WO2001082031A2 (en) | 2000-04-26 | 2001-11-01 | Portable Internet Inc. | Portable internet services |
JP3542026B2 (en) | 2000-05-02 | 2004-07-14 | International Business Machines Corporation | Speech recognition system, speech recognition method, and computer-readable recording medium
AU2001259446A1 (en) | 2000-05-02 | 2001-11-12 | Dragon Systems, Inc. | Error correction in speech recognition |
WO2001089183A1 (en) | 2000-05-16 | 2001-11-22 | John Taschereau | Method and system for providing geographically targeted information and advertising |
JP2003535510A (en) | 2000-05-26 | 2003-11-25 | Koninklijke Philips Electronics N.V. | Method and apparatus for voice echo cancellation combined with adaptive beamforming
WO2001097558A2 (en) | 2000-06-13 | 2001-12-20 | Gn Resound Corporation | Fixed polar-pattern-based adaptive directionality systems |
JP2004531780A (en) | 2000-06-22 | 2004-10-14 | Microsoft Corporation | Distributed computing service platform
US7143039B1 (en) | 2000-08-11 | 2006-11-28 | Tellme Networks, Inc. | Providing menu and other services for an information processing system using a telephone or other audio interface |
WO2002010900A2 (en) | 2000-07-28 | 2002-02-07 | Siemens Automotive Corporation | User interface for telematics systems |
US7092928B1 (en) | 2000-07-31 | 2006-08-15 | Quantum Leap Research, Inc. | Intelligent portal engine |
US7027975B1 (en) | 2000-08-08 | 2006-04-11 | Object Services And Consulting, Inc. | Guided natural language interface system and method |
US7653748B2 (en) | 2000-08-10 | 2010-01-26 | Simplexity, Llc | Systems, methods and computer program products for integrating advertising within web content |
US6574624B1 (en) | 2000-08-18 | 2003-06-03 | International Business Machines Corporation | Automatic topic identification and switch for natural language search of textual document collections |
WO2002017069A1 (en) | 2000-08-21 | 2002-02-28 | Yahoo! Inc. | Method and system of interpreting and presenting web content using a voice browser |
US8200485B1 (en) | 2000-08-29 | 2012-06-12 | A9.Com, Inc. | Voice interface and methods for improving recognition accuracy of voice search queries |
US7062488B1 (en) | 2000-08-30 | 2006-06-13 | Richard Reisman | Task/domain segmentation in applying feedback to command control |
CN1226717C (en) | 2000-08-30 | 2005-11-09 | International Business Machines Corporation | Automatic new term fetch method and system
EP1184841A1 (en) | 2000-08-31 | 2002-03-06 | Siemens Aktiengesellschaft | Speech controlled apparatus and method for speech input and speech recognition |
JP2004508636A (en) * | 2000-09-07 | 2004-03-18 | Telefonaktiebolaget L M Ericsson (publ) | Information providing system and control method thereof
US20040205671A1 (en) | 2000-09-13 | 2004-10-14 | Tatsuya Sukehiro | Natural-language processing system |
JP2004509018A (en) | 2000-09-21 | 2004-03-25 | American Calcar Inc. | Operation support method, user condition determination method, tire condition determination method, visibility measurement method, road determination method, monitor device, and operation device
US6362748B1 (en) | 2000-09-27 | 2002-03-26 | Lite Vision Corporation | System for communicating among vehicles and a communication system control center |
US6922670B2 (en) | 2000-10-24 | 2005-07-26 | Sanyo Electric Co., Ltd. | User support apparatus and system using agents |
US6721706B1 (en) | 2000-10-30 | 2004-04-13 | Koninklijke Philips Electronics N.V. | Environment-responsive user interface/entertainment device that simulates personal interaction |
US6795808B1 (en) | 2000-10-30 | 2004-09-21 | Koninklijke Philips Electronics N.V. | User interface/entertainment device that simulates personal interaction and charges external database with relevant data |
US6934756B2 (en) | 2000-11-01 | 2005-08-23 | International Business Machines Corporation | Conversational networking via transport, coding and control conversational protocols |
GB0027178D0 (en) | 2000-11-07 | 2000-12-27 | Canon Kk | Speech processing system |
US7158935B1 (en) | 2000-11-15 | 2007-01-02 | At&T Corp. | Method and system for predicting problematic situations in an automated dialog
US6735592B1 (en) | 2000-11-16 | 2004-05-11 | Discern Communications | System, method, and computer program product for a network-based content exchange system |
US7013308B1 (en) | 2000-11-28 | 2006-03-14 | Semscript Ltd. | Knowledge storage and retrieval system and method |
US20020065568A1 (en) | 2000-11-30 | 2002-05-30 | Silfvast Robert Denton | Plug-in modules for digital signal processor functionalities |
US6973429B2 (en) | 2000-12-04 | 2005-12-06 | A9.Com, Inc. | Grammar generation for voice-based searches |
US7016847B1 (en) * | 2000-12-08 | 2006-03-21 | Ben Franklin Patent Holdings L.L.C. | Open architecture for a voice user interface |
US6456711B1 (en) | 2000-12-12 | 2002-09-24 | At&T Corp. | Method for placing a call intended for an enhanced network user on hold while the enhanced network user is unavailable to take the call using a distributed feature architecture |
US20020082911A1 (en) | 2000-12-22 | 2002-06-27 | Dunn Charles L. | Online revenue sharing |
US6973427B2 (en) | 2000-12-26 | 2005-12-06 | Microsoft Corporation | Method for adding phonetic descriptions to a speech recognition lexicon |
US20020087326A1 (en) | 2000-12-29 | 2002-07-04 | Lee Victor Wai Leung | Computer-implemented web page summarization method and system |
US6751591B1 (en) | 2001-01-22 | 2004-06-15 | At&T Corp. | Method and system for predicting understanding errors in a task classification system |
US7069207B2 (en) | 2001-01-26 | 2006-06-27 | Microsoft Corporation | Linguistically intelligent text compression |
US7206418B2 (en) | 2001-02-12 | 2007-04-17 | Fortemedia, Inc. | Noise suppression for a wireless communication device |
EP1231788A1 (en) | 2001-02-12 | 2002-08-14 | Koninklijke Philips Electronics N.V. | Arrangement for distributing content, profiling center, receiving device and method |
US6549629B2 (en) | 2001-02-21 | 2003-04-15 | Digisonix Llc | DVE system with normalized selection |
US6754627B2 (en) * | 2001-03-01 | 2004-06-22 | International Business Machines Corporation | Detecting speech recognition errors in an embedded speech recognition system |
US7024364B2 (en) | 2001-03-09 | 2006-04-04 | Bevocal, Inc. | System, method and computer program product for looking up business addresses and directions based on a voice dial-up session |
US20020173961A1 (en) | 2001-03-09 | 2002-11-21 | Guerra Lisa M. | System, method and computer program product for dynamic, robust and fault tolerant audio output in a speech recognition framework |
US20020133402A1 (en) | 2001-03-13 | 2002-09-19 | Scott Faber | Apparatus and method for recruiting, communicating with, and paying participants of interactive advertising |
US7574362B2 (en) | 2001-03-14 | 2009-08-11 | At&T Intellectual Property Ii, L.P. | Method for automated sentence planning in a task classification system |
WO2002073452A1 (en) | 2001-03-14 | 2002-09-19 | At & T Corp. | Method for automated sentence planning |
US7729918B2 (en) | 2001-03-14 | 2010-06-01 | At&T Intellectual Property Ii, Lp | Trainable sentence planning system |
US6801897B2 (en) | 2001-03-28 | 2004-10-05 | International Business Machines Corporation | Method of providing concise forms of natural commands |
US8175886B2 (en) | 2001-03-29 | 2012-05-08 | Intellisist, Inc. | Determination of signal-processing approach based on signal destination characteristics |
US7406421B2 (en) | 2001-10-26 | 2008-07-29 | Intellisist Inc. | Systems and methods for reviewing informational content in a vehicle |
US6996531B2 (en) | 2001-03-30 | 2006-02-07 | Comverse Ltd. | Automated database assistance using a telephone for a speech based or text based multimedia communication mode |
WO2002079896A2 (en) | 2001-03-30 | 2002-10-10 | British Telecommunications Public Limited Company | Multi-modal interface |
JP2002358095A (en) | 2001-03-30 | 2002-12-13 | Sony Corp | Method and device for speech processing, program, recording medium |
FR2822994B1 (en) | 2001-03-30 | 2004-05-21 | Bouygues Telecom Sa | Assistance to the driver of a motor vehicle
US6885989B2 (en) * | 2001-04-02 | 2005-04-26 | International Business Machines Corporation | Method and system for collaborative speech recognition for small-area network |
US6856990B2 (en) | 2001-04-09 | 2005-02-15 | Intel Corporation | Network dedication system |
US7437295B2 (en) | 2001-04-27 | 2008-10-14 | Accenture Llp | Natural language processing for a location-based services system |
US7970648B2 (en) | 2001-04-27 | 2011-06-28 | Accenture Global Services Limited | Advertising campaign and business listing management for a location-based services system |
US6950821B2 (en) | 2001-05-04 | 2005-09-27 | Sun Microsystems, Inc. | System and method for resolving distributed network search queries to information providers |
US6804684B2 (en) | 2001-05-07 | 2004-10-12 | Eastman Kodak Company | Method for associating semantic information with multiple images in an image database environment |
US6944594B2 (en) | 2001-05-30 | 2005-09-13 | Bellsouth Intellectual Property Corporation | Multi-context conversational environment system and method |
JP2003005897A (en) | 2001-06-20 | 2003-01-08 | Alpine Electronics Inc | Method and device for inputting information |
US6801604B2 (en) | 2001-06-25 | 2004-10-05 | International Business Machines Corporation | Universal IP-based and scalable architectures across conversational applications using web services for speech and audio processing resources |
US20020198714A1 (en) | 2001-06-26 | 2002-12-26 | Guojun Zhou | Statistical spoken dialog system |
US20100029261A1 (en) | 2001-06-27 | 2010-02-04 | John Mikkelsen | Virtual wireless data cable method, apparatus and system |
US20050234727A1 (en) | 2001-07-03 | 2005-10-20 | Leo Chiu | Method and apparatus for adapting a voice extensible markup language-enabled voice system for natural speech recognition and system response |
US6983307B2 (en) | 2001-07-11 | 2006-01-03 | Kirusa, Inc. | Synchronization among plural browsers |
US7123727B2 (en) | 2001-07-18 | 2006-10-17 | Agere Systems Inc. | Adaptive close-talking differential microphone array |
US7283951B2 (en) | 2001-08-14 | 2007-10-16 | Insightful Corporation | Method and system for enhanced data searching |
US6757544B2 (en) | 2001-08-15 | 2004-06-29 | Motorola, Inc. | System and method for determining a location relevant to a communication device and/or its associated user |
US7920682B2 (en) | 2001-08-21 | 2011-04-05 | Byrne William J | Dynamic interactive voice interface |
US7305381B1 (en) | 2001-09-14 | 2007-12-04 | Ricoh Co., Ltd | Asynchronous unconscious retrieval in a network of information appliances |
US6959276B2 (en) | 2001-09-27 | 2005-10-25 | Microsoft Corporation | Including the category of environmental noise when processing speech signals |
US6721633B2 (en) | 2001-09-28 | 2004-04-13 | Robert Bosch Gmbh | Method and device for interfacing a driver information system using a voice portal server |
US7289606B2 (en) | 2001-10-01 | 2007-10-30 | Sandeep Sibal | Mode-swapping in multi-modal telephonic applications |
JP3997459B2 (en) | 2001-10-02 | 2007-10-24 | Hitachi, Ltd. | Voice input system, voice portal server, and voice input terminal
US7254384B2 (en) | 2001-10-03 | 2007-08-07 | Accenture Global Services Gmbh | Multi-modal messaging |
US7640006B2 (en) | 2001-10-03 | 2009-12-29 | Accenture Global Services Gmbh | Directory assistance with multi-modal messaging |
JP4065936B2 (en) | 2001-10-09 | 2008-03-26 | National Institute of Information and Communications Technology | Language analysis processing system using machine learning method and language omission analysis processing system using machine learning method
US6501834B1 (en) | 2001-11-21 | 2002-12-31 | At&T Corp. | Message sender status monitor |
US20030101054A1 (en) | 2001-11-27 | 2003-05-29 | Ncc, Llc | Integrated system and method for electronic speech recognition and transcription |
US7165028B2 (en) | 2001-12-12 | 2007-01-16 | Texas Instruments Incorporated | Method of speech recognition resistant to convolutive distortion and additive distortion |
GB2383247A (en) | 2001-12-13 | 2003-06-18 | Hewlett Packard Co | Multi-modal picture allowing verbal interaction between a user and the picture |
US7231343B1 (en) | 2001-12-20 | 2007-06-12 | Ianywhere Solutions, Inc. | Synonyms mechanism for natural language systems |
US20030120493A1 (en) | 2001-12-21 | 2003-06-26 | Gupta Sunil K. | Method and system for updating and customizing recognition vocabulary |
US7203644B2 (en) | 2001-12-31 | 2007-04-10 | Intel Corporation | Automating tuning of speech recognition systems |
US7493259B2 (en) | 2002-01-04 | 2009-02-17 | Siebel Systems, Inc. | Method for accessing data via voice |
US7493559B1 (en) | 2002-01-09 | 2009-02-17 | Ricoh Co., Ltd. | System and method for direct multi-modal annotation of objects |
US7117200B2 (en) | 2002-01-11 | 2006-10-03 | International Business Machines Corporation | Synthesizing information-bearing content from multiple channels |
US7111248B2 (en) | 2002-01-15 | 2006-09-19 | Openwave Systems Inc. | Alphanumeric information input method |
US7536297B2 (en) | 2002-01-22 | 2009-05-19 | International Business Machines Corporation | System and method for hybrid text mining for finding abbreviations and their definitions |
US7054817B2 (en) | 2002-01-25 | 2006-05-30 | Canon Europa N.V. | User interface for speech model generation and testing |
US20030144846A1 (en) | 2002-01-31 | 2003-07-31 | Denenberg Lawrence A. | Method and system for modifying the behavior of an application based upon the application's grammar |
US7130390B2 (en) | 2002-02-01 | 2006-10-31 | Microsoft Corporation | Audio messaging system and method |
US7177814B2 (en) | 2002-02-07 | 2007-02-13 | Sap Aktiengesellschaft | Dynamic grammar for voice-enabled applications |
US7058890B2 (en) | 2002-02-13 | 2006-06-06 | Siebel Systems, Inc. | Method and system for enabling connectivity to a data system |
US8249880B2 (en) | 2002-02-14 | 2012-08-21 | Intellisist, Inc. | Real-time display of system instructions |
US7587317B2 (en) | 2002-02-15 | 2009-09-08 | Microsoft Corporation | Word training interface |
JP3974419B2 (en) | 2002-02-18 | 2007-09-12 | 株式会社日立製作所 | Information acquisition method and information acquisition system using voice input |
JP2006505833A (en) | 2002-02-27 | 2006-02-16 | ディー. セーター ニール | System and method for facilitating media customization |
US6704396B2 (en) | 2002-02-27 | 2004-03-09 | Sbc Technology Resources, Inc. | Multi-modal communications method |
US7016849B2 (en) | 2002-03-25 | 2006-03-21 | Sri International | Method and apparatus for providing speech-driven routing between spoken language applications |
US7136875B2 (en) | 2002-09-24 | 2006-11-14 | Google, Inc. | Serving advertisements based on content |
US7072834B2 (en) | 2002-04-05 | 2006-07-04 | Intel Corporation | Adapting to adverse acoustic environment in speech processing using playback training data |
US7197460B1 (en) * | 2002-04-23 | 2007-03-27 | At&T Corp. | System for handling frequently asked questions in a natural language dialog service |
US6877001B2 (en) | 2002-04-25 | 2005-04-05 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for retrieving documents with spoken queries |
US7167568B2 (en) | 2002-05-02 | 2007-01-23 | Microsoft Corporation | Microphone array signal enhancement |
US20030212558A1 (en) | 2002-05-07 | 2003-11-13 | Matula Valentine C. | Method and apparatus for distributed interactive voice processing |
US20030212550A1 (en) | 2002-05-10 | 2003-11-13 | Ubale Anil W. | Method, apparatus, and system for improving speech quality of voice-over-packets (VOP) systems |
US20030212562A1 (en) | 2002-05-13 | 2003-11-13 | General Motors Corporation | Manual barge-in for server-based in-vehicle voice recognition systems |
JP2003329477A (en) | 2002-05-15 | 2003-11-19 | Pioneer Electronic Corp | Navigation device and interactive information providing program |
US7107210B2 (en) | 2002-05-20 | 2006-09-12 | Microsoft Corporation | Method of noise reduction based on dynamic aspects of speech |
US7127400B2 (en) | 2002-05-22 | 2006-10-24 | Bellsouth Intellectual Property Corporation | Methods and systems for personal interactive voice response |
US20040140989A1 (en) | 2002-05-28 | 2004-07-22 | John Papageorge | Content subscription and delivery service |
US7546382B2 (en) | 2002-05-28 | 2009-06-09 | International Business Machines Corporation | Methods and systems for authoring of mixed-initiative multi-modal interactions and related browsing mechanisms |
US7398209B2 (en) | 2002-06-03 | 2008-07-08 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7143037B1 (en) | 2002-06-12 | 2006-11-28 | Cisco Technology, Inc. | Spelling words using an arbitrary phonetic alphabet |
US7502737B2 (en) | 2002-06-24 | 2009-03-10 | Intel Corporation | Multi-pass recognition of spoken dialogue |
US20050021470A1 (en) | 2002-06-25 | 2005-01-27 | Bose Corporation | Intelligent music track selection |
US7177815B2 (en) | 2002-07-05 | 2007-02-13 | At&T Corp. | System and method of context-sensitive help for multi-modal dialog systems |
EP1391830A1 (en) | 2002-07-19 | 2004-02-25 | Albert Inc. S.A. | System for extracting informations from a natural language text |
EP1394692A1 (en) | 2002-08-05 | 2004-03-03 | Alcatel | Method, terminal, browser application, and mark-up language for multimodal interaction between a user and a terminal |
US7236923B1 (en) | 2002-08-07 | 2007-06-26 | Itt Manufacturing Enterprises, Inc. | Acronym extraction system and method of identifying acronyms and extracting corresponding expansions from text |
US6741931B1 (en) | 2002-09-05 | 2004-05-25 | Daimlerchrysler Corporation | Vehicle navigation system with off-board server |
US7184957B2 (en) | 2002-09-25 | 2007-02-27 | Toyota Infotechnology Center Co., Ltd. | Multiple pass speech recognition method and system |
US7328155B2 (en) | 2002-09-25 | 2008-02-05 | Toyota Infotechnology Center Co., Ltd. | Method and system for speech recognition using grammar weighted based upon location information |
US20030115062A1 (en) | 2002-10-29 | 2003-06-19 | Walker Marilyn A. | Method for automated sentence planning |
US8321427B2 (en) | 2002-10-31 | 2012-11-27 | Promptu Systems Corporation | Method and apparatus for generation and augmentation of search terms from external and internal sources |
US7890324B2 (en) * | 2002-12-19 | 2011-02-15 | At&T Intellectual Property Ii, L.P. | Context-sensitive interface widgets for multi-modal dialog systems |
US20040158555A1 (en) * | 2003-02-11 | 2004-08-12 | Terradigtal Systems Llc. | Method for managing a collection of media objects |
DE10306022B3 (en) | 2003-02-13 | 2004-02-19 | Siemens Ag | Speech recognition method for telephone, personal digital assistant, notepad computer or automobile navigation system uses 3-stage individual word identification |
GB2398913B (en) | 2003-02-27 | 2005-08-17 | Motorola Inc | Noise estimation in speech recognition |
JP4103639B2 (en) | 2003-03-14 | 2008-06-18 | セイコーエプソン株式会社 | Acoustic model creation method, acoustic model creation device, and speech recognition device |
US7146319B2 (en) | 2003-03-31 | 2006-12-05 | Novauris Technologies Ltd. | Phonetically based speech recognition system and method |
US20050021826A1 (en) | 2003-04-21 | 2005-01-27 | Sunil Kumar | Gateway controller for a multimodal system that provides inter-communication among different data and voice servers through various mobile devices, and interface for that controller |
US7421393B1 (en) * | 2004-03-01 | 2008-09-02 | At&T Corp. | System for developing a dialog manager using modular spoken-dialog components |
US20050015256A1 (en) | 2003-05-29 | 2005-01-20 | Kargman James B. | Method and apparatus for ordering food items, and in particular, pizza |
JP2005003926A (en) | 2003-06-11 | 2005-01-06 | Sony Corp | Information processor, method, and program |
KR100577387B1 (en) | 2003-08-06 | 2006-05-10 | 삼성전자주식회사 | Method and apparatus for handling speech recognition errors in spoken dialogue systems |
US20050043940A1 (en) | 2003-08-20 | 2005-02-24 | Marvin Elder | Preparing a data source for a natural language query |
US7428497B2 (en) | 2003-10-06 | 2008-09-23 | Utbk, Inc. | Methods and apparatuses for pay-per-call advertising in mobile/wireless applications |
US20070162296A1 (en) | 2003-10-06 | 2007-07-12 | Utbk, Inc. | Methods and apparatuses for audio advertisements |
US7454608B2 (en) | 2003-10-31 | 2008-11-18 | International Business Machines Corporation | Resource configuration in multi-modal distributed computing systems |
GB0325497D0 (en) | 2003-10-31 | 2003-12-03 | Vox Generation Ltd | Automated speech application creation deployment and management |
JP2005157494A (en) | 2003-11-20 | 2005-06-16 | Aruze Corp | Conversation control apparatus and conversation control method |
JP4558308B2 (en) | 2003-12-03 | 2010-10-06 | ニュアンス コミュニケーションズ,インコーポレイテッド | Voice recognition system, data processing apparatus, data processing method thereof, and program |
US20050137877A1 (en) | 2003-12-17 | 2005-06-23 | General Motors Corporation | Method and system for enabling a device function of a vehicle |
US7027586B2 (en) | 2003-12-18 | 2006-04-11 | Sbc Knowledge Ventures, L.P. | Intelligently routing customer communications |
US20050137850A1 (en) | 2003-12-23 | 2005-06-23 | Intel Corporation | Method for automation of programmable interfaces |
US7386443B1 (en) | 2004-01-09 | 2008-06-10 | At&T Corp. | System and method for mobile automatic speech recognition |
WO2005076258A1 (en) | 2004-02-03 | 2005-08-18 | Matsushita Electric Industrial Co., Ltd. | User adaptive type device and control method thereof |
US7542903B2 (en) | 2004-02-18 | 2009-06-02 | Fuji Xerox Co., Ltd. | Systems and methods for determining predictive models of discourse functions |
US20050216254A1 (en) | 2004-03-24 | 2005-09-29 | Gupta Anurag K | System-resource-based multi-modal input fusion |
US20050246174A1 (en) | 2004-04-28 | 2005-11-03 | Degolia Richard C | Method and system for presenting dynamic commercial content to clients interacting with a voice extensible markup language system |
US20050283752A1 (en) | 2004-05-17 | 2005-12-22 | Renate Fruchter | DiVAS-a cross-media system for ubiquitous gesture-discourse-sketch knowledge capture and reuse |
US20060206310A1 (en) | 2004-06-29 | 2006-09-14 | Damaka, Inc. | System and method for natural language processing in a peer-to-peer hybrid communications network |
DE102004037858A1 (en) | 2004-08-04 | 2006-03-16 | Harman Becker Automotive Systems Gmbh | Navigation system with voice-controlled indication of points of interest |
US7480618B2 (en) | 2004-09-02 | 2009-01-20 | Microsoft Corporation | Eliminating interference of noisy modality in a multimodal application |
US20060074660A1 (en) | 2004-09-29 | 2006-04-06 | France Telecom | Method and apparatus for enhancing speech recognition accuracy by using geographic data to filter a set of words |
US7376645B2 (en) | 2004-11-29 | 2008-05-20 | The Intellection Group, Inc. | Multimodal natural language query system and architecture for processing voice and proximity-based queries |
US20070214182A1 (en) | 2005-01-15 | 2007-09-13 | Outland Research, Llc | Establishment-based media and messaging service |
US7873654B2 (en) | 2005-01-24 | 2011-01-18 | The Intellection Group, Inc. | Multimodal natural language query system for processing and analyzing voice and proximity-based queries |
US7437297B2 (en) | 2005-01-27 | 2008-10-14 | International Business Machines Corporation | Systems and methods for predicting consequences of misinterpretation of user commands in automated systems |
KR100718147B1 (en) | 2005-02-01 | 2007-05-14 | 삼성전자주식회사 | Apparatus and method of generating grammar network for speech recognition and dialogue speech recognition apparatus and method employing the same |
US7831433B1 (en) | 2005-02-03 | 2010-11-09 | Hrl Laboratories, Llc | System and method for using context in navigation dialog |
US7461059B2 (en) | 2005-02-23 | 2008-12-02 | Microsoft Corporation | Dynamically updated search results based upon continuously-evolving search query that is based at least in part upon phrase suggestion, search engine uses previous result sets performing additional search tasks |
US7283829B2 (en) | 2005-03-25 | 2007-10-16 | Cisco Technology, Inc. | Management of call requests in multi-modal communication environments |
US7813485B2 (en) | 2005-05-26 | 2010-10-12 | International Business Machines Corporation | System and method for seamlessly integrating an interactive visual menu with an voice menu provided in an interactive voice response system |
US7917365B2 (en) | 2005-06-16 | 2011-03-29 | Nuance Communications, Inc. | Synchronizing visual and speech events in a multimodal application |
US7873523B2 (en) | 2005-06-30 | 2011-01-18 | Microsoft Corporation | Computer implemented method of analyzing recognition results between a user and an interactive application utilizing inferred values instead of transcribed speech |
WO2007008798A2 (en) | 2005-07-07 | 2007-01-18 | V-Enable, Inc. | System and method for searching for network-based content in a multi-modal system using spoken keywords |
US7424431B2 (en) | 2005-07-11 | 2008-09-09 | Stragent, Llc | System, method and computer program product for adding voice activation and voice control to a media player |
US7640160B2 (en) | 2005-08-05 | 2009-12-29 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7620549B2 (en) | 2005-08-10 | 2009-11-17 | Voicebox Technologies, Inc. | System and method of supporting adaptive misrecognition in conversational speech |
US7949529B2 (en) * | 2005-08-29 | 2011-05-24 | Voicebox Technologies, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
US7634409B2 (en) | 2005-08-31 | 2009-12-15 | Voicebox Technologies, Inc. | Dynamic speech sharpening |
US7672852B2 (en) | 2005-09-29 | 2010-03-02 | Microsoft Corporation | Localization of prompts |
US8626588B2 (en) | 2005-09-30 | 2014-01-07 | Google Inc. | Advertising with audio content |
US20070078708A1 (en) | 2005-09-30 | 2007-04-05 | Hua Yu | Using speech recognition to determine advertisements relevant to audio content and/or audio content relevant to advertisements |
US7477909B2 (en) | 2005-10-31 | 2009-01-13 | Nuance Communications, Inc. | System and method for conducting a search using a wireless mobile device |
US7587308B2 (en) | 2005-11-21 | 2009-09-08 | Hewlett-Packard Development Company, L.P. | Word recognition using ontologies |
US20070135101A1 (en) | 2005-12-08 | 2007-06-14 | Comverse, Ltd. | Enhanced visual IVR capabilities |
US8325398B2 (en) * | 2005-12-22 | 2012-12-04 | Canon Kabushiki Kaisha | Image editing system, image management apparatus, and image editing program |
US20070186165A1 (en) | 2006-02-07 | 2007-08-09 | Pudding Ltd. | Method And Apparatus For Electronically Providing Advertisements |
US8645991B2 (en) | 2006-03-30 | 2014-02-04 | Tout Industries, Inc. | Method and apparatus for annotating media streams |
US7533089B2 (en) | 2006-06-27 | 2009-05-12 | International Business Machines Corporation | Hybrid approach for query recommendation in conversation systems |
WO2008008729A2 (en) | 2006-07-10 | 2008-01-17 | Accenture Global Services Gmbh | Mobile personal services platform for providing feedback |
US8145493B2 (en) | 2006-09-11 | 2012-03-27 | Nuance Communications, Inc. | Establishing a preferred mode of interaction between a user and a multimodal application |
US8086463B2 (en) | 2006-09-12 | 2011-12-27 | Nuance Communications, Inc. | Dynamically generating a vocal help prompt in a multimodal application |
WO2008032329A2 (en) | 2006-09-13 | 2008-03-20 | Alon Atsmon | Providing content responsive to multimedia signals |
US7788084B2 (en) | 2006-09-19 | 2010-08-31 | Xerox Corporation | Labeling of work of art titles in text for natural language processing |
US8073681B2 (en) | 2006-10-16 | 2011-12-06 | Voicebox Technologies, Inc. | System and method for a cooperative conversational voice user interface |
JP5312771B2 (en) | 2006-10-26 | 2013-10-09 | 株式会社エム・シー・エヌ | Technology that determines relevant ads in response to queries |
WO2008056251A2 (en) | 2006-11-10 | 2008-05-15 | Audiogate Technologies Ltd. | System and method for providing advertisement based on speech recognition |
TWI342010B (en) | 2006-12-13 | 2011-05-11 | Delta Electronics Inc | Speech recognition method and system with intelligent classification and adjustment |
US8909532B2 (en) | 2007-03-23 | 2014-12-09 | Nuance Communications, Inc. | Supporting multi-lingual user interaction with a multimodal application |
US8060367B2 (en) | 2007-06-26 | 2011-11-15 | Targus Information Corporation | Spatially indexed grammar and methods of use |
US8219399B2 (en) * | 2007-07-11 | 2012-07-10 | Garmin Switzerland Gmbh | Automated speech recognition (ASR) tiling |
DE102007044792B4 (en) * | 2007-09-19 | 2012-12-13 | Siemens Ag | Method, control unit and system for control or operation |
DE102008051757A1 (en) | 2007-11-12 | 2009-05-14 | Volkswagen Ag | Multimodal user interface of a driver assistance system for entering and presenting information |
US8140335B2 (en) | 2007-12-11 | 2012-03-20 | Voicebox Technologies, Inc. | System and method for providing a natural language voice user interface in an integrated voice navigation services environment |
US8077975B2 (en) | 2008-02-26 | 2011-12-13 | Microsoft Corporation | Handwriting symbol recognition accuracy using speech input |
US8255224B2 (en) * | 2008-03-07 | 2012-08-28 | Google Inc. | Voice recognition grammar selection based on context |
US20090276700A1 (en) * | 2008-04-30 | 2009-11-05 | Nokia Corporation | Method, apparatus, and computer program product for determining user status indicators |
US8589161B2 (en) | 2008-05-27 | 2013-11-19 | Voicebox Technologies, Inc. | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US8909810B2 (en) * | 2008-08-05 | 2014-12-09 | Isabella Products, Inc. | Systems and methods for multimedia content sharing |
US8224652B2 (en) | 2008-09-26 | 2012-07-17 | Microsoft Corporation | Speech and text driven HMM-based body animation synthesis |
US20100094707A1 (en) | 2008-10-10 | 2010-04-15 | Carl Johan Freer | Method and platform for voice and location-based services for mobile advertising |
US8326637B2 (en) * | 2009-02-20 | 2012-12-04 | Voicebox Technologies, Inc. | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9171541B2 (en) | 2009-11-10 | 2015-10-27 | Voicebox Technologies Corporation | System and method for hybrid processing in a natural language voice services environment |
2010
- 2010-11-10 WO PCT/US2010/056109 patent/WO2011059997A1/en active Application Filing
- 2010-11-10 US US12/943,699 patent/US9502025B2/en active Active
2015
- 2015-02-25 US US14/631,772 patent/US20150170641A1/en not_active Abandoned
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040133793A1 (en) * | 1995-02-13 | 2004-07-08 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US5892900A (en) * | 1996-08-30 | 1999-04-06 | Intertrust Technologies Corp. | Systems and methods for secure transaction management and electronic rights protection |
US20020032752A1 (en) * | 2000-06-09 | 2002-03-14 | Gold Elliot M. | Method and system for electronic song dedication |
US20040199387A1 (en) * | 2000-07-31 | 2004-10-07 | Wang Avery Li-Chun | Method and system for purchasing pre-recorded music |
US6704576B1 (en) * | 2000-09-27 | 2004-03-09 | At&T Corp. | Method and system for communicating multimedia content in a unicast, multicast, simulcast or broadcast environment |
US20040193420A1 (en) * | 2002-07-15 | 2004-09-30 | Kennewick Robert A. | Mobile systems and methods for responding to natural language speech utterance |
US7706616B2 (en) * | 2004-02-27 | 2010-04-27 | International Business Machines Corporation | System and method for recognizing word patterns in a very large vocabulary based on a virtual keyboard layout |
US20080032622A1 (en) * | 2004-04-07 | 2008-02-07 | Nokia Corporation | Mobile station and interface adapted for feature extraction from an input media sample |
US20070266257A1 (en) * | 2004-07-15 | 2007-11-15 | Allan Camaisa | System and method for blocking unauthorized network log in using stolen password |
US20070174258A1 (en) * | 2006-01-23 | 2007-07-26 | Jones Scott A | Targeted mobile device advertisements |
US20070276651A1 (en) * | 2006-05-23 | 2007-11-29 | Motorola, Inc. | Grammar adaptation through cooperative client and server based speech recognition |
US20080010135A1 (en) * | 2006-07-10 | 2008-01-10 | Realnetworks, Inc. | Digital media content device incentive and provisioning method |
US20080140385A1 (en) * | 2006-12-07 | 2008-06-12 | Microsoft Corporation | Using automated content analysis for audio/video content consumption |
US20080189110A1 (en) * | 2007-02-06 | 2008-08-07 | Tom Freeman | System and method for selecting and presenting advertisements based on natural language processing of voice-based input |
US20100064025A1 (en) * | 2008-09-10 | 2010-03-11 | Nokia Corporation | Method and Apparatus for Providing Media Service |
US20100076778A1 (en) * | 2008-09-25 | 2010-03-25 | Kondrk Robert H | Method and System for Providing and Maintaining Limited-Subscriptions to Digital Media Assets |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10755699B2 (en) | 2006-10-16 | 2020-08-25 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10510341B1 (en) | 2006-10-16 | 2019-12-17 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10297249B2 (en) | 2006-10-16 | 2019-05-21 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10515628B2 (en) | 2006-10-16 | 2019-12-24 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US11222626B2 (en) | 2006-10-16 | 2022-01-11 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US11080758B2 (en) | 2007-02-06 | 2021-08-03 | Vb Assets, Llc | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US9406078B2 (en) | 2007-02-06 | 2016-08-02 | Voicebox Technologies Corporation | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US10134060B2 (en) | 2007-02-06 | 2018-11-20 | Vb Assets, Llc | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US9620113B2 (en) | 2007-12-11 | 2017-04-11 | Voicebox Technologies Corporation | System and method for providing a natural language voice user interface |
US10347248B2 (en) | 2007-12-11 | 2019-07-09 | Voicebox Technologies Corporation | System and method for providing in-vehicle services via a natural language voice user interface |
US10553216B2 (en) | 2008-05-27 | 2020-02-04 | Oracle International Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US9711143B2 (en) | 2008-05-27 | 2017-07-18 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US9305548B2 (en) | 2008-05-27 | 2016-04-05 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US10089984B2 (en) | 2008-05-27 | 2018-10-02 | Vb Assets, Llc | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US10553213B2 (en) | 2009-02-20 | 2020-02-04 | Oracle International Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9570070B2 (en) | 2009-02-20 | 2017-02-14 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9953649B2 (en) | 2009-02-20 | 2018-04-24 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US20170228367A1 (en) * | 2012-04-20 | 2017-08-10 | Maluuba Inc. | Conversational agent |
US9971766B2 (en) * | 2012-04-20 | 2018-05-15 | Maluuba Inc. | Conversational agent |
US9575963B2 (en) * | 2012-04-20 | 2017-02-21 | Maluuba Inc. | Conversational agent |
US20150066479A1 (en) * | 2012-04-20 | 2015-03-05 | Maluuba Inc. | Conversational agent |
US11321756B1 (en) | 2013-11-07 | 2022-05-03 | Amazon Technologies, Inc. | Voice-assisted scanning |
US9767501B1 (en) * | 2013-11-07 | 2017-09-19 | Amazon Technologies, Inc. | Voice-assisted scanning |
US9898459B2 (en) | 2014-09-16 | 2018-02-20 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US10430863B2 (en) | 2014-09-16 | 2019-10-01 | Vb Assets, Llc | Voice commerce |
US9626703B2 (en) | 2014-09-16 | 2017-04-18 | Voicebox Technologies Corporation | Voice commerce |
US11087385B2 (en) | 2014-09-16 | 2021-08-10 | Vb Assets, Llc | Voice commerce |
US10216725B2 (en) | 2014-09-16 | 2019-02-26 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US10229673B2 (en) | 2014-10-15 | 2019-03-12 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US9747896B2 (en) | 2014-10-15 | 2017-08-29 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US10431214B2 (en) | 2014-11-26 | 2019-10-01 | Voicebox Technologies Corporation | System and method of determining a domain and/or an action related to a natural language input |
US10614799B2 (en) | 2014-11-26 | 2020-04-07 | Voicebox Technologies Corporation | System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance |
US10331784B2 (en) | 2016-07-29 | 2019-06-25 | Voicebox Technologies Corporation | System and method of disambiguating natural language processing requests |
Also Published As
Publication number | Publication date |
---|---|
WO2011059997A1 (en) | 2011-05-19 |
US20110112921A1 (en) | 2011-05-12 |
US9502025B2 (en) | 2016-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9502025B2 (en) | 2016-11-22 | System and method for providing a natural language content dedication service |
US9171541B2 (en) | 2015-10-27 | System and method for hybrid processing in a natural language voice services environment |
US10553213B2 (en) | 2020-02-04 | System and method for processing multi-modal device interactions in a natural language voice services environment |
US10553216B2 (en) | 2020-02-04 | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US8589161B2 (en) | 2013-11-19 | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: VOICEBOX TECHNOLOGIES CORPORATION, WASHINGTON Free format text: MERGER;ASSIGNOR:VOICEBOX TECHNOLOGIES, INC.;REEL/FRAME:035037/0093 Effective date: 20080915 Owner name: VOICEBOX TECHNOLOGIES, INC., WASHINGTON Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNORS:KENNEWICK, MIKE;ARMSTRONG, LYNN ELISE;REEL/FRAME:035036/0922 Effective date: 20110125 |
AS | Assignment | Owner name: ORIX GROWTH CAPITAL, LLC, TEXAS Free format text: SECURITY INTEREST;ASSIGNOR:VOICEBOX TECHNOLOGIES CORPORATION;REEL/FRAME:044949/0948 Effective date: 20171218 |
AS | Assignment | Owner name: VOICEBOX TECHNOLOGIES CORPORATION, WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX GROWTH CAPITAL, LLC;REEL/FRAME:045581/0630 Effective date: 20180402 |
AS | Assignment | Owner name: KENNEWICK, MICHAEL RYE, WASHINGTON Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:VOICEBOX TECHNOLOGIES CORPORATION;REEL/FRAME:046456/0655 Effective date: 20180312 |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
AS | Assignment | Owner name: AI THINKTANK LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KENNEWICK, MICHAEL RYE;REEL/FRAME:052483/0971 Effective date: 20200415 |
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
STCV | Information on status: appeal procedure | Free format text: NOTICE OF APPEAL FILED |
STCV | Information on status: appeal procedure | Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: TC RETURN OF APPEAL |
STCV | Information on status: appeal procedure | Free format text: EXAMINER'S ANSWER TO APPEAL BRIEF MAILED |
STCV | Information on status: appeal procedure | Free format text: APPEAL READY FOR REVIEW |
STCV | Information on status: appeal procedure | Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS |
STCV | Information on status: appeal procedure | Free format text: BOARD OF APPEALS DECISION RENDERED |
AS | Assignment | Owner name: VB ASSETS, LLC, WASHINGTON Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:AI THINKTANK LLC;REEL/FRAME:058659/0951 Effective date: 20211222 |
STCC | Information on status: application revival | Free format text: WITHDRAWN ABANDONMENT, AWAITING EXAMINER ACTION |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |