US20120042343A1 - Television Remote Control Data Transfer - Google Patents
- Publication number
- US20120042343A1
- Authority
- US
- United States
- Prior art keywords
- computer
- television
- user
- computing device
- results
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4828—End-user interface for program selection for searching program descriptors
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234336—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/41407—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
Definitions
- This document relates to submitting data, such as a voice-based search query, on a first computer, such as a smartphone, and having results from the submitted data, such as search results, appear automatically on a second computer, such as a television monitor or a desktop computer.
- While desktop and laptop computers may have been the most prevalent computers in people's lives in the past, most people are now more likely to interact with smartphones, DVRs, televisions, and other consumer devices that include computers in them.
- Certain computers are well-suited for entering and editing information, such as desktop and laptop computers.
- Other devices are better suited to delivering information but not receiving it, such as televisions that do not include keyboards, or that have keyboards of limited size.
- Some computers are best used in certain situations, and other computers in other situations.
- A smartphone is typically best used on the go and at close quarters.
- A television is better used while a user is stationary, and frequently from relatively long distances.
- This document discusses systems and techniques by which a person may enter data using one computer, and may use associated data by employing another computer.
- The associated data may be generated at the other computer based on the user's submission at the first computer.
- The linking of the two computers may occur by the first computer submitting a request to a server system, receiving a response, and sending information directly to the second computer using the response.
- The second computer may then send that same or resulting information to a second server system (which may be part of, or operate in cooperation with, the first server system) and may use a response from the second server system to provide information to a user of the two computers.
- A viewer of a television program may be holding a smartphone and be using it as a remote control via an appropriate application or app installed on the smartphone.
- The smartphone may be programmed with APIs that accept voice input, package audio of the voice input and send it to a speech-to-text server system via the internet, and receive in response the text of what was spoken by the user.
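The round trip just described can be sketched as follows. This is a minimal illustration only: the request fields, the stand-in server function, and the recognized text are all assumptions, not the patent's implementation.

```python
def package_audio(audio_bytes, sample_rate=16000):
    """Wrap raw microphone audio in a request envelope (fields assumed)."""
    return {"sample_rate": sample_rate, "audio": audio_bytes.hex()}

def fake_speech_to_text(request):
    """Stand-in for the remote speech-to-text server system; a real
    deployment would run speech recognition on the decoded audio."""
    return {"text": "movies with Danny DeVito"}

def recognize(audio_bytes):
    """Phone-side flow: package the audio, send it up, return the text."""
    response = fake_speech_to_text(package_audio(audio_bytes))
    return response["text"]
```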
- The smartphone app may then forward the text, which might be a media-related search such as “movies with Danny DeVito”, to the television (which may be a television itself, a set-top box, a DVD player, or a similar adjunct appliance that can be used with a television).
- The television may not have similar speech-to-text functionality, so the use of the smartphone may effectively provide the television with that functionality.
- The television may then, according to an application running on it, recognize that text coming from the smartphone app is to be treated in a particular manner.
- The television may submit the text to a remote search engine that searches on media-related terms and returns media-related results.
- The search engine may search programming information for an upcoming period, and also media-related databases that reflect movies, songs, and programs, the artists that appeared in each, and summaries of such items, much like many well-known movie- and TV-related web sites provide to users who visit those sites.
- The display may be delayed, such as when the second computer is not currently logged onto the system, so that the results may be delivered when the user subsequently tunes in or logs on.
- The results for such a delayed delivery may be generated at the time the request is submitted (and may be stored), or at the time the user later gets them at the second computer (so that the request is stored and is then executed when delivery of the results is to occur).
- The user may speak the query “sports tonight” into a smartphone while driving in his car, but not be able to interact with it at the present time (because he is busy and/or because the results are not the type of thing that can be interacted with effectively on a smartphone).
- Results in such a situation could therefore be sent automatically for display on the user's television, either on the back end through a server system, or by being held on the smartphone until the smartphone identifies itself as being in the vicinity of the user's home WiFi router, then checking whether the television is on to be communicated with, and communicating the text of the query when such conditions occur.
- The user may then immediately be presented with such results on his television when he walks into the house, and may quickly select one of them.
- The provision of the query to the television may also occur when the user is within a set distance of his home (e.g., by determining with GPS functionality on the smartphone that he is within ¼ mile of the home). The television may be turned on automatically as he approaches the home, tuned to a channel that is determined to be most relevant to the query (e.g., to a sport that is on the highest-rated channel and a type of sport that the user has identified on a profile as his favorite), while preserving the user's ability to quickly change to another sporting event that is currently being displayed.
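The held-until-home delivery in the “sports tonight” example might look like the sketch below. The condition names and the store itself are illustrative assumptions.

```python
class DeferredQueryStore:
    """Holds a spoken query on the phone until delivery conditions are
    met (home WiFi seen and the television reachable), per the example."""

    def __init__(self):
        self.pending = []

    def submit(self, query_text):
        self.pending.append(query_text)

    def try_deliver(self, on_home_wifi, tv_is_on, send):
        """Deliver every held query once both conditions hold; otherwise
        keep holding. Returns the queries delivered on this attempt."""
        if not (on_home_wifi and tv_is_on):
            return []
        delivered, self.pending = self.pending, []
        for query in delivered:
            send(query)  # e.g., transmit the text over the LAN to the TV
        return delivered
```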
- The data flow to and from the smartphone and television may occur in a variety of ways.
- The smartphone may initially communicate with a remote server system, receive results back from it, and then forward the results, or information derived from the results, directly to the television.
- The television may then send data to another computer server system and receive results back from it, such as media-related results that can be displayed automatically in a list of results and as part of a program guide grid in a familiar manner.
- The communications with the servers may occur over the internet, while the communications between the smartphone and the television may occur only over a local area network, such as a WiFi or similar network.
- The smartphone may send a file of the spoken query to a remote server system and may receive the ultimate results, which it may then pass to the television for display, without the television having to communicate with a server system.
- The smartphone may have a speech-to-text app that sends the speech up to a server, receives text back from the server, and passes it to a television remote control app running on the smartphone; that app may then submit the text to a media-specific search engine, which may return data for building a list and grid of programs, and the smartphone may forward that data to the television, where it may be displayed.
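That phone-mediated variant, in which the television never contacts a server, can be sketched with the two server interactions injected as stand-in callables. All names here are assumptions for illustration.

```python
def phone_mediated_flow(audio, speech_to_text, media_search, send_to_tv):
    """Phone-side orchestration: audio -> text -> media results -> TV.
    The server calls are passed in so the sketch stays self-contained."""
    text = speech_to_text(audio)           # first server round trip
    results = media_search(text)           # media-specific search engine
    send_to_tv({"query": text, "results": results})  # LAN hop to the TV
    return results
```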
- The smartphone may display results that are suited to small-screen display, and the television may display results that are suited to large-screen display.
- The smartphone may display a vertical list of programs that are responsive to a query (e.g., all soon-upcoming episodes of Seinfeld if the user spoke “When is Seinfeld?”), while the television may show the same results, but in the context of a two-dimensional program guide grid.
- The user may step through the results in the list on the smartphone, and the grid may automatically jump, in a synchronized manner, to the corresponding episode on the television. If the user selects one of the episodes in the list, the television may immediately tune to the episode if it is currently being shown, or may jump to it later when it starts and/or set it for recording on a personal video recorder (PVR).
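The lockstep behavior between the phone's list and the TV's grid might be modeled as a shared selection cursor, sketched here under assumed names.

```python
class SyncedResults:
    """Shared selection over one result set: the phone renders a flat
    list, and the TV centers its program-guide grid on the same episode."""

    def __init__(self, episodes):
        self.episodes = episodes
        self.index = 0

    def step(self, delta):
        """Move the phone's list cursor; the TV grid follows. The cursor
        is clamped to the ends of the list."""
        self.index = max(0, min(len(self.episodes) - 1, self.index + delta))
        return self.grid_focus()

    def grid_focus(self):
        """Episode the TV's grid should currently surround."""
        return self.episodes[self.index]
```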
- A user of multiple computing devices may be allowed to submit information using a computing device that is best suited to such submission, such as a smartphone that performs speech-to-text conversion (perhaps via a server system to which it sends audio files).
- The user may then review results from the information on a different computing device that is better suited for such review, such as a television.
- Such techniques may allow users to easily extend the functionality of computers that they already own.
- Software to enable such data submission and routing may be easily added to a smartphone; alternatively, a user may simply use a browser on the smartphone to log into an account on a hosted service that may then pass the information to a browser on another device, or the provider of the account may recognize that certain search results should be provided to a target computer that has previously been registered with, or logged into, the account.
- The user may employ an app for speech-to-text conversion on the smartphone to enable voice inputs to a television that does not itself support speech-to-text conversion.
- The two or more computing devices may interact directly or through server systems, so that each of the computing devices can provide its best features, and the two devices together can provide functionality that is even better than the separate, additive functionalities of the devices.
- A computer-implemented method for information sharing between a portable computing device and a television system comprises receiving, by the portable computing device, a spoken input from a user of the portable computing device; submitting a digital recording of the spoken query from the portable computing device to a remote server system; receiving from the remote server system a textual representation of the spoken query; and automatically transmitting the textual representation from the portable computing device to the television system.
- The television system can be programmed to submit the textual representation as a search query and to present to the user media-related results that are determined to be responsive to the spoken query.
- The method can also comprise, before automatically transmitting the textual representation, pairing the mobile computing device and the television system over a local area network using a pairing protocol by which the mobile computing device and the television system communicate with each other in a predetermined manner.
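One minimal shape for such a pairing step is a PIN-confirmed handshake that yields a shared session token. The claim leaves the protocol open, so everything below is an assumption for illustration.

```python
import hashlib

def pair(phone_id, tv_id, pin_entered, pin_shown):
    """Toy pairing: the TV displays a PIN, the user types it into the
    phone, and both sides derive the same session token for later
    messages. Returns None when the PIN does not match."""
    if pin_entered != pin_shown:
        return None
    material = f"{phone_id}|{tv_id}|{pin_shown}".encode()
    return hashlib.sha256(material).hexdigest()
```

Because the token depends only on the two device identities and the PIN, the phone and the television each compute it independently and arrive at the same value.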
- The method can also comprise using the textual representation to perform a local search of files stored on recordable media located in the television system.
- The method can include automatically submitting the textual representation from the television system to a remote search engine, receiving in return search results that are responsive to a query in the textual representation, and presenting, by the television system, the search results.
- The search results can be presented as a group of music, movie, or television items that are determined to be responsive to the query, and can be presented on the television system so that the user may select one or more of the items for viewing or listening.
- The method can include transmitting all or a portion of the search results from the television system to the mobile computing device.
- The method can further comprise providing to the search engine a request type for the search request that defines a type of information to be provided in the search results, and receiving search results that the search engine has directed to the request type.
- The method can also include determining on the mobile computing device whether the spoken input is directed to the television system, and automatically transmitting the textual representation from the portable computing device to the television system only if the spoken input is determined to be directed to the television system.
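The determination of whether an utterance is directed to the television could be as simple as a cue-word check. The cue list below is purely an illustrative assumption; a real system might instead consider the active app or use a trained classifier.

```python
TV_CUES = ("watch", "record", "channel", "episode", "tune", "movie", "show")

def directed_at_television(text):
    """Heuristic gate: forward the recognized text to the TV only when
    the utterance mentions a television-related cue word."""
    lowered = text.lower()
    return any(cue in lowered for cue in TV_CUES)
```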
- The method can include determining that the television system is not currently available to display the results, and storing the results or the textual representation until the television system is determined to be available to display the results.
- The method can further comprise receiving from the television system an indication that a user has selected a portion of the search results; automatically causing a display on the portable computing device to change in response to receiving the indication; receiving a subsequent user input on the portable computing device; and causing the presentation of the search results to change in response to receiving the subsequent user input.
- A computer-implemented method for information sharing between computers comprises receiving a spoken input at a first computer from a user of the first computer; providing audio of the spoken request to a first remote server system; receiving a response from the first remote server system, the response including text of the spoken request; and automatically transmitting data generated from the response, including the text of the spoken request, from the first computer to a second computer that is near the first computer, wherein the second computer is programmed to automatically perform an action that causes a result, generated by applying an operation to the transmitted data, to be presented to the user of the first computer.
- The method can also comprise automatically submitting the text of the spoken request from the second computer to a remote search engine, receiving in return search results that are responsive to a query in the text of the spoken request, and presenting, by the second computer, the search results.
- The search results can be presented on the second computer as a group of music, movie, or television items that are determined to be responsive to the query, and can be presented on the second computer so that the user may select one or more of the items for viewing or listening.
- The method can include transmitting all or a portion of the search results from the second computer to the first computer.
- A computer-implemented system for information sharing comprises a mobile computing device and software stored on the mobile computing device.
- The software is operable on one or more processors of the mobile computing device to transmit spoken commands made by a user of the mobile computing device to a remote server system; receive in response, from the remote server system, text of the spoken commands; and automatically provide the text received from the remote server system to a second computer operating in the close vicinity of the mobile computing device.
- The system may also include the second computer, wherein the second computer is programmed to provide the text received from the remote server system to a second remote server system as a search query, and to use search results received in response from the second remote server system to present the search results on a display of the second computer.
- The second computer can comprise a television, and the first computer and the second computer can be programmed to automatically pair over a local data connection when each is within communication range of the local data connection. Also, the second computer can be programmed to submit the text to a search engine that performs searches directed specifically to media-related content.
- FIG. 1A shows an example by which data may be submitted at a first computer and reviewed and handled at a second computer.
- FIG. 1B is a schematic diagram showing communication between user computers and server systems.
- FIG. 2A is a schematic diagram of a system for sharing information between computers.
- FIG. 2B is a block diagram of a mobile computing device and system for sharing information between computers.
- FIG. 3A is a flow chart that shows a process for receiving a request from a first computer and supplying information that is responsive to the request to a second computer.
- FIG. 3B is a flow chart that shows a process for processing speech input to a television remote control to affect a display on an associated television.
- FIGS. 4A-4B are swim lane diagrams for coordinating information submission and information provision between various computers and a central server system.
- FIG. 4C is an activity diagram for pairing of two computer systems in preparation for computer-to-computer communications.
- FIG. 4D is a schematic diagram showing example messages that may be used in a computer-to-computer communication protocol.
- FIG. 4E is a swim lane diagram of a process for providing voice input to a television from a mobile computing device.
- FIG. 5 is a block diagram of computing devices that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers.
- This document describes systems and related techniques for passing information from a first computer to a server system, creating information that is responsive to the passed information using the server system, and then automatically returning responsive information from the server system to the first computer and on to a second computer that is different from the first computer.
- The second computer may then use the information it receives to send a request to a second server system, may obtain a response, and may provide output to a user of the two computers.
- A search query is spoken by a user into a first computer, such as a smartphone, and is submitted to a search engine that is remote from the smartphone, such as over the internet.
- A textual representation of the spoken query is then returned to the first computer, which in turn automatically forwards the textual representation to a second computer (perhaps after reformatting or otherwise modifying the textual representation).
- Upon receiving the textual representation, the second computer automatically processes it, typically by submitting it both to a local search engine on the second computer (e.g., to determine whether media programming information on the second computer, such as recorded television programs and electronic program guide (EPG) information, matches the query) and to a server-based public search engine, which may be directed particularly to media-based results (i.e., those relating to various types of electronic entertainment, education, and similar content). Those results may then be displayed by the second computer (e.g., on a display connected to the second computer, such as a television that houses the second computer or that is connected to a set-top box that houses the second computer).
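The dual local-plus-server lookup can be sketched as a simple merge. The ordering (local recordings first) is an assumption the text does not fix, and the stand-in search callable is illustrative.

```python
def combined_search(query, local_items, remote_search):
    """Run the query against the set-top box's own recordings/EPG entries
    and against a (stand-in) media search service, merging the results
    with local hits listed first and duplicates dropped."""
    q = query.lower()
    local_hits = [item for item in local_items if q in item.lower()]
    remote_hits = [r for r in remote_search(query) if r not in local_hits]
    return local_hits + remote_hits
```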
- The user may then interact with the results using the first computer, and such interaction can be reflected on a display of the first and/or second computer.
- A user can browse a basic representation of search results on a smartphone, and a more detailed representation of the results on a television.
- User input may occur via a touchscreen on the smartphone, physical buttons on the smartphone, or sensed motion of the smartphone.
- FIG. 1A shows an example by which data may be submitted at a first computer and reviewed and handled at a second computer.
- A system 100 is shown, in which a user 102 of a smartphone 104 is sitting on a couch watching a television 106.
- The user 102 may be sitting down for an evening of watching primetime television but may not know immediately what he wants to watch.
- The user may be watching a show that he does not like and may be interested in finding a better show.
- The user may also be interested in something other than television.
- The user may be watching the news, may hear reference to a certain geographic area, and may want to perform some quick research to follow up on what he has heard. Other similar interests of the user may be addressed by the system 100.
- The user is shown speaking into the smartphone 104 and asking the query “when is Seinfeld on?”
- This query indicates that the user would like to find out when the next episode or episodes of the television situation comedy Seinfeld is being shown by his television provider.
- The smartphone 104 may be equipped with voice search capabilities, by which certain requests spoken into the smartphone 104 are provided as sound files to a remote server system that may convert the sound files to text and then create search results that are responsive to the request.
- The smartphone 104 may execute an application or app that packages the sound, sends it to a server system for conversion to text, and then receives back the converted text and forwards it to one or multiple computers, such as television 106, with which the smartphone 104 has paired for communication (e.g., over a LAN such as a WiFi network).
- The television 106 may be a modern television that is provided with a certain level of computing capability, and may include WiFi or other data networking technology built into the television 106, or provided as an adjunct to the television 106, such as in a cable or satellite box. References here to a television or television system are intended to cover both integrated and separate approaches.
- The smartphone 104 and television 106 may have been previously registered with a speech-to-text service and a search server system, respectively, and correlated to an account for user 102 (e.g., by the user logging into an account with each device). In this manner, the relevant server systems and services may readily determine that the two devices are related to, or registered to, the user 102, and may perform actions like those discussed here using such knowledge.
- Search results may be sent back to the system 100.
- The search results may be displayed on the smartphone 104.
- The smartphone 104 may not be large enough, however, to display a complete electronic program guide grid in the form in which the “Seinfeld” search results may be provided by the system.
- The smartphone 104 may also not be equipped to take appropriate actions using the search results, such as switching automatically to a channel on which an episode of Seinfeld is being played, or programming a personal video recorder to record the current or a future episode of Seinfeld that appears in the search results.
- The search results have therefore been provided instead (or in addition) to the television 106, and the user may then further interact with the system 100 to carry out his particular wishes.
- The user may interact further with the smartphone 104, such as by using a remote control application on the smartphone 104, so as to cause channels on the television 106 (including via a cable or set-top box) to be changed to the appropriate channel automatically.
- The results may get to the television 106 by various mechanisms. For example, a central server system may identify all devices that are currently logged in or registered for the user, and may then determine which devices may be able to display the relevant results. Where multiple active devices are capable of handling the results, the system may determine which device is most likely to be the target of the user's input. Such a determination may be made, for example, by identifying the active device that is geographically closest to the device that submitted the query, or the device that best matches a type of the results. For example, if the results are determined to be media-related (e.g., they are links to TV episodes and streaming movies), then a television can be preferred over other devices for receiving the results.
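That device-selection logic might rank candidates by result-type match first and proximity second. The field names and the scoring are assumptions for illustration only.

```python
def pick_target_device(devices, result_kind, origin_xy):
    """Among the user's active devices, prefer one whose kind matches the
    result type (e.g. a 'tv' for media-related results), breaking ties
    by distance from the device that submitted the query."""
    def score(device):
        mismatch = 0 if device["kind"] == result_kind else 1
        dx = device["xy"][0] - origin_xy[0]
        dy = device["xy"][1] - origin_xy[1]
        return (mismatch, dx * dx + dy * dy)  # tuple sorts lexicographically
    return min(devices, key=score)
```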
- The smartphone 104 may submit a sound file to one server system and receive equivalent text in return, and may then forward the text to the television 106.
- The television 106 may in turn be programmed to send the text to a search engine service that is programmed especially to provide media-related results (e.g., music, movies, and television programs), and to then display the search results, such as by showing a program guide grid around upcoming episodes of Seinfeld.
- The television 106 may alternatively, or additionally, search its own local storage in a similar manner.
- FIG. 1B is a schematic diagram showing communication between user computers and server systems.
- The system 110 shown in the figure may correspond to the scenario just discussed with respect to FIG. 1A.
- A smartphone 114 may communicate over the internet with a speech-to-text server system 112, and in turn with a television 116 that is proximate to (essentially in the same room as) the smartphone 114.
- The television then in turn communicates with a search engine 118 server system.
- Such communications may be triggered by a user speaking into a microphone on the smartphone 114; all subsequent actions are then taken automatically by the system 110, without further intervention by the user (until the user interacts with the results displayed, in the end, on the television 116).
- The smartphone 114 may take a familiar form, and may generally be a touchscreen device onto which a variety of applications or apps may be loaded and executed.
- One such application may be included with the operating system or loaded later, and may provide an application programming interface (API) that receives spoken input, converts it to text, and then provides the text to whatever application is currently the focus of the device.
- While the smartphone 114 may perform its own speech-to-text conversion, such conversion may also occur with the assistance of the speech-to-text server system 112 .
- An audio file of various forms may be passed up from the smartphone 114 to the server system 112, where it may be analyzed and converted into corresponding text, which may be encoded, encrypted, or transmitted as plaintext back to the smartphone 114, as indicated by flow Arrow B.
- The smartphone 114 can then automatically forward the text to the television 116.
- The smartphone 114 may serve as a voice input front end for the television 116, which may not have a microphone or a mechanism by which to position a microphone directly in front of the user (e.g., for cost purposes, the remote control shipped with the television 116 may communicate only via traditional RF or IR mechanisms).
- The passing of information from the smartphone 114 to the television 116 is shown in the figure by Arrow C.
- The television 116 may then process the information, either internally to the television or externally by passing the information, or a derivation of the information, to the search engine 118 server system or another form of server system.
- The textual version of the spoken input from the user may be passed up to the search engine 118 along with other appropriate information, such as a flag that indicates the television 116 is seeking media-related results, as opposed to general web results, images, shopping results, or other such common sub-categories of results.
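The upward request with its results-type flag might look like the envelope below. The field names are assumptions for illustration, not a published format.

```python
def build_search_request(text, corpus="media"):
    """Envelope the television sends to the search engine: the recognized
    text plus a flag steering the engine toward media-related results
    rather than general web, image, or shopping results."""
    allowed = {"media", "web", "images", "shopping"}
    if corpus not in allowed:
        raise ValueError(f"unknown corpus: {corpus}")
    return {"q": text, "corpus": corpus}
```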
- the television may also perform other transformations or formatting changes to the data or information.
- Arrow D shows the communication and request passing up from the television 116 to the search engine 118 .
- the search engine 118 may be a public search engine that can be accessed manually by users, and has also published an API by which automated systems may submit queries and receive search results in a predetermined and defined format.
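A query submission of this kind might be packaged as shown below. The patent does not define the API's actual format, so the field names (`"q"`, `"corpus"`) are illustrative assumptions.

```python
import json

def build_media_query(text: str) -> str:
    """Package transcribed text with a flag indicating that media-related
    results, rather than general web results, are sought. The field names
    are assumptions, since the published API's format is not specified."""
    return json.dumps({"q": text, "corpus": "media"})

payload = build_media_query("seinfeld")
```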
- Arrow E shows the results being passed down from the search engine 118 to the television 116 .
- the results may be particular episodes of a program like Seinfeld, and the display may show the various episodes in a numbered list of search results on the left side, and a grid surrounding whatever episode is currently selected in the list.
- certain of the search result information may be communicated from the television 116 to the smartphone 114 .
- the information for generating the list of results may be transmitted, and as shown in the figure, the smartphone 114 is displaying the same search result list, but is not also displaying the program guide grid, because the smartphone's small display lacks room for such an extensive presentation of content.
- the user may then select particular episodes by tapping on the corresponding episode on the display, which may cause the program guide grid to scroll so that it again surrounds the newly selected episode.
- Other controls may be shown on the smartphone, such as buttons to let a user choose to tune to a selected program immediately, or to set a PVR to record the episode in the future.
- other relevant remote control functionality may be provided with an application executing on the smartphone 114 , such as buttons for changing channels, colors, volume, and the like.
- the system 110 may, in certain implementations, provide for synergies by using speech input and conversion capabilities that are available on a user's telephone, tablet, or other portable computing device, in order to control interactivity with a television.
- the voice input by a user may be directed to a chat session being displayed on the television 116 .
- a user may speak into the smartphone 114 , have the speech transcribed, and then have the transcribed text posted to the chat session, either fully automatically or after the user confirms that the text returned from the server system 112 is accurate (or if the user does not respond during a pre-set delay period).
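The confirm-or-timeout behavior just described can be expressed as a small decision function. The response strings and the five-second default delay are illustrative assumptions.

```python
def should_post(user_response, elapsed_s: float, delay_s: float = 5.0) -> bool:
    """Decide whether to post transcribed text to the chat session: post on
    explicit confirmation, and post automatically when the user has not
    responded within the pre-set delay period (default here is an assumption)."""
    if user_response == "confirm":
        return True
    if user_response == "reject":
        return False
    return elapsed_s >= delay_s  # no response yet: auto-post after the delay
```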
- the conversion may also be accompanied by a language translation, e.g., from a language of the user into a language of the other person on the chat session.
- an audio file may be sent to the server system 112 in English (Arrow A in the figure), and text may be returned in French (Arrow B) and then supplied to a chat application.
- the French user may employ similar functionality, or text may arrive at the television 116 in French and can be passed from the television 116 to the smartphone and then to a server system for translation, or passed directly from the television 116 to the server system for translation.
- Other such applications may likewise employ speech-to-text conversion and language translation using one device (e.g., smartphone 114 ) that then causes the converted and/or translated text or audio to be passed to another computing device (e.g., television 116 ).
- FIG. 2A is a schematic diagram of a system 200 for sharing information between computers.
- the system 200 is established to allow a user who owns multiple computing devices to share certain data between those devices, including by passing one form of data to the central server system, having the central server system obtain other data in response to the submission, and providing that other data to a separate target computer that is associated with the user, where the association may be identified by determining that the two devices are logged into the same user account.
- the selection of which device to send the data to may be made automatically, such as using data stored in the user's device or by a determination made by the central server system, so that the user need not identify the target of the information when the user asks for the information to be sent.
- two consumer devices in the form of smartphone 208 and a television 206 are shown and may be owned by a single individual or family.
- both devices have been logged into a central server system 204 and that communication sessions have been established for both such devices 208 , 206 , and/or that the two devices have been paired with each other, such as in the manner discussed below in relation to FIG. 4C .
- submissions could be made separately to the central server system 204 by either of the devices 208 , 206 , and normal interaction, such as web surfing and other similar interaction that is well known, may be performed in appropriate circumstances with either of the devices.
- the various server-side functionality is shown as residing in a single server for ease of explanation, but multiple servers and server systems may be employed.
- an arrow and the label “voice” is shown entering the smartphone 208 to indicate that a user is speaking voice commands into the smartphone 208 .
- the smartphone 208 may be programmed to recognize certain words that are stated into its microphone, as being words to trigger a search query that involves passing sound data up to the central server system 204 through the network 202 , such as the internet.
- a user may press an on-screen icon on the smartphone 208 in order to change it from a mode for typed input into a mode for spoken input.
- the spoken input may be converted and/or translated by an operating-system-based service provided on the smartphone 208 and made available to subscribing applications that execute on the smartphone 208 .
- in this example, the voice entry is a search query.
- the central server system 204 is provided with a number of components to assist in providing search results in response to the search query.
- a central server system may involve a large number of servers and a large number of other components and services beyond those shown here.
- a voice interface 210 may be provided, and a web server that is part of a central server system 204 may route to the voice interface 210 data received in the form of voice search queries.
- the voice interface 210 may initially convert the provided voice input to a textual form and may also perform formatting and conversion on such text.
- the interface 210 or another component may perform language translation on the submitted input in appropriate circumstances, and may determine the target language based on metadata that has been passed with the digitized spoken audio file.
- the search system may be implemented so that a user wanting to submit a voice query is required to use a trigger word before the query, either to start the device listening for the query, or to define a context for the query (e.g., “television”).
- the voice interface 210 may be programmed to extract the trigger word from the text after the speech-to-text conversion occurs, because the trigger word is not truly part of the user's intended query.
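The trigger-word extraction step might look like the following sketch; the particular trigger words and the leading-word convention are assumptions for illustration.

```python
TRIGGER_WORDS = {"television", "search"}  # illustrative trigger words

def strip_trigger(transcript: str):
    """Remove a leading trigger word from the converted text, since the
    trigger is not truly part of the user's intended query; return the
    trigger (usable as query context) and the remaining query text."""
    first, _, rest = transcript.strip().partition(" ")
    if first.lower() in TRIGGER_WORDS and rest:
        return first.lower(), rest
    return None, transcript.strip()
```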
- a search engine 214 may receive processed text from the voice interface 210 , and may further process the text, such as by adding search terms for synonyms or other information in ways that are readily familiar.
- the search engine 214 may access information in a search index 218 to provide one or more search results in response to any submitted search query.
- the context of the search may also be taken into account to limit the types of search results that are provided to the user. For example, voice search may generate particular types of search results more often than other search results, such as local search results that indicate information in a geographical area around the user.
- certain search terms such as the titles of television shows may indicate to the search engine 214 that the user is presenting a certain type of search, i.e., a media-related search.
- the search engine 214 may format the search results in a particular form, such as in the form of an electronic program guide grid for television shows. Such results may also be provided with additional information or meta data, such that a user could select a cell in a program guide so as to provide a message to a personal video recorder to set a recording of that episode.
- the search engine 214 may also obtain a query from an external source, such as the television 206 .
- the voice interface 210 may convert spoken input into text and return the text to the smartphone 208 , which may forward the text to television 206 , which may in turn submit the text as a query to the search engine 214 .
- the responses to queries made by the search engine 214 may be based on information that is stored in a search index 218 , which may contain a variety of types of information, but may have media-related information set out from the other information so that media-directed search results may be returned by the system.
- a results router 212 is responsible for receiving search results from the search engine 214 and providing them to an appropriate target device.
- the target device is the device from which the search query was submitted.
- the target device may be a different device, and the results may be provided to it either directly from the central server system 204 , or may be provided to the smartphone 208 and then forwarded to the target device, which in this situation may be the television 206 .
- the television may receive the results from the search engine 214 , and then may pass all or some of the results to the smartphone 208 .
- the results router 212 may refer to data in a user device information database 216 to identify the addresses of devices that are associated with an account for the user who is logged in with the particular devices.
- the search system 204 may determine how to properly route results to each of the devices—the system 204 may simply respond to requests in a normal manner, and not need to correlate two different devices as being related to each other in any manner.
- the system 204 may determine to send the results directly to television 206 , rather than back to smartphone 208 .
- system 204 may generate results in a manner that is formatted to best work with television 206 , but deliver those results to device 208 in a manner so the device 208 automatically forwards the results for display on television 206 .
- the system 204 may determine which of those televisions is currently logged on and operating, and may determine to send the search results to that particular television.
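The routing decision described above can be sketched as a function over the user's registered devices. The record fields (`type`, `address`, `logged_in`) and the preference order are assumptions, not the patent's actual implementation.

```python
def route_results(devices):
    """Choose a target for search results from among the devices registered
    to the user's account: prefer a currently logged-in television, and fall
    back to any other logged-in device; return None if none is available."""
    logged_in = [d for d in devices if d["logged_in"]]
    for d in logged_in:
        if d["type"] == "television":
            return d["address"]
    return logged_in[0]["address"] if logged_in else None

devices = [
    {"type": "smartphone", "address": "10.0.0.2", "logged_in": True},
    {"type": "television", "address": "10.0.0.3", "logged_in": True},
    {"type": "television", "address": "10.0.0.4", "logged_in": False},
]
```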
- FIG. 2B is a block diagram of a mobile device 222 and system 220 for sharing information between computers.
- the system 220 is similar to the system 200 in FIG. 2A , but in this instance, additional details about the mobile device 222 , which acts as a client here, are provided.
- the mobile device 222 is a cellular telephone.
- the mobile device 222 can be a personal digital assistant, a laptop computer, a netbook, a camera, a wristwatch, or another type of mobile electronic device.
- the mobile device 222 includes a camera (not shown) with camera controller 232 , and a display screen 223 for displaying text, images, and graphics to a user, including images captured by the camera.
- the display screen 223 is a touch screen for receiving user input. For example, a user contacts the display screen 223 using a finger or stylus in order to select items displayed by the display screen 223 , to enter text, or to control functions of the mobile device 222 .
- the mobile device 222 further includes one or more input keys such as a track ball 224 for receiving user input.
- the track ball 224 can be used to make selections, return to a home screen, or control functions of the mobile device 222 .
- the one or more input keys can include a click wheel for scrolling through menus and text.
- the mobile device 222 includes a number of modules for controlling functions of the mobile device 222 , including modules to control the receipt of information and for triggering the providing of corresponding information to other devices (which other devices may, in turn, also include the structural components described here for device 222 ).
- the modules can be implemented using hardware, software, or a combination of the two.
- the mobile device 222 includes a display controller 226 , which may be responsible for rendering content for presentation on the display screen 223 .
- the display controller 226 may receive graphic-related content from a number of sources and may determine how the content is to be provided to a user. For example, a number of different windows for various applications 242 on the mobile device 222 may need to be displayed, and the display controller 226 may determine which to display, which to hide, and what to display or hide when there is overlap between various graphical objects.
- the display controller 226 can include various components to provide particular functionality for interacting with displayed components, which may be shared across multiple applications, and may be supplied, for example, by an operating system of the mobile device 222 .
- An input controller 228 may be responsible for translating commands provided by a user of mobile device 222 .
- commands may come from a keyboard, from touch screen functionality of the display screen 223 , from trackball 224 , or from other such sources, including dedicated buttons or soft buttons (e.g., buttons whose functions may change over time, and whose functions may be displayed on areas of the display screen 223 that are adjacent to the particular buttons).
- the input controller 228 may determine, for example, in what area of the display commands are being received, and thus in what application being shown on the display the commands are intended for.
- the input controller 228 may interpret input motions on the touch screen 223 into a common format and pass those interpreted motions (e.g., short press, long press, flicks, and straight-line drags) to the appropriate application.
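The classification of raw touch events into a common format might be sketched as follows; the duration and distance thresholds are illustrative assumptions.

```python
def classify_touch(duration_ms: float, dx: float, dy: float) -> str:
    """Map a raw touch event (duration and displacement in pixels) to one of
    the common gestures an input controller passes to applications.
    The thresholds below are assumptions chosen for illustration."""
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 10:  # essentially stationary: a press of some kind
        return "long press" if duration_ms >= 500 else "short press"
    if duration_ms < 150:  # fast movement over a short time
        return "flick"
    return "drag"
```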
- the input controller 228 may also report such inputs to an event manager (not shown) that in turn reports them to the appropriate modules or applications. For example, a user viewing an options menu displayed on the display screen 223 selects one of the options using one of the track ball 224 or touch screen functionality of the mobile device 222 .
- the input controller 228 receives the input and causes the mobile device 222 to perform functions based on the input.
- a variety of applications 242 may operate, generally on a common microprocessor, on the mobile device 222 .
- the applications 242 may take a variety of forms, such as mapping applications, e-mail and other messaging applications, image viewing and editing applications, video capture and editing applications, web browser applications, music and video players, and various applications running within a web browser or running extensions of a web browser.
- an information sharing application 230 e.g., a television remote control application
- a wireless interface 240 manages communication with a wireless network, which may be a data network that also carries voice communications.
- the wireless interface 240 may operate in a familiar manner, such as according to the examples discussed below, and may provide for communication by the mobile device 222 with messaging services such as text messaging, e-mail, and telephone voice mail messaging.
- the wireless interface 240 may support downloads and uploads of content and computer code over a wireless network.
- the wireless interface 240 may also communicate over short-range networks, such as with other devices in the same room as device 222 , such as when results are provided to the device 222 and need to be forwarded automatically to another device in the manners discussed above and below.
- a camera controller 232 of the mobile device 222 receives image data from the camera and controls functionality of the camera.
- the camera controller 232 can receive image data for one or more images (e.g. stationary pictures or real-time video images) from the camera and provide the image data to the display controller 226 .
- the display controller 226 can then display the one or more images captured by the camera on the display screen 223 .
- the information sharing application 230 uses a GPS Unit 238 of the mobile device 222 to determine the location of the mobile device 222 .
- the GPS Unit 238 receives signals from one or more global positioning satellites, and can use the signals to determine the current location of the mobile device 222 .
- the mobile device 222 includes a module that determines a location of the mobile device 222 using transmission tower triangulation (which may also be performed on a server system) or another method of location identification.
- the mobile device 222 uses location information that is determined using the GPS Unit 238 so as to identify geo-coded information that is associated with the location of the mobile device 222 .
- location information obtained or determined by the GPS Unit 238 is provided to the information sharing application 230 .
- the information sharing application 230 can use the location information to identify geo-coded data 246 stored on the mobile device 222 .
- the geo-coded data 246 includes information associated with particular geographic locations.
- geo-coded data can include building names, business names and information, historical information, images, video files, and audio files associated with a particular location.
- geo-coded data associated with a location of a park may include hours for the park, the name of the park, information on plants located within the park, information on statues located within the park, historical information about the park, and park rules (e.g. “no dogs allowed”).
- the information sharing application 230 can use the current location of the mobile device 222 to identify information associated with geographic locations that are in close proximity to the location of the mobile device 222 .
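A proximity lookup over geo-coded data could be sketched as below, using the haversine great-circle distance; the one-kilometer radius and the entry fields are assumptions for illustration.

```python
import math

def nearby(entries, lat, lon, radius_km=1.0):
    """Filter geo-coded entries to those within radius_km of the device's
    GPS fix, using the haversine great-circle distance. The radius default
    and the entry record format are illustrative assumptions."""
    def haversine(lat1, lon1, lat2, lon2):
        r = 6371.0  # mean Earth radius in km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))
    return [e for e in entries if haversine(lat, lon, e["lat"], e["lon"]) <= radius_km]

geo_data = [
    {"name": "park", "lat": 44.980, "lon": -93.270},
    {"name": "distant museum", "lat": 45.500, "lon": -93.270},
]
```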
- the geo-coded data 246 can be stored on a memory of the mobile device 222 , such as a hard drive, flash drive, or SD card.
- the mobile device 222 may also contain no pre-stored geo-coded data.
- the geographical information can be used in various ways, such as passing the data to the central server system, so that the central server system may identify the logged-in device closest to the mobile device 222 , as that device may be the most likely one to which the system 220 is to send content submitted by the device 222 , or a result of the content submitted by the device.
- the device 222 uses a compass unit 236 , or magnetometer, in some examples, e.g., to determine a current viewing direction of a camera on the device 222 , within the horizontal plane of the camera.
- the compass unit 236 determines a direction in which a user of the mobile device 222 is looking with the mobile device 222 .
- Viewing direction information provided by the compass unit 236 can be used to determine where information is to be shared with other devices, such as by a system determining to share information with a device in the direction of the user where the user is pointing his or her mobile device 222 .
- the mobile device 222 further includes an accelerometer unit 234 which may be further used to identify a user's location, movement, or other such factors.
- the mobile device 222 includes user data 248 .
- the user data 248 can include user preferences or other information associated with a user of the mobile device 222 .
- the user data 248 can include a list of contacts and a list of ID's for other devices registered to a user. Such information can be used to ensure that information is passed from one person to another.
- the particular mobile device 222 shown here is generally directed to a smartphone such as smartphone 114 in FIG. 1 above and smartphone 208 in FIG. 2A above.
- a similar device may be a television that includes structures enabling it to receive communications from the smartphone, and to submit queries and other communications over the internet, such as to search media-related databases for purposes of responding to user requests entered on a smartphone but displayed on the television.
- FIG. 3A is a flow chart that shows a process for receiving a request from a first computer and supplying, to a second computer, information that is responsive to the request.
- the process involves handling requests from one computing device, generating information responsive to those requests, and providing that generated information to a second computing device that is related to the first computing device via a particular user who has been assigned to both devices (e.g., by the fact of both devices being logged into the same user account when the process occurs).
- the process begins at box 302 , where speech data is received by the process.
- For example, a search engine that is available to the public may receive various search queries that users of mobile telephones provide in spoken form. The system may recognize such submissions as being spoken queries in appropriate circumstances and may route them for proper processing.
- the speech data may in one example be sent with information identifying the device on which the data was received and a location of the device, in familiar manners. Such information may subsequently be used to identify an account for a user of the device, and to determine other devices that are registered to the user in the geographic location of the submitting device.
- the speech is converted to text form.
- Such conversion may occur by normal mechanisms, though particular techniques may be used to improve the accuracy of the conversion without requiring users of the system to train the system for their particular voices.
- a field of a form in which the cursor for the user was placed when they entered the query may include a label that describes the sort of information that is provided in the field, and such label information may be provided to a speech-to-text conversion system so as to improve the results of the conversion.
- a speech model may be selected or modified so as to address television-related terms better, such as by elevating the importance of television titles and television character names in a speech model.
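The biasing of a speech model toward television-related terms might be sketched as a re-scoring step over recognition hypotheses; the titles and boost factor here are assumptions for illustration.

```python
TV_TITLES = {"seinfeld", "frasier"}  # illustrative television titles

def best_hypothesis(candidates, boost=2.0):
    """Pick the speech-recognition hypothesis with the highest score after
    elevating the weight of television titles, as a crude stand-in for
    biasing the speech model itself. Titles and boost are assumptions."""
    def score(item):
        word, p = item
        return p * boost if word in TV_TITLES else p
    return max(candidates.items(), key=score)[0]
```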
- language translation may also be performed on a submission, and text or audio may be returned in the target language of the submission.
- the query is parsed and formatted. For example, certain control terms may be removed from the query (e.g., terms that precede the main body of the query and are determined not to be what the user is searching for, but are instead intended to control how the query is carried out), synonyms may be added to the query, and other changes may be made to the query to make a better candidate as a search query.
- the query is submitted to a search engine and results are received back from the search engine and formatted in an appropriate manner.
- the results may be formatted into an HTML or similar mark-up document that provides an interactive electronic program guide showing the search results in a guide grid.
- a user who is reviewing the guide may then navigate up and down through channels in the guide, and back and forth during times in the guide, in order to see other shows being broadcast around the same time, and on different channels, as the identified television program search result.
- the process identifies a related computer, meaning a computer that is related to the one that submitted the query. Such a determination may be made, for example, by consulting profile information about the user who submitted the query, to identify all of the computing devices that the user has currently or previously registered with the system, and/or that are currently logged into the system. Thus, at box 312 , the process determines whether a particular one of the computers associated with the user is currently logged in. If no such computer is currently logged in, or no logged-in computer is appropriate to receive the content (e.g., none is a type of computer that can display the content or is geographically near the device that submitted the query), the process may store the results 314 that were to be sent to the other computer.
- a user may make search queries while they are not able to view results at home, but such results may be presented to them at home as soon as they log back into their account with their home system (Box 316 ).
- the system may notify them of pending deliveries from the previously-submitted queries, and they may be allowed to obtain delivery of the information from the queries when they would like.
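The store-until-login behavior of boxes 314 through 316 can be sketched with a minimal in-memory queue; this is an illustration under assumed names, not the patent's actual implementation.

```python
class PendingResults:
    """Hold results destined for a user whose target computer is not
    currently logged in, and release them at the next login. A minimal
    in-memory sketch; a real system would persist this server-side."""
    def __init__(self):
        self._pending = {}

    def store(self, account, results):
        self._pending.setdefault(account, []).append(results)

    def on_login(self, account):
        # deliver and clear everything queued for this account
        return self._pending.pop(account, [])

store = PendingResults()
store.store("user@example.com", ["episode list"])
```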
- results are delivered to the related computer that was selected in box 310 .
- Such delivery may occur in a variety of forms, including by simply providing a common search results list or grouping to such related computer.
- the information may ordinarily be delivered via HTML or similar mark-up document that may also call JavaScript or similar executable computer code. In this manner, for instance, a user of a smartphone may speak a command into the smartphone, have the command converted and/or translated, and provided to a second computer for processing by that second computer.
- FIG. 3B is a flow chart that shows a process for processing speech input to a television remote control to affect a display on an associated television.
- the process is similar to that for FIG. 3A , but the process is centered here on a mobile device that receives voice input, submits it to a server system for resolution, and then passes the resolved text to a television system for further processing, such as for submission to a search engine and display of the search results on the television.
- the spoken input to the smartphone may be converted to text and/or translated in order to be submitted to a communication application that is executing on the television, such as a chat application, so that a user can sit on his couch and provide spoken input to a textual chat application that is running on his television.
- the process in this example begins at box 320 , where the smartphone is paired with a television.
- Such pairing may involve the two devices recognizing that they are able to access a network such as a WiFi LAN, and exchanging messages so as to recognize each other's existence, in order to facilitate future communication between the devices.
- a particular pairing example is discussed in more detail below with respect to FIG. 4C .
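The mutual-recognition aspect of pairing can be sketched as a simple hello exchange between two devices on the same network; the class and message shape are assumptions, and the real exchange in FIG. 4C may differ.

```python
class Device:
    """Minimal sketch of pairing: devices on the same LAN announce
    themselves and record each peer that answers, so that later
    communication (e.g., forwarding transcribed text) can be addressed."""
    def __init__(self, name: str):
        self.name = name
        self.peers = set()

    def announce_to(self, other: "Device") -> None:
        # exchange hello messages so each device records the other's existence
        other.peers.add(self.name)
        self.peers.add(other.name)

phone = Device("smartphone")
tv = Device("television")
phone.announce_to(tv)
```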
- the smartphone receives a spoken query from its user.
- the smartphone may be equipped with a microphone in a familiar manner, and may be loaded with an application that assists in converting spoken input into text or into another language.
- an application may be independent of any particular other application on the device, and may act as a universal converter or translator for any application that is currently operating on the device and has followed an API for receiving converted or translated information from the application.
- the application captures the spoken input and places it into a digital audio file that is immediately uploaded, at box 324 , to a server system for conversion and/or translation.
- the conversion/translation may also occur on the device if it has sufficient resources to perform an accurate conversion/translation.
- the smartphone receives, in response to uploading the audio, text that has been created by the server system, and forwards the text to the television with which it is paired.
- the server system may return another audio file that represents the spoken input but in a different language.
- the smartphone may then wait while the television processes the transmitted text (box 328 ).
- text of the query may be transmitted to the television and the television may perform a local or server-based search using the text.
- the text may relate, for example, to a program, episodes of a program, actors, or songs—i.e., if a user wants to watch some television, music, or movie programming, and is trying to find something he or she will like.
- the user may seek to be presented with a list of all movies or television programs in which George Clooney has appeared, simply by speaking “George Clooney.”
- the television may limit its search to media properties, and exclude searches of news relating to George Clooney, by determining the context of the query—i.e., that the user is watching television and spoke the command into a television remote control application.
- results are also returned to the smartphone, either from the television or from the smartphone communicating with another remote server system.
- the results may be a sub-set of the results displayed on the television, or may be data for generating controls that permit the user to interact with results that are displayed on the television. For example, if ten results are returned for the most popular George Clooney projects, the television may display detail about each result along with graphical information for each.
- the smartphone may display basic textual labels and perhaps small thumbnail images for each of the results. A one-to-one correspondence between results displayed on the smartphone and results displayed on the television may be used to allow the user to look at the television but press on a corresponding result on the smartphone in order to select it (e.g., to have it played or recorded).
- the user may interact with the search results, and may provide inputs to the smartphone (box 332 ), which inputs may be transmitted to the television (box 334 ), and reflected on the display there.
- the user may select one of the search results, and the television may change channels to that result, be set to record the result, or begin streaming the result if it is available for streaming.
- a user may take advantage of his or her smartphone's ability to convert spoken input to another language or to textual input. Such converted or translated input may then be automatically passed to the user's television and employed there in various useful manners. Such functionality may not have been available on the television alone, and the smartphone alone may not have provided access to the same experiences as the television. As a result, the user may obtain an experience, using devices the user already owns for other purposes, that greatly exceeds the experience from using the devices separately.
- FIGS. 4A and 4B are swim lane diagrams for coordinating information submission and information provision between various computers and a central server system. In general, these figures show processes similar to that shown in FIG. 3A , but with particular emphasis showing examples by which certain operations may be performed by particular components in a system.
- the process begins at boxes 402 , 404 , and 405 , where two different computers log in to a central server system and the server system starts sessions for those computers.
- the two systems may typically log into a central server system at different times.
- sessions may be kept open for those computers so that communication may continue in a typical manner with the computers. For example, one evening, a user may log into a service from a set-top box or from hardware integrated into a television, while watching prime time sports. The user may use such a media-watching device to search for information, including web and media-related information, and to have media programs streamed to his or her television.
- each of the devices may be related or correlated to the account, and by extension, to each other. Separately, the devices may have paired with each other if they were within range for direct communication or on the same LAN together.
- the first computer receives a query in a spoken manner from its user and submits that query to the server system.
- Such submission may involve packaging the spoken text into a sound file and submitting the sound file to the server system.
- the submission may occur by the user pressing a microphone button on a smart phone and turning on a recording capability for the smart phone that then automatically passes to the server system whatever sound was recorded by the user.
- the device may also crop the recording so that only spoken input, and not background noise, is passed to the server system.
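As a rough illustration of the cropping step just described, the sketch below trims leading and trailing low-amplitude samples from a recording so that only the spoken portion is passed along. A real device would use a proper voice-activity detector; the amplitude threshold here is an assumed, illustrative value.

```python
# Minimal sketch of cropping a recording: keep only the span between the
# first and last samples whose amplitude exceeds a (hypothetical) threshold.

def crop_silence(samples, threshold=500):
    """Return the sub-list between the first and last loud samples."""
    loud = [i for i, s in enumerate(samples) if abs(s) >= threshold]
    if not loud:
        return []  # nothing but background noise was recorded
    return samples[loud[0]:loud[-1] + 1]

recording = [3, -10, 900, -1200, 700, 12, 4]
assert crop_silence(recording) == [900, -1200, 700]
```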
- the server system receives, converts, and formats the query.
- the converting involves converting from a sound format to a textual speech format using various speech-to-text techniques.
- the converting may also, or alternatively, involve translation from the language in which the query was spoken and into a target language.
- the formatting may involve preparing the query in a manner that maximizes the chances of obtaining relevant results to the query, where such formatting may be needed to address an API for the particular search engine.
- the appropriate formatted query is applied to a search engine to generate search results, and the search results are returned back from the search engine.
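The "formatting" step described above might look like the following sketch: normalize the converted text and encode it as parameters for a search engine's API. The endpoint URL and parameter names are assumptions for illustration; the patent does not specify a particular API.

```python
# Hedged sketch of formatting a converted query for submission to a
# search engine API. The URL and parameter names are hypothetical.

from urllib.parse import urlencode

def format_query(spoken_text, corpus="media"):
    # Normalize whitespace and case before encoding as URL parameters.
    q = " ".join(spoken_text.strip().lower().split())
    return "https://search.example.com/query?" + urlencode(
        {"q": q, "corpus": corpus})

url = format_query("  Movies with   George Clooney ")
assert url == ("https://search.example.com/query?"
               "q=movies+with+george+clooney&corpus=media")
```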
- the conversion may occur during a first communication by the smartphone with a server system, and execution by the search engine may occur via a subsequent communication from another computer such as a television, after the smartphone passes the input to the other computer.
- a target computer for the search query is identified, and may be any of a number of computers that have been associated with the account with which the computing device that submitted the query is associated. If there are multiple such computers available, various rules may be used to select the most appropriate device to receive the information, such as by identifying the geographic location of the computer from which the query was received and the geographic locations of the other devices, and sending the results to the device that is closest to the originating device. Such association of another device with the results may occur at the time the results are generated or may occur at a later time. For example, the results may be generated and stored, and the target device then determined only after a user logs into the account from that computer.
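One routing rule mentioned above, sending results to the account device geographically closest to the originating device, can be sketched as below. The device names and coordinates are hypothetical; the haversine formula is used here simply as one standard way to compute great-circle distance.

```python
# Illustrative sketch: among devices registered to the account, pick the
# one closest to the originating device. Names/coordinates are made up.

from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def pick_target(origin, devices):
    """devices: mapping of device name -> (lat, lon)."""
    return min(devices, key=lambda name: haversine_km(origin, devices[name]))

phone = (37.42, -122.08)
devices = {"living-room TV": (37.42, -122.09),
           "office desktop": (40.71, -74.01)}
assert pick_target(phone, devices) == "living-room TV"
```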
- the search results are addressed and formatted, and they are sent to the target computer. Such sending of the results has been discussed above and may occur in a variety of manners.
- the target computer, in this example computer 2 , updates its display and status to show the search results and then to potentially permit follow-up interaction by a user of the target computer. Simultaneously in this example, a confirmation is sent to the source computer, in this example computer 1 . That computer updates its display and its status, such as by removing indications of the search query that was previously submitted, and switching into a different mode that is relevant to the submission that the user provided.
- when a user opens a search box on his or her device and then chooses voice input, the user may search for the title of a television program, and data for generating an electronic program guide may be supplied to the user's television.
- the user's smart phone may automatically be converted into a remote control device for navigating the program guide, so that the user may perform follow-up actions on the search results.
- a tablet computer may be the target computer, and a user may interact with search results on the tablet computer, including by selecting content on the tablet computer and sweeping it with a finger input to cause it to be copied to the source computer, such as by being added to a clipboard corresponding to the source computer.
- the process is similar to the process in FIG. 4A , but the results are routed through the first computer before ending up at the second computer.
- a short-range connection is created between the first and second computer.
- both of the computers may be provided with WiFi technology or BLUETOOTH technology, and may perform a handshake, or pairing process, to establish a connection between them.
- the first computer receives a voice query from its user and submits that voice query to a server system. Such submissions have been described above.
- the server system receives, converts, and formats the query. Again, such operations have been described in detail above.
- the server system applies the query to a search engine, which generates results that are passed back to the server system from the search engine.
- the formatted results are sent by the server system to the first computer, which then receives those results at box 432 .
- the submission of the query to the search engine may be by a second computer after the first computer causes the spoken input to be converted to text and passes the text to the second computer.
- the first computer then transmits the results at box 434 over the previously-created short range data connection to the second computer.
- the second computer receives those results and displays the results.
- Such a forwarding of the results from the first computer to the second computer may be automatic and transparent to the user so that the user does not even know the results are passing from the first computer to the second computer, but instead simply sees that the results are appearing on the second computer.
- An information handling application on the first computer may be programmed to identify related devices that are known to belong to the same user as the initiating device, so as to cause information to be displayed on those devices rather than on the initiating, or source, device.
- the display and status of the first computer is updated.
- results generated by a hosted server system for user interaction may be directed to a computer other than the computer on which the user interaction occurred.
- Such re-directed delivery of the results may provide a variety of benefits, such as allowing a user to direct information to a device that is best able to handle, display, or manipulate the results.
- the user may be able to split duties among multiple devices, so that the user can enter queries on one device and then review results on another device (and then pass portions of the results back to the first device for further manipulation).
- FIG. 4C is an activity diagram for pairing of two computer systems in preparation for computer-to-computer communications.
- the messages shown here are sent across a TCP channel established between a mobile device and a television device, where the mobile device takes the role of a client and the television device takes the role of a server.
- the messages, which may be variable in length, may include an unsigned integer (e.g., a 4-byte integer) that indicates the length of the message payload, followed by a serialized message.
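The length-prefixed framing just described can be sketched as follows, assuming a 4-byte (32-bit) big-endian unsigned length prefix. The serialization of the payload itself (e.g., protocol buffers) is out of scope, so raw bytes stand in for a serialized message.

```python
# Sketch of length-prefixed message framing: a 4-byte unsigned length,
# followed by the serialized payload. The 4-byte width is an assumption.

import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a big-endian uint32."""
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> bytes:
    """Read the length prefix and return exactly that many payload bytes."""
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length]

msg = b"PairingRequest"
assert unframe(frame(msg)) == msg
assert frame(msg)[:4] == b"\x00\x00\x00\x0e"  # 14-byte payload
```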
- the messaging sequence for pairing occurs in a short-lived SSL connection.
- a client, such as a smartphone, sends a sequence of messages to a server, such as a television, where each message calls for a specific acknowledgement from the server, and where the logic of the protocol does not branch.
- the protocol in this example begins with communication 440 , where the client sends a PairingRequest message to initiate the pairing process.
- the server then acknowledges the pairing request at communication 442 .
- the client sends to the server its options for handling challenges, or identification of the types of challenges it can handle.
- the server sends its options—the kinds of challenges it can issue and the kinds of response inputs it can receive.
- the client then sends, at communication 448 , configuration details for the challenge, and the server responds with an acknowledgement (communication 450 ).
- the client and server then exchange a secret (communications 452 and 454 ).
- the server may issue an appropriate challenge, such as by displaying a code.
- the client responds to it (e.g., via the user interacting with the client), such as by echoing the code back.
- the client checks the response and, if it is correct, sends a secret message.
- the server checks the response and if it is correct, sends a secret acknowledgement (communication 456 ). Subsequent communications may then be made on the channel that has been established by the process, in manners like those described above and below.
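Because the protocol's logic does not branch, the client side of the pairing exchange can be modeled as a fixed sequence of (send, expect) pairs. The message names below follow FIG. 4C; the transport `send`/`recv` callables are assumed to be provided by the underlying connection, and the loopback stub simply stands in for a cooperative server.

```python
# Sketch of the non-branching pairing sequence of FIG. 4C, driven as a
# fixed list of outgoing messages and their expected acknowledgements.

PAIRING_SEQUENCE = [
    ("PairingRequest", "PairingRequestAck"),
    ("Options",        "Options"),
    ("Configuration",  "ConfigurationAck"),
    ("Secret",         "SecretAck"),
]

def run_pairing(send, recv):
    """Drive the linear pairing exchange; abort on any unexpected reply."""
    for outgoing, expected in PAIRING_SEQUENCE:
        send(outgoing)
        reply = recv()
        if reply != expected:
            raise RuntimeError(f"pairing failed: got {reply!r}, "
                               f"expected {expected!r}")
    return True

# Loopback stub standing in for a cooperative server:
replies = iter(["PairingRequestAck", "Options", "ConfigurationAck",
                "SecretAck"])
assert run_pairing(lambda m: None, lambda: next(replies)) is True
```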
- FIG. 4D is a schematic diagram showing example messages that may be used in a computer-to-computer communication protocol.
- Each message in the protocol includes an outer message 458 , which includes fields for identifying a version of the protocol that the transmitting device is using (box 460 ) in the form of an integer, and a status integer that defines the status of the protocol.
- a status of okay implies that a previous message was accepted, and that the next message in the protocol may be sent. Any other value indicates that the sender has experienced a fault, which may cause the session to terminate. A next session may then be attempted by the devices.
- the outer message 458 encapsulates all messages exchanged on the wire (or in a wireless mode) and contains common header fields.
- the type field 464 contains an integer type number that describes the payload, while the payload field 466 contains the encapsulated message whose type matches the “type” field 464 .
- Each of the remaining fields 468 - 480 may appear in the payload in different communications, where only one of the fields 468 - 480 would appear in any particular communications. As shown, the particular example fields here match respective communications in the activity diagram of FIG. 4C .
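A hedged sketch of the outer-message encapsulation of FIG. 4D follows: every wire message carries a protocol version, a status integer, a type number describing the payload, and the payload itself. JSON is used here only for readability; the actual wire encoding, and the particular status value, are not specified by this sketch.

```python
# Sketch of the outer message of FIG. 4D: version, status, type, payload.
# STATUS_OK is an illustrative value; any other status signals a fault.

import json

STATUS_OK = 200

def wrap(version, msg_type, payload, status=STATUS_OK):
    """Encapsulate a payload with the common header fields."""
    return json.dumps({"protocol_version": version, "status": status,
                       "type": msg_type, "payload": payload})

def unwrap(wire):
    """Decode an outer message; a non-OK status terminates the session."""
    outer = json.loads(wire)
    if outer["status"] != STATUS_OK:
        raise ConnectionError("peer reported fault; terminate session")
    return outer["type"], outer["payload"]

wire = wrap(1, 10, {"kind": "PairingRequest"})
assert unwrap(wire) == (10, {"kind": "PairingRequest"})
```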
- FIG. 4E is a swim lane diagram of a process for providing voice input to a television from a mobile computing device.
- this process shows the approach described above (e.g., with respect to FIG. 1B ) by which a mobile device like a smartphone causes a speech-to-text conversion to be performed, passes the text to another computer, such as a television, and the television then performs a search or other operation using the converted text.
- the process begins at box 482 , where the mobile device receives a spoken input from a user.
- the input may be assumed to be intended as a search query, such as when the user speaks the query while a search box is being displayed on the mobile device.
- the mobile device transmits to the speech-to-text server system an audio file that includes the spoken input.
- the server system processes the audio file (box 485 ), such as by recognizing the type of input from meta data that is provided with the digital audio file.
- the server system then converts the spoken audio to corresponding text and transmits the text back to the mobile device (box 486 ).
- the mobile device receives the text that has been converted from the spoken input, and forwards the text to a television system at box 488 .
- the text may be converted, reformatted, or transformed into different forms by the mobile device before being forwarded.
- the text may be provided in a transmission like that shown in FIG. 4D so as to match an agreed-upon protocol for communications between the mobile device and the television.
- the television then processes the text according to an automated sequence that is triggered by receiving text in a particular type of transmission from the mobile device.
- the television processes the text and then places it into a query that is transmitted to a search engine.
- the search engine receives the query (box 490 ), and generates and transmits search results for the query in a conventional manner (box 491 ).
- the corpus for the query may be limited to media-related items so that the search results represent instances of media for a user to watch or listen to—as contrasted to ordinary web search results, and other such results.
- the television then processes the results (box 492 ) in various manners.
- the television may pass information about the results to the mobile device (box 493 ), and the device may display a portion of that information as a list of the search results (box 494 ).
- the television may also display the search results, in the same form as on the mobile device or in a different form (box 495 ), which may be a “richer” form that is more attuned to a larger display, such as by providing larger images, additional text, or animations (e.g., similar to the video clips that are often looped on the main screens for DVDs).
- a user of the mobile device and simultaneous viewer of the television may then interact with the results in various manners. For example, if the results are media-related search results, the user may choose to view or listen to one of the results. If the results are statements by other users in a chat session, the user may choose to respond—such as by again speaking a statement into the mobile device.
- the mobile device receives such user interaction, and transmits control signals at box 497 to the television.
- the television may then be made to respond to the actions (box 498 ), such as by changing channels, setting the recording for a PVR, starting the streaming of a program, or other familiar mechanisms by which a user may interact with a television or other form of computer.
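The swim lanes of FIG. 4E can be summarized as a single pipeline: speech goes from the phone to a speech-to-text server, the resulting text goes to the television, the television queries a media search engine, and both screens are then updated. Every service in this sketch is a stand-in stub with made-up return values; the box numbers in the comments map back to the figure.

```python
# Stub-based sketch of the FIG. 4E flow. The stub return values are
# illustrative only; real implementations would call remote services.

def speech_to_text(audio):            # server-system lane, boxes 485-486
    return "movies with george clooney"

def media_search(query):              # search-engine lane, boxes 490-491
    return [{"title": "Ocean's Eleven"}, {"title": "Up in the Air"}]

def voice_search(audio):
    text = speech_to_text(audio)      # phone forwards text (boxes 487-488)
    results = media_search(text)      # TV submits the query (box 489)
    tv_display = results              # richer rendering on the TV (box 495)
    phone_list = [r["title"] for r in results]  # list view (boxes 493-494)
    return tv_display, phone_list

tv, phone = voice_search(b"...audio bytes...")
assert phone == ["Ocean's Eleven", "Up in the Air"]
assert tv[0]["title"] == "Ocean's Eleven"
```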
- FIG. 5 is a block diagram of computing devices 500 , 550 that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers.
- Computing device 500 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
- Computing device 550 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices.
- Additionally, computing device 500 or 550 can include Universal Serial Bus (USB) flash drives.
- the USB flash drives may store operating systems and other applications.
- the USB flash drives can include input/output components, such as a wireless transmitter or USB connector that may be inserted into a USB port of another computing device.
- the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
- Computing device 500 includes a processor 502 , memory 504 , a storage device 506 , a high-speed interface 508 connecting to memory 504 and high-speed expansion ports 510 , and a low speed interface 512 connecting to low speed bus 514 and storage device 506 .
- Each of the components 502 , 504 , 506 , 508 , 510 , and 512 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
- the processor 502 can process instructions for execution within the computing device 500 , including instructions stored in the memory 504 or on the storage device 506 to display graphical information for a GUI on an external input/output device, such as display 516 coupled to high speed interface 508 .
- multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
- multiple computing devices 500 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
- the memory 504 stores information within the computing device 500 .
- the memory 504 is a volatile memory unit or units.
- the memory 504 is a non-volatile memory unit or units.
- the memory 504 may also be another form of computer-readable medium, such as a magnetic or optical disk.
- the storage device 506 is capable of providing mass storage for the computing device 500 .
- the storage device 506 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
- a computer program product can be tangibly embodied in an information carrier.
- the computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above.
- the information carrier is a computer- or machine-readable medium, such as the memory 504 , the storage device 506 , or memory on processor 502 .
- the high speed controller 508 manages bandwidth-intensive operations for the computing device 500 , while the low speed controller 512 manages lower bandwidth-intensive operations.
- the high-speed controller 508 is coupled to memory 504 , display 516 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 510 , which may accept various expansion cards (not shown).
- low-speed controller 512 is coupled to storage device 506 and low-speed expansion port 514 .
- the low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
- the computing device 500 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 520 , or multiple times in a group of such servers. It may also be implemented as part of a rack server system 524 . In addition, it may be implemented in a personal computer such as a laptop computer 522 . Alternatively, components from computing device 500 may be combined with other components in a mobile device (not shown), such as device 550 . Each of such devices may contain one or more of computing device 500 , 550 , and an entire system may be made up of multiple computing devices 500 , 550 communicating with each other.
- Computing device 550 includes a processor 552 , memory 564 , an input/output device such as a display 554 , a communication interface 566 , and a transceiver 568 , among other components.
- the device 550 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage.
- Each of the components 550 , 552 , 564 , 554 , 566 , and 568 are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
- the processor 552 can execute instructions within the computing device 550 , including instructions stored in the memory 564 .
- the processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. Additionally, the processor may be implemented using any of a number of architectures.
- the processor 552 may be a CISC (Complex Instruction Set Computer) processor, a RISC (Reduced Instruction Set Computer) processor, or a MISC (Minimal Instruction Set Computer) processor.
- the processor may provide, for example, for coordination of the other components of the device 550 , such as control of user interfaces, applications run by device 550 , and wireless communication by device 550 .
- Processor 552 may communicate with a user through control interface 558 and display interface 556 coupled to a display 554 .
- the display 554 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology.
- the display interface 556 may comprise appropriate circuitry for driving the display 554 to present graphical and other information to a user.
- the control interface 558 may receive commands from a user and convert them for submission to the processor 552 .
- an external interface 562 may be provided in communication with processor 552 , so as to enable near area communication of device 550 with other devices. External interface 562 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
- the memory 564 stores information within the computing device 550 .
- the memory 564 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
- Expansion memory 574 may also be provided and connected to device 550 through expansion interface 572 , which may include, for example, a SIMM (Single In Line Memory Module) card interface.
- expansion memory 574 may provide extra storage space for device 550 , or may also store applications or other information for device 550 .
- expansion memory 574 may include instructions to carry out or supplement the processes described above, and may include secure information also.
- expansion memory 574 may be provided as a security module for device 550 , and may be programmed with instructions that permit secure use of device 550 .
- secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
- the memory may include, for example, flash memory and/or NVRAM memory, as discussed below.
- a computer program product is tangibly embodied in an information carrier.
- the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
- the information carrier is a computer- or machine-readable medium, such as the memory 564 , expansion memory 574 , or memory on processor 552 that may be received, for example, over transceiver 568 or external interface 562 .
- Device 550 may communicate wirelessly through communication interface 566 , which may include digital signal processing circuitry where necessary. Communication interface 566 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 568 . In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 570 may provide additional navigation- and location-related wireless data to device 550 , which may be used as appropriate by applications running on device 550 .
- Device 550 may also communicate audibly using audio codec 560 , which may receive spoken information from a user and convert it to usable digital information. Audio codec 560 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 550 . Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 550 .
- the computing device 550 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 580 . It may also be implemented as part of a smartphone 582 , personal digital assistant, or other similar mobile device.
- implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
- These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
- the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
- the systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components.
- the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), peer-to-peer networks (having ad-hoc or static members), grid computing infrastructures, and the Internet.
- the computing system can include clients and servers.
- a client and server are generally remote from each other and typically interact through a communication network.
- the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Abstract
A computer-implemented method for information sharing between a portable computing device and a television system includes receiving a spoken input from a user of the portable computing device, by the portable computing device, submitting a digital recording of the spoken query from the portable computing device to a remote server system, receiving from the remote server system a textual representation of the spoken query, and automatically transmitting the textual representation from the portable computing device to the television system. The television system is programmed to submit the textual representation as a search query and to present to the user media-related results that are determined to be responsive to the spoken query.
Description
- This application is a continuation of U.S. application Ser. No. 13/111,853, filed May 19, 2011, which claims priority to U.S. Provisional Application Ser. No. 61/346,870, filed May 20, 2010, entitled “Computer-to-Computer Communication,” the entire contents of which are hereby incorporated by reference.
- This document relates to submitting data, such as a voice-based search query, on a first computer, such as a smartphone, and having results from the submitted data, such as search results, appear automatically on a second computer, such as a television monitor or a desktop computer.
- People interact more and more with computers, and they also interact more and more with different kinds of computers. While desktop and laptop computers may have been the most prevalent computers in people's lives in the past, most people are more likely now to interact with smart phones, DVRs, televisions, and other consumer devices that include computers in them.
- Certain computers are well-suited for entering and editing information, such as desktop and laptop computers. Other devices are better suited to delivering information but not receiving it, such as televisions that do not include keyboards, or that have keyboards of limited size. Also, some computers are best used in certain situations, and other computers in other situations. For example, a smartphone is typically best used on-the-go and at close quarters. In contrast, a television is better used while a user is stationary, and frequently from relatively long-distances.
- This document discusses systems and techniques by which a person may enter data using one computer, and may use associated data by employing another computer. The associated data may be generated at the other computer based on the user's submission at the first computer. The linking of the two computers may occur by the first computer submitting a request to a server system, receiving a response, and sending information directly to the second computer using the response. The second computer may then send that same or resulting information to a second server system (which may be part of, or operate in cooperation with, the first server system) and may use a response from the second server system to provide information to a user of the two computers.
- In one such example, a viewer of a television program may be holding a smartphone and be using it as a remote control via an appropriate application or app installed on the smartphone. The smartphone may be programmed with APIs that accept voice input, that package audio of the voice input and send it to a speech-to-text server system via the internet, and that receive in response the text of what was spoken by the user. The smartphone app may then forward the text—which might be a media-related search, such as “movies with Danny DeVito”—to the television (which may be a television itself, a set-top box, a DVD player, or similar adjunct appliance that can be used with a television). The television may not have similar speech-to-text functionality, so that the use of the smartphone may effectively provide the television with that functionality. The television may then, according to an application running there, recognize that text coming from the smartphone app is to be treated in a particular manner. For example, the television may submit the text to a remote search engine that searches on media-related terms and returns media-related results. For example, the search engine may search programming information for an upcoming period and also media-related databases that reflect movies, songs, and programs, the artists that appeared with each, and summaries of such items, much like many well-known movie and TV-related web sites provide to users who visit those sites.
- The display may be delayed, such as if the second computer is not currently logged onto the system, and so that the results may be delivered when the user subsequently tunes in or logs on. The results for such a delayed delivery may be generated at the time the request is submitted (and may be stored) or at the time the user later gets them at the second computer (so that the request is stored and is then executed when delivery of the results is to occur). For example, the user may speak the query “sports tonight” into a smartphone while driving in his car, but not be able to interact with it at the present time (because he is busy and/or because the results are not the type of thing that can be interacted with effectively on a smartphone). The results in such a situation could, therefore, be sent automatically for display on the user's television, either on the backside through a server system or by being held on the smartphone until the smartphone identifies itself as being in the vicinity of the user's home WiFi router, and then by the smartphone checking to see if the television is on to be communicated with, and communicating the text of the query when such conditions occur. The user may then immediately be presented with such results on his television when he walks into the house and may quickly select one of them. The provision of the query to the television may occur when the user is within a set distance of his home also (e.g., by determining with GPS functionality on the smartphone that he is within ¼ mile of the home), and the television may be turned on automatically as he approaches the home, with the television tuned to a channel that is determined to be most relevant to the query (e.g., to a sport that is on the highest-rated channel and a type of sport that the user has identified on a profile as being his favorite type of sport), with the user's ability to quickly change to another sporting event that is currently being displayed.
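The deferred-delivery behavior described above can be sketched in simplified form; the boolean condition flags stand in for real WiFi-detection and device-presence checks, and all names are illustrative assumptions:

```python
# Illustrative sketch: a spoken query is held on the phone until the
# phone sees the home WiFi network and the television reports itself
# available, at which point the query is delivered.

def deliver_when_home(pending_query, on_home_wifi, tv_is_on, deliver):
    """Deliver a held query only once both home conditions are met."""
    if pending_query and on_home_wifi and tv_is_on:
        deliver(pending_query)
        return None          # query delivered; nothing left pending
    return pending_query     # still waiting

delivered = []
q = "sports tonight"
# In the car: neither condition holds, so the query stays pending.
q = deliver_when_home(q, on_home_wifi=False, tv_is_on=False,
                      deliver=delivered.append)
# At home: both conditions hold, so the query is handed off.
q = deliver_when_home(q, on_home_wifi=True, tv_is_on=True,
                      deliver=delivered.append)
```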
- The data flow to and from the smartphone and television may occur in a variety of manners. For example, as discussed above, the smartphone may initially communicate with a remote server system, receive results back from the remote server system, and forward the results or generate information that is derivative of the results, and send it directly to the television. The television may then send data to another computer server system, and receive results back from it, such as media-related results that can be displayed automatically in a list of results and as part of a program guide grid in a familiar manner. The communications with the servers may be over the internet while the communications between the smartphone and the television may be over only a local area network such as a WiFi or similar network. In another example, the smartphone may send a file of the spoken query to a remote server system and may receive the ultimate results, which it may then pass to the television for display, without the television having to communicate with a server system. For example, the smartphone may have a speech-to-text app that sends the speech up to a server, receives text back from the server and sends it to a television remote control app running on the smartphone, and that app may then submit the text to a media-specific search engine, which may then return data for making a list and grid of programs, and the smartphone may forward that data to the television where it may be displayed.
- In certain examples, the smartphone may display results that are good for small-screen display, and the television may display results that are useful for large-screen display. For example, the smartphone may display a vertical list of programs that are responsive to a query (e.g., all soon-upcoming episodes of Seinfeld if the user spoke “When is Seinfeld?”), while the television may show the same results but in the context of a two-dimensional program guide grid. The user may step through the results in the list on the smartphone, and the grid may automatically jump, in a synchronized manner, to the corresponding episode on the grid on the television. If the user selects one of the episodes on the list, the television may immediately tune to the episode if it is currently being shown, or may jump to it later when it starts and/or set it for recording on a personal video recorder (PVR).
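The synchronized list-and-grid browsing described above can be sketched as follows; the episode records and the mapping from a list selection to a grid position are illustrative assumptions:

```python
# Illustrative sketch: stepping through the result list on the phone
# repositions the program-guide grid on the television so it surrounds
# the same episode.

episodes = [
    {"title": "Seinfeld", "channel": 4, "start": "19:00"},
    {"title": "Seinfeld", "channel": 9, "start": "21:30"},
]

def select_episode(index, episodes):
    """Return the grid position (channel row, start time) for a selection."""
    ep = episodes[index]
    return (ep["channel"], ep["start"])

# The user taps the second item on the phone's list; the television's
# grid would jump to channel 9 at 21:30.
grid_focus = select_episode(1, episodes)
```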
- The techniques discussed here may, in certain implementations, provide one or more advantages. For example, a user of multiple computing devices—such as a smartphone and a television—may be allowed to submit information using a computing device that is best-suited to such submission, such as a smartphone that performs speech-to-text conversion (perhaps via a server system to which it sends audio files). The user may then review results from the information, on a different computing device that is better-suited for such review, such as a television. Such techniques may allow the user to easily extend the functionality of computers that they already own. For example, software to enable such data submission and routing may be easily added to a smartphone, or a user may simply use a browser on the smartphone to log into an account on a hosted service that may then pass the information to a browser on another device, or the provider of the account may recognize that certain search results should be provided to a target computer that has previously been registered with, or logged into, the account. Also, the user may employ an app for speech-to-text conversion on the smartphone to enable voice inputs to a television that does not itself support speech-to-text conversion. In these various ways, the two or more computing devices may interact directly or through server systems so that each of the computing devices can provide its best features, and the two devices together can provide functionality that is even better than the separate additive functionalities of the devices.
- In one implementation, a computer-implemented method for information sharing between a portable computing device and a television system is disclosed. The method comprises receiving a spoken input from a user of the portable computing device, by the portable computing device, submitting a digital recording of the spoken input from the portable computing device to a remote server system, receiving from the remote server system a textual representation of the spoken input, and automatically transmitting the textual representation from the portable computing device to the television system. The television system can be programmed to submit the textual representation as a search query and to present to the user media-related results that are determined to be responsive to the spoken input. The method can also comprise, before automatically transmitting the textual representation, pairing the mobile computing device and television system over a local area network using a pairing protocol by which the mobile computing device and television system communicate with each other in a predetermined manner. The method can also comprise using the textual representation to perform a local search of files stored on recordable media located in the television system. In addition, the method can include automatically submitting the textual representation from the television system to a remote search engine, receiving in return search results that are responsive to a query in the textual representation, and presenting by the television system the search results.
- In some aspects, the search results are presented as a group of music, movie, or television items that are determined to be responsive to the query, and are presented on the television system so that the user may select one or more of the items for viewing or listening. Also, the method can include transmitting all or a portion of the search results from the television system to the mobile computing device. The method can further comprise providing to the search engine a request type for the search request that defines a type of information to be provided in the search results, and receiving search results that the search engine has directed to the request type. As another aspect, the method can also include determining on the mobile computing device whether the spoken input is directed to the television system, and automatically transmitting the textual representation from the portable computing device to the television system only if the spoken input is determined to be directed to the television system.
- In other aspects, the method can include determining that the television system is not currently available to display the results, and storing the results or the textual representation until the television system is determined to be available to display the results. The method can further comprise receiving from the television system an indication that a user has selected a portion of the search results, and automatically causing a display on the portable computing device to change in response to receiving the indication, and receiving a subsequent user input on the portable computing device, and causing the presentation of the search results to change in response to receiving the subsequent user input.
- In another implementation, a computer-implemented method for information sharing between computers is disclosed. The method comprises receiving a spoken input at a first computer from a user of the first computer; providing the audio of the spoken request to a first remote server system; receiving a response from the first remote server system, the response including text of the spoken request; and automatically transmitting data generated from the response that includes the spoken request, from the first computer to a second computer that is nearby the first computer, wherein the second computer is programmed to automatically perform an action that causes a result generated by applying an operation to the transmitted data, to be presented to the user of the first computer. The method can also comprise automatically submitting the text of the spoken request from the second computer to a remote search engine, receiving in return search results that are responsive to a query in the text of the spoken request, and presenting by the second computer the search results. The search results can be presented on the second computer as a group of music, movie, or television items that are determined to be responsive to the query, and are presented on the second computer so that the user may select one or more of the items for viewing or listening. Also, the method can include transmitting all or a portion of the search results from the second computer to the first computer.
- In yet another implementation, a computer-implemented system for information sharing is described. The system comprises a mobile computing device, and software stored on the mobile computing device. The software is operable on one or more processors of the mobile computing device to transmit spoken commands made by a user of the mobile computing device, to a remote server system; receive in response, from the remote server system, text of the spoken commands; and automatically provide the text received from the remote server system to a second computer operating in the close vicinity of the mobile computing device. The system may also include the second computer, wherein the second computer is programmed to provide the text received from the remote server system to a second remote server system as a search query, and to use search results received in response from the second remote server system to present the search results on a display of the second computer. The second computer can comprise a television, and the first computer and the second computer can be programmed to automatically pair over a local data connection when each is within communication of the local data connection. Also, the second computer can be programmed to submit the text to a search engine that performs searches directed specifically to media-related content.
- The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.
-
FIG. 1A shows an example by which data may be submitted at a first computer and reviewed and handled at a second computer. -
FIG. 1B is a schematic diagram showing communication between user computers and server systems. -
FIG. 2A is a schematic diagram of a system for sharing information between computers. -
FIG. 2B is a block diagram of a mobile computing device and system for sharing information between computers. -
FIG. 3A is a flow chart that shows a process for receiving a request from a first computer and supplying information that is responsive to the request to a second computer. -
FIG. 3B is a flow chart that shows a process for processing speech input to a television remote control to affect a display on an associated television. -
FIGS. 4A-4B are swim lane diagrams for coordinating information submission and information provision between various computers and a central server system. -
FIG. 4C is an activity diagram for pairing of two computer systems in preparation for computer-to-computer communications. -
FIG. 4D is a schematic diagram showing example messages that may be used in a computer-to-computer communication protocol. -
FIG. 4E is a swim lane diagram of a process for providing voice input to a television from a mobile computing device. -
FIG. 5 is a block diagram of computing devices that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers. - This document describes systems and related techniques for passing information from a first computer to a server system, creating information that is responsive to the passed information using the server system, and then automatically returning responsive information from the server system to the first computer and on to a second computer that is different than the first computer. The second computer may then use the information it receives to send a request to a second server system, may obtain a response, and may provide output to a user of the two computers.
- In one example, a search query is spoken by a user into a first computer, such as a smartphone, and is submitted to a search engine that is remote from the smartphone, such as over the internet. A textual representation of the spoken query is then returned to the first computer, which in turn automatically forwards the textual representation to a second computer (perhaps after reformatting or otherwise modifying the textual representation). Upon receiving the textual representation, the second computer automatically processes it, typically by submitting it to a local search engine on the second computer (e.g., to determine whether media programming information on the second computer, such as recorded television programs and electronic program guide (EPG) information, matches the query) and to a server-based public search engine which may be directed particularly to media-based results (i.e., those relating to various types of electronic entertainment, education, and similar content). Those results may then be displayed by the second computer (e.g., on a display connected to the second computer, such as a television that houses the second computer or that is connected to a set-top box that houses the second computer).
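A minimal sketch of the combined local-and-server search described above, assuming stubbed stand-ins for the local recordings and the media-directed search engine (all data and names are illustrative):

```python
# Illustrative sketch: the second computer checks its own recorded
# programs and EPG data for matches, queries a (stubbed) media search
# engine, and merges the two result sets for display.

def combined_search(query, local_items, remote_search):
    """Merge local matches with media-directed server results."""
    q = query.lower()
    local = [item for item in local_items if q in item.lower()]
    return local + remote_search(query)

# Stub data standing in for local storage and the remote engine.
recordings = ["Seinfeld S04E11 (recorded)", "Evening News"]
fake_engine = lambda q: ["Seinfeld S05E01 - tonight, ch. 4"]
results = combined_search("seinfeld", recordings, fake_engine)
```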
- The user may then interact with the results using the first computer, and such interaction can be reflected on a display of the first and/or second computer. For example, a user can browse a basic representation of search results on a smartphone, and a more detailed representation of the results on a television. User input may occur via a touchscreen on the smartphone, physical buttons on the smartphone, or sensed motion of the smartphone.
-
FIG. 1A shows an example by which data may be submitted by a first computer and reviewed and handled at a second computer. In FIG. 1A, a system 100 is shown, in which a user 102 of a smartphone 104 is shown sitting on a couch watching a television 106. For example, the user 102 may be sitting down for an evening of watching primetime television but may not know immediately what he wants to watch. Alternatively, the user may be watching a show that he does not like and may be interested in finding a better show. The user may also be interested in something other than television. For example, the user may be watching the news, may hear reference to a certain geographic area, and may want to perform some quick research to follow up on what he has heard. Other similar interests of the user may be addressed by the system 100. - In this example, the user is shown speaking into the
smartphone 104, and asking the query “when is Seinfeld on?” This query, of course, indicates that the user would like to find out when the next episode or episodes of the television situation comedy Seinfeld is being shown by his television provider. The smartphone 104 may be equipped with voice search capabilities, by which certain requests spoken into the smartphone 104 are provided as sound files to a remote server system that may convert the sound files to text and then create search results that are responsive to the request. In certain implementations, the smartphone 104 may execute an application or app that packages the sound, sends it to a server system for conversion to text, and then receives back the converted text and forwards it to one or multiple computers, such as television 106, with which the smartphone 104 has paired up for communication (e.g., over a LAN such as a WiFi network). - The
television 106 may be a modern television that is provided with a certain level of computing capabilities, and may include WiFi or other data networking technologies built into the television 106, or provided as an adjunct to the television 106, such as in a cable or satellite box. References here to a television or television system are intended to cover both integrated and separate approaches. The smartphone 104 and television 106 may have been previously registered with a speech-to-text service and a search server system, respectively, and correlated to an account for user 102 (e.g., by the user logging into an account for the user, with the devices). In this manner, the relevant server systems and services may readily determine that the two devices are related to or registered to the user 102, and may perform actions like those discussed here using such knowledge. - When the
user 102 speaks a voice command and a sound file is sent to a server system, search results may be sent back to the system 100. In certain implementations, and in a traditional manner, the search results may be displayed on the smartphone 104. However, the smartphone 104 may not be large enough to display a complete electronic program guide grid in the form in which the “Seinfeld” search results may be provided by the system. Also, the smartphone 104 may not be equipped to take appropriate actions using the search results, such as switching automatically to a channel on which an episode of Seinfeld is being played, or programming a personal video recorder to record the current or future episode of Seinfeld that appears in the search results. As a result, in this example, the search results have been provided instead (or in addition) to the television 106, and the user may then further interact with the system 100 to carry out his own particular wishes. As one example, the user may interact further with the smartphone 104, such as by using a remote control application for the smartphone 104, so as to cause channels on the television 106 (including with a cable or set-top box) to be changed to the appropriate channel automatically. - The results may get to the
television 106 by various mechanisms. For example, a central server system may identify all devices that are currently logged in or registered for the user, and may then determine which devices may be able to display the relevant results. Where multiple active devices are capable of handling the results, the system may determine which device is most likely to be the target of the user's input. Such a determination may be made, for example, by identifying the active device that is geographically closest to the device that submitted the query, or the device that best matches a type of the results. For example, if the results are determined to be media-related (e.g., they are links to TV episodes and streaming movies), then a television can be preferred over other devices for receiving the results. In another example, the smartphone 104 may submit a sound file to one server system and receive equivalent text in return, and may then forward the text to the television 106. The television 106 may in turn be programmed to send the text to a search engine service that is programmed especially to provide media-related results (e.g., music, movies, and television programs), and to then display the search results, such as by showing a program guide grid around upcoming episodes of Seinfeld. The television 106 may alternatively, or additionally, search its own local storage in a similar manner. -
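The device-selection heuristic described above can be illustrated with a brief sketch; the device records, field names, and the preference for televisions are illustrative assumptions only:

```python
# Illustrative sketch: among the user's active devices, prefer a
# television for media-related results; otherwise fall back to the
# device that submitted the query.

def pick_target(devices, result_type, origin):
    """Choose the device most likely intended as the results target."""
    if result_type == "media":
        tvs = [d for d in devices if d["kind"] == "television" and d["active"]]
        if tvs:
            return tvs[0]
    return origin

phone = {"kind": "smartphone", "active": True, "id": "phone-1"}
tv = {"kind": "television", "active": True, "id": "tv-1"}
target = pick_target([phone, tv], "media", origin=phone)
```

A fuller version might also weigh geographic proximity to the submitting device, as the text above suggests.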
FIG. 1B is a schematic diagram showing communication between user computers and server systems. The system 110 shown in the figure may correspond to the scenario just discussed with respect to FIG. 1A. In particular, a smartphone 114 may communicate over the internet with a speech-to-text server system 112, and in turn with a television 116 that is proximate to (essentially in the same room as) the smartphone 114. The television then in turn communicates with a search engine 118 server system. Such communications may be triggered by a user speaking into a microphone on the smartphone 114, and then all subsequent actions are taken automatically by the system 110, without further intervention by the user (until the user may interact with results displayed, in the end, on the television 116). - The
smartphone 114 may take a familiar form, and may be generally a touchscreen device onto which a variety of applications or apps may be loaded and executed. One such application may be included with the operating system or loaded later, and may provide an application programming interface (API) that receives spoken input, converts it to text, and then provides the text to whatever application is currently the focus of the device. - While the
smartphone 114 may perform its own speech-to-text conversion, such conversion may also occur with the assistance of the speech-to-text server system 112. As shown by flow arrow A, an audio file of various forms may be passed up from the smartphone 114 to the server system 112, where it may be analyzed and converted into corresponding text, which may be encoded, encrypted, or transmitted as plaintext back to the smartphone 114, as indicated by flow Arrow B. Upon performing its own conversion or receiving the text from server system 112, the smartphone 114 can automatically forward the text to the television 116. In this manner, the smartphone 114 may serve as a voice input front end for the television 116, which may not have a microphone or a mechanism by which to position a microphone directly in front of the user (e.g., for cost purposes, the remote control shipped with the television 116 may only communicate via traditional RF or IR mechanisms). - The passing of information from the
smartphone 114 to the television 116 is shown in the figure by Arrow C. The television 116 may then process the information, either internally to the television or externally by passing the information, or a derivation of the information, to the search engine 118 server system, or another form of server system. For example, the textual version of the spoken input from the user may be passed up to the search engine 118 along with other appropriate information, such as a flag that indicates the television 116 is seeking media-related results, as opposed to general web results, images, shopping results, or other such common sub-categories of results. The television may also perform other transformations or formatting changes to the data or information. - Arrow D shows the communication and request passing up from the
television 116 to the search engine 118. The search engine 118 may be a public search engine that can be accessed manually by users, and has also published an API by which automated systems may submit queries and receive search results in a predetermined and defined format. Arrow E shows the results being passed down from the search engine 118 to the television 116. As shown on the screen of the television 116, the results may be particular episodes of a program like Seinfeld, and the display may show the various episodes in a numbered list of search results on the left side, and a grid surrounding whatever episode is currently selected in the list. - Although not shown by an arrow, certain of the search result information may be communicated from the
television 116 to the smartphone 114. For example, the information for generating the list of results may be transmitted, and as shown in the figure, the smartphone 114 is displaying the same search result list, but is not also displaying the program guide grid because it is a small device for which there is no room for such an extensive display of content. The user may then select particular episodes by tapping on the corresponding episode on the display, which may cause the program guide grid to scroll so that it again surrounds the newly selected episode. Other controls may be shown on the smartphone, such as buttons to let a user choose to tune to a selected program immediately, or to set a PVR to record the episode in the future. In addition, other relevant remote control functionality may be provided with an application executing on the smartphone 114, such as buttons for changing channels, colors, volume, and the like. - In this manner then, the
system 110 may, in certain implementations, provide for synergies by using speech input and conversion capabilities that are available on a user's telephone, tablet, or other portable computing device, in order to control interactivity with a television. In other examples, the voice input by a user may be directed to a chat session being displayed on the television 116. A user may speak into the smartphone 114, have the speech transcribed, and then have the transcribed text posted to the chat session, either fully automatically or after the user confirms that the text returned from the server system 112 is accurate (or if the user does not respond during a pre-set delay period). The conversion may also be accompanied by a language translation, e.g., from a language of the user into a language of the other person on the chat session. For example, an audio file may be sent to the server system 112 in English (Arrow A in the figure), and text may be returned in French (Arrow B) and then supplied to a chat application. The French user may employ similar functionality, or text may arrive at the television 116 in French and can be passed from the television 116 to the smartphone and then to a server system for translation, or passed directly from the television 116 to the server system for translation. Other such applications may likewise employ speech-to-text conversion and language translation using one device (e.g., smartphone 114) that then causes the converted and/or translated text or audio to be passed to another computing device (e.g., television 116). -
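As described earlier, the television may pass only the compact list portion of the search results back to the smartphone's smaller display, keeping the full grid data for itself. A minimal sketch, with illustrative field names:

```python
# Illustrative sketch: the television keeps the full program data but
# extracts only the fields needed for the phone's vertical list view.

def list_view(results):
    """Extract just the fields needed for the phone's compact list."""
    return [{"title": r["title"], "start": r["start"]} for r in results]

full_results = [
    {"title": "Seinfeld", "start": "19:00", "channel": 4, "summary": "..."},
]
phone_view = list_view(full_results)
```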
FIG. 2A is a schematic diagram of a system 200 for sharing information between computers. In general, the system 200 is established to allow a user who owns multiple computer devices to share certain data between devices, including by passing one form of data to the central server system, and having the central server system obtain other data in response to the submissions and provide that other data to a separate target computer that is associated with the user, which association may be identified by determining that the two devices are logged into the same user account. The selection of which device to send the data to may be made automatically, such as using data stored in the user's device or by a determination made by the central server system, so that the user need not identify the target of the information when the user asks for the information to be sent. - As shown in the figure, two consumer devices in the form of a
smartphone 208 and a television 206 are shown and may be owned by a single individual or family. In this example, we will assume that both devices have been logged into a central server system 204 and that communication sessions have been established for both such devices, such as in manners discussed with respect to FIG. 4C. Thus, at the time shown here, submissions could be made separately to the central server system 204 by either of the devices. - In this particular example, an arrow and the label “voice” is shown entering the
smartphone 208 to indicate that a user is speaking voice commands into the smartphone 208. The smartphone 208 may be programmed to recognize certain words that are stated into its microphone, as being words to trigger a search query that involves passing sound data up to the central server system 204 through the network 202, such as the internet. Alternatively, a user may press an on-screen icon on the smartphone 208 in order to change it from a mode for typed input into a mode for spoken input. The spoken input may be converted and/or translated according to an operating system-based service provided on the smartphone 208 and made available to subscribing applications that execute on the smartphone 208. - In this example, the voice entry is a search query, and the
central server system 204 is provided with a number of components to assist in providing search results in response to the search query. For clarity, a certain number of components are shown here, though in actual implementation, a central server system may involve a large number of servers and a large number of other components and services beyond those shown here. - As one example, a
voice interface 210 may be provided, and a web server that is part of a central server system 204 may route to the voice interface 210 data received in the form of voice search queries. The voice interface 210 may initially convert the provided voice input to a textual form and may also perform formatting and conversion on such text. In addition, the interface 210 or another component may perform language translation on the submitted input in appropriate circumstances, and make a determination of the target language based on information in the form of metadata that has been passed with the digitized spoken audio file. In one example, the search system may be implemented so that a user wanting to submit a voice query is required to use a trigger word before the query, either to start the device listening for the query, or to define a context for the query (e.g., “television”). The voice interface 210 may be programmed to extract the trigger word from the text after the speech-to-text conversion occurs, because the trigger word is not truly part of the user's intended query. - A
search engine 214 may receive processed text from the voice interface 210, and may further process the text, such as by adding search terms for synonyms or other information in ways that are readily familiar. The search engine 214 may access information in a search index 218 to provide one or more search results in response to any submitted search query. In certain instances, the context of the search may also be taken into account to limit the types of search results that are provided to the user. For example, voice search may generate particular types of search results more often than other search results, such as local search results that indicate information in a geographical area around the user. Also, certain search terms such as the titles of television shows may indicate to the search engine 214 that the user is presenting a certain type of search, i.e., a media-related search. As a result, the search engine 214 may format the search results in a particular form, such as in the form of an electronic program guide grid for television shows. Such results may also be provided with additional information or metadata, such that a user could select a cell in a program guide so as to provide a message to a personal video recorder to set a recording of that episode. - The
search engine 214 may also obtain a query from an external source, such as the television 206. For example, the voice interface 210 may convert spoken input into text and return the text to the smartphone 208, which may forward the text to television 206, which may in turn submit the text as a query to the search engine 214. The responses to queries made by the search engine 214 may be based on information that is stored in a search index 218, which may contain a variety of types of information, but may have media-related information set out from the other information so that media-directed search results may be returned by the system. - A
results router 212 is responsible for receiving search results from the search engine 214 and providing them to an appropriate target device. In normal operation of a search engine, the target device is the device from which the search query was submitted. In this example, though, the target device may be a different device, and the results may be provided to it either directly from the central server system 204, or may be provided to the smartphone 208 and then forwarded to the target device, which in this situation may be the television 206. Alternatively, in the example where the text is submitted to the search engine by the television 206, the television may receive the results from the search engine 214, and then may pass all or some of the results to the smartphone 208. - The
results router 212 may refer to data in a user device information database 216 to identify the addresses of devices that are associated with an account for the user who is logged in with the particular devices. In this manner, the search system 204 may determine how to properly route results to each of the devices—the system 204 may simply respond to requests in a normal manner, and need not correlate two different devices as being related to each other in any manner. Thus, for example, if the user provides a television or media-related request by voice, and the system 204 determines from GPS data provided with the request that the user is at home, it may determine to send the results directly to television 206, rather than back to smartphone 208. Also, the system 204 may generate results in a manner that is formatted to best work with television 206, but deliver those results to device 208 in a manner so the device 208 automatically forwards the results for display on television 206. In addition, where a user has multiple televisions, the system 204 may determine which of those televisions is currently logged on and operating, and may determine to send the search results to that particular television. -
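The routing decision made by the results router 212 can be pictured as a filter over the user device information database 216: keep devices that are logged in and of a suitable type, then prefer the one nearest the GPS position supplied with the request. The sketch below is illustrative only; the field names, distance threshold, and helper function are assumptions, not details from the text.

```python
import math

def distance_km(a, b):
    # Equirectangular approximation; adequate for same-household checks.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    return 6371.0 * math.hypot(x, lat2 - lat1)

def pick_target(devices, query_location, max_km=0.5):
    """Pick a logged-in television near the querying device, if any.

    devices: dicts with hypothetical 'type', 'logged_in', 'location' keys.
    """
    candidates = [d for d in devices
                  if d["logged_in"] and d["type"] == "television"
                  and distance_km(d["location"], query_location) <= max_km]
    if not candidates:
        return None  # caller falls back to replying to the source device
    return min(candidates,
               key=lambda d: distance_km(d["location"], query_location))
```

When no candidate survives the filter, the router would simply return results to the device that submitted the query, as in normal search engine operation.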
FIG. 2B is a block diagram of a mobile device 222 and system 220 for sharing information between computers. In general, the system 220 is similar to the system 200 in FIG. 2A, but in this instance, additional details about the mobile device 222, which acts as a client here, are provided. - In the example shown, the
mobile device 222 is a cellular telephone. In other implementations, the mobile device 222 can be a personal digital assistant, a laptop computer, a netbook, a camera, a wrist watch, or another type of mobile electronic device. The mobile device 222 includes a camera (not shown) with camera controller 232, and a display screen 223 for displaying text, images, and graphics to a user, including images captured by the camera. In some implementations, the display screen 223 is a touch screen for receiving user input. For example, a user contacts the display screen 223 using a finger or stylus in order to select items displayed by the display screen 223, to enter text, or to control functions of the mobile device 222. The mobile device 222 further includes one or more input keys such as a track ball 224 for receiving user input. For example, the track ball 224 can be used to make selections, return to a home screen, or control functions of the mobile device 222. As another example, the one or more input keys can include a click wheel for scrolling through menus and text. - The
mobile device 222 includes a number of modules for controlling functions of the mobile device 222, including modules to control the receipt of information and for triggering the providing of corresponding information to other devices (which other devices may, in turn, also include the structural components described here for device 222). The modules can be implemented using hardware, software, or a combination of the two. - For example, the
mobile device 222 includes a display controller 226, which may be responsible for rendering content for presentation on the display screen 223. The display controller 226 may receive graphic-related content from a number of sources and may determine how the content is to be provided to a user. For example, a number of different windows for various applications 242 on the mobile device 222 may need to be displayed, and the display controller 226 may determine which to display, which to hide, and what to display or hide when there is overlap between various graphical objects. The display controller 226 can include various components to provide particular functionality for interacting with displayed components, which may be shared across multiple applications, and may be supplied, for example, by an operating system of the mobile device 222. - An
input controller 228 may be responsible for translating commands provided by a user of mobile device 222. For example, such commands may come from a keyboard, from touch screen functionality of the display screen 223, from the track ball 224, or from other such sources, including dedicated buttons or soft buttons (e.g., buttons whose functions may change over time, and whose functions may be displayed on areas of the display screen 223 that are adjacent to the particular buttons). The input controller 228 may determine, for example, in what area of the display commands are being received, and thus in what application being shown on the display the commands are intended for. In addition, it may interpret input motions on the touch screen 223 into a common format and pass those interpreted motions (e.g., short press, long press, flicks, and straight-line drags) to the appropriate application. The input controller 228 may also report such inputs to an event manager (not shown) that in turn reports them to the appropriate modules or applications. For example, a user viewing an options menu displayed on the display screen 223 selects one of the options using one of the track ball 224 or touch screen functionality of the mobile device 222. The input controller 228 receives the input and causes the mobile device 222 to perform functions based on the input. - A variety of
applications 242 may operate, generally on a common microprocessor, on the mobile device 222. The applications 242 may take a variety of forms, such as mapping applications, e-mail and other messaging applications, image viewing and editing applications, video capture and editing applications, web browser applications, music and video players, and various applications running within a web browser or running extensions of a web browser. In certain instances, one of the applications, an information sharing application 230 (e.g., a television remote control application), may be programmed to communicate information to server system 232 via network 250, along with meta data indicating that the user of device 222 wants to have corresponding information provided to a different device that is registered with the system 220 to the user. Communications may also be made directly to another device near device 222, without passing through the internet and a separate server system. - A
wireless interface 240 manages communication with a wireless network, which may be a data network that also carries voice communications. The wireless interface 240 may operate in a familiar manner, such as according to the examples discussed below, and may provide for communication by the mobile device 222 with messaging services such as text messaging, e-mail, and telephone voice mail messaging. In addition, the wireless interface 240 may support downloads and uploads of content and computer code over a wireless network. The wireless interface 240 may also communicate over short-range networks, such as with other devices in the same room as device 222, such as when results are provided to the device 222 and need to be forwarded automatically to another device in the manners discussed above and below. - A
camera controller 232 of the mobile device 222 receives image data from the camera and controls functionality of the camera. For example, the camera controller 232 can receive image data for one or more images (e.g., stationary pictures or real-time video images) from the camera and provide the image data to the display controller 226. The display controller 226 can then display the one or more images captured by the camera on the display screen 223. - Still referring to
FIG. 2B, in accordance with some implementations, the information sharing application 230 uses a GPS Unit 238 of the mobile device 222 to determine the location of the mobile device 222. For example, the GPS Unit 238 receives signals from one or more global positioning satellites, and can use the signals to determine the current location of the mobile device 222. In some implementations, rather than the GPS Unit 238, the mobile device 222 includes a module that determines a location of the mobile device 222 using transmission tower triangulation (which may also be performed on a server system) or another method of location identification. In some implementations, the mobile device 222 uses location information that is determined using the GPS Unit 238 so as to identify geo-coded information that is associated with the location of the mobile device 222. In such implementations, location information obtained or determined by the GPS Unit 238 is provided to the information sharing application 230. The information sharing application 230 can use the location information to identify geo-coded data 246 stored on the mobile device 222. - The geo-coded
data 246 includes information associated with particular geographic locations. For example, geo-coded data can include building names, business names and information, historical information, images, video files, and audio files associated with a particular location. As another example, geo-coded data associated with a location of a park may include hours for the park, the name of the park, information on plants located within the park, information on statues located within the park, historical information about the park, and park rules (e.g., “no dogs allowed”). The information sharing application 230 can use the current location of the mobile device 222 to identify information associated with geographic locations that are in close proximity to the location of the mobile device 222. The geo-coded data 246 can be stored on a memory of the mobile device 222, such as a hard drive, flash drive, or SD card. The mobile device 222 may also contain no pre-stored geo-coded data. The geographical information can be used in various ways, such as passing the data to the central server system 232, so that the central server system may identify the logged-in device closest to the mobile device 222, as that device is most likely the one to which the system 220 is to send content submitted by the device 222, or a result of the content submitted by the device. - The
device 222 uses a compass unit 236, or magnetometer, in some examples, e.g., to determine a current viewing direction of a camera on the device 222, within the horizontal plane of the camera. In other words, the compass unit 236 determines a direction in which a user of the mobile device 222 is looking with the mobile device 222. Viewing direction information provided by the compass unit 236 can be used to determine where information is to be shared with other devices, such as by a system determining to share information with a device in the direction in which the user is pointing his or her mobile device 222. In some implementations, the mobile device 222 further includes an accelerometer unit 234, which may further be used to identify a user's location, movement, or other such factors. - Still referring to
FIG. 2B, in accordance with some implementations, the mobile device 222 includes user data 248. The user data 248 can include user preferences or other information associated with a user of the mobile device 222. For example, the user data 248 can include a list of contacts and a list of IDs for other devices registered to a user. Such information can be used to ensure that information is passed from one person to another. - The particular
mobile device 222 shown here is generally directed to a smartphone such as smartphone 104 in FIG. 1 above and smartphone 208 in FIG. 2A above. Some or all of the features described here may also be provided with a television, including structures to enable the television to receive communications from the smartphone, and to submit queries and other communications over the internet, such as to search media-related databases for purposes of responding to user requests entered on a smartphone, but displayed on the television. -
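The input controller 228 described above interprets raw touch motions into a common format (short press, long press, flicks, and straight-line drags) before passing them to applications. A minimal sketch of such a classifier follows; the thresholds and names are invented for illustration and are not from the text.

```python
# Hypothetical thresholds; real values would be tuned per device.
LONG_PRESS_MS = 500          # press-and-hold cutoff
MOVE_THRESHOLD_PX = 10       # below this travel, the gesture is a press
FLICK_SPEED_PX_PER_MS = 1.0  # above this speed, a drag becomes a flick

def classify_gesture(duration_ms, dx, dy):
    """Map one touch gesture's duration and travel to a common format."""
    distance = (dx * dx + dy * dy) ** 0.5
    if distance < MOVE_THRESHOLD_PX:
        return "long_press" if duration_ms >= LONG_PRESS_MS else "short_press"
    if duration_ms > 0 and distance / duration_ms >= FLICK_SPEED_PX_PER_MS:
        return "flick"
    return "drag"
```

An event manager could then report the classified gesture, rather than raw coordinates, to whichever application owns the touched region of the display.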
FIG. 3A is a flow chart that shows a process for receiving a request from a first computer and supplying, to a second computer, information that is responsive to the request. In general, the process involves handling requests from one computing device, generating information responsive to those requests, and providing that generated information to a second computing device that is related to the first computing device via a particular user who has been assigned to both devices (e.g., by the fact of both devices being logged into the same user account when the process occurs). - The process begins at
box 302, where speech data is received by the process. For example, a search engine that is available to the public may receive various search queries that users of mobile telephones provide in spoken form. The system may recognize such submissions as being spoken queries in appropriate circumstances and may route them for proper processing. The speech data may in one example be sent with information identifying the device on which the data was received and a location of the device, in familiar manners. Such information may subsequently be used to identify an account for a user of the device, and to determine other devices that are registered to the user in the geographic location of the submitting device. - Thus, at
box 304, the speech is converted to text form. Such conversion may occur by normal mechanisms, though particular techniques may be used to improve the accuracy of the conversion without requiring users of the system to train the system for their particular voices. For example, a field of a form in which the cursor for the user was placed when they entered the query may include a label that describes the sort of information that is provided in the field, and such label information may be provided to a speech-to-text conversion system so as to improve the results of the conversion. As one example, if a user is entering text into a field of a television-related widget or gadget, the term “television” may be passed to the conversion system, and as a result, a speech model may be selected or modified so as to address television-related terms better, such as by elevating the importance of television titles and television character names in a speech model. Along with the speech-to-text conversion, in appropriate circumstances, language translation may also be performed on a submission, and text or audio may be returned in the target language of the submission. - At
box 306, the query is parsed and formatted. For example, certain control terms may be removed from the query (e.g., terms that precede the main body of the query and are determined not to be what the user is searching for, but are instead intended to control how the query is carried out), synonyms may be added to the query, and other changes may be made to the query to make it a better candidate as a search query. - At
box 308, the query is submitted to a search engine and results are received back from the search engine and formatted in an appropriate manner. For example, if the search results are results for various times that a television show is to be played, the results may be formatted into an HTML or similar mark-up document that provides an interactive electronic program guide showing the search results in a guide grid. A user who is reviewing the guide may then navigate up and down through channels in the guide, and back and forth through times in the guide, in order to see other shows being broadcast around the same time, and on different channels, as the identified television program search result. - At
box 310, the process identifies a related computer, meaning a computer that is related to the computer that submitted the query. Such a determination may be made, for example, by consulting profile information about a user who submitted the query, to identify all of the computing devices that the user has currently or previously registered with the system, and/or that are currently logged into the system. Thus, at box 312, the process determines whether a particular one of the computers that are associated with the user is currently logged in. If no such computer is currently logged in, or no appropriate computer is available to receive the content (e.g., no computer of a type that can display the content, and no computer geographically near the device that submitted the query), the process may store the results (box 314) that were to be sent to the other computer. Thus, for example, a user may make search queries while they are not able to view results at home, but such results may be presented to them at home as soon as they log back into their account with their home system (box 316). Alternatively, when the user logs in at another device, the system may notify them of pending deliveries from the previously-submitted queries, and they may be allowed to obtain delivery of the information from the queries when they would like. - At
box 318, results are delivered to the related computer that was selected in box 310. Such delivery may occur in a variety of forms, including by simply providing a common search results list or grouping to such related computer. The information may ordinarily be delivered via an HTML or similar mark-up document that may also call JavaScript or similar executable computer code. In this manner, for instance, a user of a smartphone may speak a command into the smartphone, have the command converted and/or translated, and provided to a second computer for processing by that second computer. -
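The deliver-or-store decision of boxes 312-316 can be sketched as follows, with an in-memory dictionary standing in for whatever persistence the system actually uses; all names here are illustrative, not from the text.

```python
# Pending results held for users with no logged-in, appropriate computer.
pending = {}    # user_id -> list of result sets awaiting delivery
delivered = []  # (device_id, results) pairs actually sent out

def deliver_results(user_id, results, logged_in_devices):
    """Send results to a related computer, or store them until login."""
    if logged_in_devices:
        delivered.append((logged_in_devices[0], results))
        return "delivered"
    pending.setdefault(user_id, []).append(results)
    return "stored"

def on_login(user_id, device_id):
    # Flush stored results once a device for this user logs back in.
    for results in pending.pop(user_id, []):
        delivered.append((device_id, results))
```

The alternative behavior described above (notifying the user of pending deliveries rather than pushing them immediately) would replace the flush in `on_login` with a prompt.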
FIG. 3B is a flow chart that shows a process for processing speech input to a television remote control to affect a display on an associated television. In general, the process is similar to that for FIG. 3A, but the process is centered here on a mobile device that receives voice input, submits it to a server system for resolution, and then passes the resolved text to a television system for further processing, such as for submission to a search engine and display of the search results on the television. As an alternative implementation, the spoken input to the smartphone may be converted to text and/or translated in order to be submitted to a communication application that is executing on the television, such as a chat application, so that a user can sit on his couch and provide spoken input to a textual chat application that is running on his television. - The process in this example begins at
box 320, where the smartphone is paired with a television. Such pairing may involve the two devices recognizing that they are able to access a network such as a WiFi LAN, and exchanging messages so as to recognize each other's existence, in order to facilitate future communication between the devices. A particular pairing example is discussed in more detail below with respect to FIG. 4C. - At
box 322, the smartphone receives a spoken query from its user. To receive such input, the smartphone may be equipped with a microphone in a familiar manner, and may be loaded with an application that assists in converting spoken input into text or into another language. Such an application may be independent of any particular other application on the device, and may act as a universal converter or translator for any application that is currently operating on the device and has followed an API for receiving converted or translated information from the application. - The application captures the spoken input and places it into a digital audio file that is immediately uploaded, at
box 324, to a server system for conversion and/or translation. The conversion/translation may also occur on the device if it has sufficient resources to perform an accurate conversion/translation. At box 326, the smartphone receives, in response to uploading the audio, text that has been created by the server system, and forwards the text to the television with which it is paired. Alternatively, the server system may return another audio file that represents the spoken input but in a different language. The smartphone may then wait while the television processes the transmitted text (box 328). In one example, if the user spoke a search query into the smartphone, text of the query may be transmitted to the television and the television may perform a local or server-based search using the text. The text may relate, for example, to a program, episodes of a program, actors, or songs—i.e., if a user wants to watch some television, music, or movie programming, and is trying to find something he or she will like. As one example, the user may seek to be presented with a list of all movies or television programs in which George Clooney has appeared, simply by speaking “George Clooney.” The television may limit its search to media properties, and exclude searches of news relating to George Clooney, by determining the context of the query—i.e., that the user is watching television and spoke the command into a television remote control application. - The user may then review the results that may be presented on the television. At
box 330, certain search results are also returned to the smartphone, either from the television or from the smartphone communicating with another remote server system. The results may be a sub-set of the results displayed on the television, or may be data for generating controls that permit the user to interact with results that are displayed on the television. For example, if ten results are returned for the most popular George Clooney projects, the television may display detail about each result along with graphical information for each. In turn, the smartphone may display basic textual labels and perhaps small thumbnail images for each of the results. A one-to-one correspondence between results displayed on the smartphone and results displayed on the television may be used to allow the user to look at the television but press on a corresponding result on the smartphone in order to select it (e.g., to have it played or recorded). - Thus, as noted, the user may interact with the search results, and may provide inputs to the smartphone (box 332), which inputs may be transmitted to the television (box 334), and reflected on the display there. For example, the user may select one of the search results, and the television may change channels to that result, be set to record the result, or begin streaming the result if it is available for streaming.
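The one-to-one correspondence described above amounts to both devices rendering the same ordered result list, with the smartphone showing a reduced projection and forwarding selections by index. A minimal sketch, using invented result data:

```python
# Detailed results as the television might display them (invented data).
tv_results = [
    {"title": "Result A", "detail": "synopsis, artwork, air times"},
    {"title": "Result B", "detail": "synopsis, artwork, air times"},
]

# The smartphone shows a reduced projection of the same ordered list.
phone_results = [{"label": r["title"]} for r in tv_results]

def on_phone_tap(index):
    # A tap on the phone selects the same-index item on the television,
    # e.g., to have it played or recorded.
    return tv_results[index]["title"]
```

Because both lists share one ordering, the user can look at the television's richer display while tapping the matching entry on the phone.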
- Thus, using this process, a user may take advantage of his or her smartphone's ability to convert spoken input to another language or to textual input. Such converted or translated input may then be automatically passed to the user's television and employed there in various useful manners. Such functionality may not have been available on the television, and the television may not have provided access to the same experiences as did the smartphone. As a result, the user may obtain an experience—using devices the user already owns for other purposes—that greatly exceeds the experience from using the devices separately.
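The trigger-word handling described earlier for the voice interface 210 (a word such as “television” spoken ahead of the query, stripped out after speech-to-text conversion because it is not truly part of the intended query) might look like the following sketch; the trigger list and function name are assumptions made for illustration.

```python
from typing import Optional, Tuple

# Hypothetical trigger words; the text gives "television" as one example.
TRIGGER_WORDS = {"television", "tv"}

def strip_trigger(transcribed: str) -> Tuple[Optional[str], str]:
    """Split converted speech into (trigger, query); trigger may be None."""
    words = transcribed.strip().split()
    if words and words[0].lower() in TRIGGER_WORDS:
        # The trigger word defines context but is not part of the query.
        return words[0].lower(), " ".join(words[1:])
    return None, " ".join(words)
```

The recovered trigger could then serve as the context signal used to bias the search toward media-related results.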
-
FIGS. 4A and 4B are swim lane diagrams for coordinating information submission and information provision between various computers and a central server system. In general, these figures show processes similar to that shown in FIG. 3A, but with particular emphasis on examples by which certain operations may be performed by particular components in a system. - Referring now to
FIG. 4A, the process begins at boxes - At
box 406, the first computer receives a query in a spoken manner from its user and submits that query to the server system. Such submission may involve packaging the spoken text into a sound file and submitting the sound file to the server system. The submission may occur by the user pressing a microphone button on a smart phone and turning on a recording capability for the smart phone that then automatically passes to the server system whatever sound was recorded by the user. The device may also crop the recording so that only spoken input, and not background noise, is passed to the server system. - At
box 408, the server system receives, converts, and formats the query. The converting involves converting from a sound format to a textual speech format using various speech-to-text techniques. The converting may also, or alternatively, involve translation from the language in which the query was spoken into a target language. The formatting may involve preparing the query in a manner that maximizes the chances of obtaining relevant results to the query, where such formatting may be needed to address an API for the particular search engine. At box 410, the appropriate formatted query is applied to a search engine to generate search results, and the search results are returned back from the search engine. In other examples described above and below, the conversion may occur during a first communication by the smartphone with a server system, and execution by the search engine may occur via a subsequent communication from another computer such as a television, after the smartphone passes the input to the other computer. - At
box 412, a target computer for the search query is identified, and may be any of a number of computers that have been associated with the account with which the computing device that submitted the query was associated. If there are multiple such computers available, various rules may be used to select the most appropriate device to receive the information, such as by identifying the geographic location of the computer from which the query was received and the geographic locations of the other devices, and sending the results to the device that is closest to the originating device. Such associating of another device with the results may occur at the time the results are generated or may occur at a later time. For example, the results may be generated and stored, and then the target device can be determined only after a user logs into the account from the determined target computer. - At
box 414, the search results are addressed and formatted, and they are sent to the target computer. Such sending of the results has been discussed above and may occur in a variety of manners. At box 418, the target computer, in this example computer 2, updates its display and status to show the search results and then to potentially permit follow-up interaction by a user of the target computer. Simultaneously in this example, a confirmation is sent to the source computer, or in this example computer 1. That computer updates its display and its status, such as by removing indications of the search query that was previously submitted, and switching into a different mode that is relevant to the submission that the user provided. For example, when a user opens a search box on their device and then chooses voice input, the user may search for the title of a television program, and data for generating an electronic program guide may be supplied to the user's television. At the same time, the user's smart phone may be made automatically to convert to a remote control device for navigating the program guide, so that the user may perform follow-up actions on their search results. In other examples, a tablet computer may be the target computer, and a user may interact with search results on the tablet computer, including by selecting content on the tablet computer and sweeping it with a finger input to cause it to be copied to the source computer, such as by being added to a clipboard corresponding to the source computer. - Referring now to
FIG. 4B, the process is similar to the process in FIG. 4A, but the results are routed through the first computer before ending up at the second computer. Thus, at box 424, the first computer receives a voice query from its user and submits that voice query to a server system. Such submissions have been described above. At box 426, the server system receives, converts, and formats the query. Again, such operations have been described in detail above. At box 428, the server system applies the query to a search engine, which generates results that are passed back to the server system from the search engine. At box 430, the formatted results are sent by the server system to the first computer, which then receives those results at box 432. Again, in an alternative implementation, the submission of the query to the search engine may be by a second computer after the first computer causes the spoken input to be converted to text and passes the text to the second computer. - The first computer then transmits the results at
box 434 over the previously-created short range data connection to the second computer. The second computer then receives and displays those results. Such a forwarding of the results from the first computer to the second computer may be automatic and transparent to the user, so that the user does not even know the results are passing from the first computer to the second computer, but instead simply sees that the results are appearing on the second computer. An information handling application on the first computer may be programmed to identify related devices that are known to belong to the same user as the initiating device, so as to cause information to be displayed on those devices rather than on the initiating, or source, device. - At
box 436, the display and status of the first computer is updated. Thus, for example, it may be determined that the user does not want to have a search box or voice search functionality continue to be displayed to them after they have received search results. Rather, the display of the first computer and its status may be changed to a different mode that has been determined to be suited for interaction with whatever information has been provided to the second computer. - In this manner, results generated by a hosted server system for user interaction may be directed to a computer other than the computer on which the user interaction occurred. Such re-directed delivery of the results may provide a variety of benefits, such as allowing a user to direct information to a device that is best able to handle, display, or manipulate the results. Also, the user may be able to split duties among multiple devices, so that the user can enter queries on one device and then review results on another device (and then pass portions of the results back to the first device for further manipulation).
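The parse-and-format step that recurs above (control terms removed and synonyms added, as at boxes 306 and 426) can be pictured as follows; both term lists are invented for illustration and stand in for whatever the system actually uses.

```python
# Hypothetical control terms and synonym table for query formatting.
CONTROL_TERMS = {"search", "find", "please"}
SYNONYMS = {"tv": ["television"], "movie": ["film"]}

def prepare_query(raw):
    """Strip leading control terms, then append synonyms for each word."""
    words = raw.lower().split()
    while words and words[0] in CONTROL_TERMS:
        words = words[1:]
    expanded = list(words)
    for w in words:
        expanded.extend(s for s in SYNONYMS.get(w, []) if s not in expanded)
    return " ".join(expanded)
```

Any further formatting needed to address a particular search engine's API would follow this normalization step.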
-
FIG. 4C is an activity diagram for pairing of two computer systems in preparation for computer-to-computer communications. The messages shown here are sent across a TCP channel established between a mobile device and a television device, where the mobile device takes the role of a client and the television device takes the role of a server. The messages, which may be variable in length, may include an unsigned integer (e.g., a 4-bit integer) that indicates the length of the message payload, followed by a serialized message. - The messaging sequence for pairing occurs in a short-lived SSL connection. A client, such as a smartphone, sends a sequence of messages to a server, such as a television, where each message calls for a specific acknowledgement from the server, and where the logic of the protocol does not branch.
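The length-prefixed wire format described above can be sketched as follows. A 4-byte big-endian unsigned integer is assumed here for the prefix (the text specifies only that an unsigned integer precedes the serialized message), and the payloads used in the example are placeholders rather than real serialized pairing messages.

```python
import struct

def frame(payload):
    """Prefix a serialized message with its length (4-byte big-endian)."""
    return struct.pack(">I", len(payload)) + payload

def unframe(stream):
    """Yield payloads from a buffer of concatenated framed messages."""
    offset = 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length
```

A receiver reading from the TCP channel would first read the fixed-size prefix, then read exactly that many payload bytes before deserializing.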
- The protocol in this example begins with
communication 440, where the client sends a PairingRequest message to initiate the pairing process. The server then acknowledges the pairing request at communication 442. At communication 444, the client sends to the server its options for handling challenges, or identification of the types of challenges it can handle. And at communication 446, the server sends its options—the kinds of challenges it can issue and the kinds of response inputs it can receive. - The client then sends, at
communication 448, configuration details for the challenge, and the server responds with an acknowledgement (communication 450). The client and server then exchange a secret (communications 452 and 454). The server may issue an appropriate challenge, such as by displaying a code, and the client responds to it (e.g., via the user interacting with the client), such as by echoing the code back. When the user responds to the challenge, the client checks the response and, if it is correct, sends a secret message; the server in turn checks the response and, if it is correct, sends a secret acknowledgement (communication 456). Subsequent communications may then be made on the channel that has been established by the process, in manners like those described above and below. -
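Because each client message calls for a specific reply and the protocol does not branch, the client side of the handshake can be driven by a fixed table. The message and acknowledgement names below are hypothetical stand-ins for communications 440-456 of FIG. 4C, and the transport callables are assumptions for the sketch.

```python
# Hypothetical names mirroring the sequence in FIG. 4C.
PAIRING_SEQUENCE = [
    ("PairingRequest", "PairingRequestAck"),  # communications 440, 442
    ("Options", "ServerOptions"),             # communications 444, 446
    ("Configuration", "ConfigurationAck"),    # communications 448, 450
    ("Secret", "SecretAck"),                  # communications 452-456
]

def run_pairing(send, recv):
    """Drive the client side of the non-branching pairing handshake.

    `send` transmits one message; `recv` returns the server's reply.
    Any unexpected reply raises, ending the short-lived pairing session.
    """
    for request, expected_reply in PAIRING_SEQUENCE:
        send(request)
        reply = recv()
        if reply != expected_reply:
            raise RuntimeError("pairing failed at %s: got %s" % (request, reply))
    return True
```

A fault at any step simply terminates the session, matching the protocol's rule that any non-okay status ends the exchange.
-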
FIG. 4D is a schematic diagram showing example messages that may be used in a computer-to-computer communication protocol. Each message in the protocol includes an outer message 458, which includes fields for identifying a version of the protocol that the transmitting device is using (box 460) in the form of an integer, and a status integer that defines the status of the protocol. A status of okay implies that a previous message was accepted, and that the next message in the protocol may be sent. Any other value indicates that the sender has experienced a fault, which may cause the session to terminate. A new session may then be attempted by the devices. The outer message 458 encapsulates all messages exchanged on the wire (or in a wireless mode) and contains common header fields. - Two other fields in the
outer message 458 may be made optional in some protocols, so that they are required when the status is okay, but not otherwise. The type field 464 contains an integer type number that describes the payload, while the payload field 466 contains the encapsulated message whose type matches the “type” field 464. Each of the remaining fields 468-480 may appear in the payload in different communications, where only one of the fields 468-480 would appear in any particular communication. As shown, the particular example fields here match respective communications in the activity diagram of FIG. 4C. -
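The outer-message rules above can be expressed as a small constructor. This is a sketch only: the field names and the numeric value chosen for the okay status are assumptions, since the patent does not specify the concrete encoding.

```python
STATUS_OK = 200  # assumed encoding for the "okay" status

def encode_outer_message(version, status, msg_type=None, payload=None):
    """Build the outer envelope of FIG. 4D as a dict.

    The type and payload fields are required only when the status is
    okay; a fault message carries just the version and status.
    """
    msg = {"protocol_version": version, "status": status}
    if status == STATUS_OK:
        if msg_type is None or payload is None:
            raise ValueError("type and payload are required when status is okay")
        msg["type"] = msg_type
        msg["payload"] = payload
    return msg
```

A receiver would dispatch on the type integer to decode the payload, and treat any non-okay status as ending the session.
-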
FIG. 4E is a swim lane diagram of a process for providing voice input to a television from a mobile computing device. In general, this process shows the approach described above (e.g., with respect to FIG. 1B) by which a mobile device like a smartphone causes a speech-to-text conversion to be performed, passes the text to another computer, such as a television, and the television then performs a search or other operation using the converted text. - The process begins at
box 482, where the mobile device receives a spoken input from a user. Depending on the context, the input may be assumed to be intended as a search query, such as when the user speaks the query while a search box is being displayed on the mobile device. At box 484, the mobile device transmits to the speech-to-text server system an audio file that includes the spoken input. The server system then processes the audio file (box 485), such as by recognizing the type of input from metadata that is provided with the digital audio file. The server system then converts the spoken audio to corresponding text and transmits the text back to the mobile device (box 486). - At
box 487, the mobile device receives the text that has been converted from the spoken input, and forwards the text to a television system at box 488. The text may be converted, reformatted, or translated into different forms by the mobile device before being forwarded. For example, the text may be placed into a transmission like that shown in FIG. 4D so as to match an agreed-upon protocol for communications between the mobile device and the television. - The television then processes the text according to an automated sequence that is triggered by receiving text in a particular type of transmission from the mobile device. At
box 489, for example, the television processes the text and then places it into a query that is transmitted to a search engine. The search engine in turn receives the query (box 490), and generates and transmits search results for the query in a conventional manner (box 491). In this example, the corpus for the query may be limited to media-related items so that the search results represent instances of media for a user to watch or listen to—as contrasted to ordinary web search results, and other such results. - The television then processes the results (box 492) in various manners. In one example, where the data returned from the search engine includes search results, the television may pass information about the results to the mobile device (box 493), and the device may display a portion of that information as a list of the search results (box 494). The television may also display the search results, in the same form as on the mobile device or in a different form (box 495), which may be a “richer” form that is more attuned to a larger display, such as by providing larger images, additional text, or animations (e.g., similar to the video clips that are often looped on the main screens for DVDs).
- A user of the mobile device and simultaneous viewer of the television may then interact with the results in various manners. For example, if the results are media-related search results, the user may choose to view or listen to one of the results. If the results are statements by other users in a chat session, the user may choose to respond—such as by again speaking a statement into the mobile device. At
box 496, the mobile device receives such user interaction, and transmits control signals at box 497 to the television. The television may then be made to respond to the actions (box 498), such as by changing channels, setting a recording on a PVR, starting the streaming of a program, or other familiar mechanisms by which a user may interact with a television or other form of computer. -
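The mobile-device side of the FIG. 4E flow can be sketched end to end with stand-in collaborators. Everything here is illustrative: the class and method names are invented, and the fakes simply play the roles of the speech-to-text server system and the paired television.

```python
class FakeSpeechToTextServer:
    """Stand-in for the remote speech-to-text server (boxes 485-486)."""
    def transcribe(self, audio):
        # A real server would run speech recognition on the audio file.
        return "nature documentaries"

class FakeTelevision:
    """Stand-in for the paired television (boxes 489 and onward)."""
    def __init__(self):
        self.queries = []
    def submit_query(self, text):
        # A real television would forward the text to a media search engine.
        self.queries.append(text)

def handle_spoken_query(audio, stt_server, television):
    """Mobile-device role in FIG. 4E: send audio out for conversion
    (box 484), receive the text (box 487), and forward it to the
    television (box 488)."""
    text = stt_server.transcribe(audio)
    television.submit_query(text)
    return text
```

Keeping the device's role this thin is the point of the design: the heavy work (recognition, searching) happens on the server and the television.
-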
FIG. 5 is a block diagram of computing devices 500 and 550, which may be used to implement the techniques described here. Computing device 500 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 550 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. Additionally, computing device 500 or 550 can include Universal Serial Bus (USB) flash drives. -
Computing device 500 includes a processor 502, memory 504, a storage device 506, a high-speed interface 508 connecting to memory 504 and high-speed expansion ports 510, and a low-speed interface 512 connecting to low-speed bus 514 and storage device 506. Each of the components 502, 504, 506, 508, 510, and 512 are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 502 can process instructions for execution within the computing device 500, including instructions stored in the memory 504 or on the storage device 506, to display graphical information for a GUI on an external input/output device, such as display 516 coupled to high-speed interface 508. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 500 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). - The
memory 504 stores information within the computing device 500. In one implementation, the memory 504 is a volatile memory unit or units. In another implementation, the memory 504 is a non-volatile memory unit or units. The memory 504 may also be another form of computer-readable medium, such as a magnetic or optical disk. - The
storage device 506 is capable of providing mass storage for the computing device 500. In one implementation, the storage device 506 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 504, the storage device 506, or memory on processor 502. - The
high-speed controller 508 manages bandwidth-intensive operations for the computing device 500, while the low-speed controller 512 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 508 is coupled to memory 504, display 516 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 510, which may accept various expansion cards (not shown). In the implementation, low-speed controller 512 is coupled to storage device 506 and low-speed expansion port 514. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. - The
computing device 500 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 520, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 524. In addition, it may be implemented in a personal computer such as a laptop computer 522. Alternatively, components from computing device 500 may be combined with other components in a mobile device (not shown), such as device 550. Each of such devices may contain one or more of computing devices 500 and 550, and an entire system may be made up of multiple computing devices 500 and 550 communicating with each other. -
Computing device 550 includes a processor 552, memory 564, an input/output device such as a display 554, a communication interface 566, and a transceiver 568, among other components. The device 550 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 550, 552, 564, 554, 566, and 568 are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. - The
processor 552 can execute instructions within the computing device 550, including instructions stored in the memory 564. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. Additionally, the processor may be implemented using any of a number of architectures. For example, the processor 552 may be a CISC (Complex Instruction Set Computer) processor, a RISC (Reduced Instruction Set Computer) processor, or a MISC (Minimal Instruction Set Computer) processor. The processor may provide, for example, for coordination of the other components of the device 550, such as control of user interfaces, applications run by device 550, and wireless communication by device 550. -
Processor 552 may communicate with a user through control interface 558 and display interface 556 coupled to a display 554. The display 554 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 556 may comprise appropriate circuitry for driving the display 554 to present graphical and other information to a user. The control interface 558 may receive commands from a user and convert them for submission to the processor 552. In addition, an external interface 562 may be provided in communication with processor 552, so as to enable near area communication of device 550 with other devices. External interface 562 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used. - The
memory 564 stores information within the computing device 550. The memory 564 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 574 may also be provided and connected to device 550 through expansion interface 572, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 574 may provide extra storage space for device 550, or may also store applications or other information for device 550. Specifically, expansion memory 574 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 574 may be provided as a security module for device 550, and may be programmed with instructions that permit secure use of device 550. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. - The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the
memory 564, expansion memory 574, or memory on processor 552, that may be received, for example, over transceiver 568 or external interface 562. -
Device 550 may communicate wirelessly through communication interface 566, which may include digital signal processing circuitry where necessary. Communication interface 566 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 568. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 570 may provide additional navigation- and location-related wireless data to device 550, which may be used as appropriate by applications running on device 550. -
Device 550 may also communicate audibly using audio codec 560, which may receive spoken information from a user and convert it to usable digital information. Audio codec 560 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 550. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 550. - The
computing device 550 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 580. It may also be implemented as part of a smartphone 582, personal digital assistant, or other similar mobile device. - Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
- These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
- To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
- The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), peer-to-peer networks (having ad-hoc or static members), grid computing infrastructures, and the Internet.
- The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made. For example, advantageous results may be achieved if the steps of the disclosed techniques were performed in a different sequence, if components in the disclosed systems were combined in a different manner, or if the components were replaced or supplemented by other components. Accordingly, other embodiments are within the scope of the following claims.
Claims (20)
1. A computer-implemented method for information sharing between a portable computing device and a television system, the method comprising:
receiving a spoken input from a user of the portable computing device, by the portable computing device;
submitting a digital recording of the spoken input from the portable computing device to a remote server system;
receiving from the remote server system a textual representation of the spoken input; and
automatically transmitting the textual representation from the portable computing device to the television system,
wherein the television system is programmed to submit the textual representation as a search query and to present to the user media-related results that are determined to be responsive to the spoken input.
2. The computer-implemented method of claim 1 , further comprising, before automatically transmitting the textual representation, pairing the portable computing device and television system over a local area network using a pairing protocol by which the portable computing device and television system communicate with each other in a predetermined manner.
3. The computer-implemented method of claim 1 , further comprising using the textual representation to perform a local search of files stored on recordable media located in the television system.
4. The computer-implemented method of claim 1 , further comprising automatically submitting the textual representation from the television system to a remote search engine, receiving in return search results that are responsive to a query in the textual representation, and presenting by the television system the search results.
5. The computer-implemented method of claim 4 , wherein the search results are presented as a group of music, movie, or television items that are determined to be responsive to the query, and are presented on the television system so that the user may select one or more of the items for viewing or listening.
6. The computer-implemented method of claim 4 , further comprising transmitting all or a portion of the search results from the television system to the portable computing device.
7. The computer-implemented method of claim 1 , further comprising providing to the search engine a request type for the search request that defines a type of information to be provided in the search results, and receiving search results that the search engine directs to the request type.
8. The computer-implemented method of claim 1 , further comprising determining on the portable computing device whether the spoken input is directed to the television system, and automatically transmitting the textual representation from the portable computing device to the television system only if the spoken input is determined to be directed to the television system.
9. The computer-implemented method of claim 1 , further comprising determining that the television system is not currently available to display the results, and storing the results or the textual representation until the television system is determined to be available to display the results.
10. The computer-implemented method of claim 1 , further comprising receiving from the television system an indication that a user has selected a portion of the search results, and automatically causing a display on the portable computing device to change in response to receiving the indication.
11. The computer-implemented method of claim 1 , further comprising receiving a subsequent user input on the portable computing device, and causing the presentation of the search results to change in response to receiving the subsequent user input.
12. A computer-implemented method for information sharing between computers, the method comprising:
receiving a spoken input at a first computer from a user of the first computer;
providing audio of the spoken input to a first remote server system;
receiving a response from the first remote server system, the response including text of the spoken input; and
automatically transmitting data generated from the response that includes the text of the spoken input, from the first computer to a second computer that is nearby the first computer,
wherein the second computer is programmed to automatically perform an action that causes a result generated by applying an operation to the transmitted data, to be presented to the user of the first computer.
13. The computer-implemented method of claim 12 , further comprising automatically submitting the text of the spoken request from the second computer to a remote search engine, receiving in return search results that are responsive to a query in the text of the spoken request, and presenting by the second computer the search results.
14. The computer-implemented method of claim 13 , wherein the search results are presented on the second computer as a group of music, movie, or television items that are determined to be responsive to the query, and are presented on the second computer so that the user may select one or more of the items for viewing or listening.
15. The computer-implemented method of claim 13 , further comprising transmitting all or a portion of the search results from the second computer to the first computer.
16. A computer-implemented system for information sharing, the system comprising:
a mobile computing device; and
software stored on the mobile computing device and operable on one or more processors of the mobile computing device to:
transmit spoken commands made by a user of the mobile computing device, to a remote server system;
receive in response, from the remote server system, text of the spoken commands; and
automatically provide the text received from the remote server system to a second computer operating in the close vicinity of the mobile computing device.
17. The system of claim 16 , further comprising the second computer, wherein the second computer is programmed to provide the text received from the remote server system to a second remote server system as a search query, and to use search results received in response from the second remote server system to present the search results on a display of the second computer.
18. The system of claim 17 , wherein the second computer comprises a television.
19. The system of claim 17 , wherein the mobile computing device and the second computer are programmed to automatically pair over a local data connection when each is within communication of the local data connection.
20. The system of claim 16 , wherein the second computer is programmed to submit the text to a search engine that performs searches directed specifically to media-related content.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/248,912 US20120042343A1 (en) | 2010-05-20 | 2011-09-29 | Television Remote Control Data Transfer |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US34687010P | 2010-05-20 | 2010-05-20 | |
US13/111,853 US8522283B2 (en) | 2010-05-20 | 2011-05-19 | Television remote control data transfer |
US13/248,912 US20120042343A1 (en) | 2010-05-20 | 2011-09-29 | Television Remote Control Data Transfer |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/111,853 Continuation US8522283B2 (en) | 2010-05-20 | 2011-05-19 | Television remote control data transfer |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120042343A1 true US20120042343A1 (en) | 2012-02-16 |
Family
ID=45329440
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/111,853 Active 2031-07-24 US8522283B2 (en) | 2010-05-20 | 2011-05-19 | Television remote control data transfer |
US13/248,912 Abandoned US20120042343A1 (en) | 2010-05-20 | 2011-09-29 | Television Remote Control Data Transfer |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/111,853 Active 2031-07-24 US8522283B2 (en) | 2010-05-20 | 2011-05-19 | Television remote control data transfer |
Country Status (1)
Country | Link |
---|---|
US (2) | US8522283B2 (en) |
US8781836B2 (en) | 2011-02-22 | 2014-07-15 | Apple Inc. | Hearing assistance system for providing consistent human speech |
US20140208363A1 (en) * | 2013-01-21 | 2014-07-24 | Ali (Zhuhai) Corporation | Searching method and digital stream system |
CN103970791A (en) * | 2013-02-01 | 2014-08-06 | 华为技术有限公司 | Method and device for recommending video from video database |
US20140223466A1 (en) * | 2013-02-01 | 2014-08-07 | Huawei Technologies Co., Ltd. | Method and Apparatus for Recommending Video from Video Library |
US8812294B2 (en) | 2011-06-21 | 2014-08-19 | Apple Inc. | Translating phrases from one language into another using an order-based set of declarative rules |
US8898568B2 (en) | 2008-09-09 | 2014-11-25 | Apple Inc. | Audio user interface |
US20140350925A1 (en) * | 2013-05-21 | 2014-11-27 | Samsung Electronics Co., Ltd. | Voice recognition apparatus, voice recognition server and voice recognition guide method |
US20140379599A1 (en) * | 2013-06-20 | 2014-12-25 | Fourthwall Media, Inc. | System and method for generating and transmitting data without personally identifiable information |
US8935167B2 (en) | 2012-09-25 | 2015-01-13 | Apple Inc. | Exemplar-based latent perceptual modeling for automatic speech recognition |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US9053089B2 (en) | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US20150189391A1 (en) * | 2014-01-02 | 2015-07-02 | Samsung Electronics Co., Ltd. | Display device, server device, voice input system and methods thereof |
US20150189362A1 (en) * | 2013-12-27 | 2015-07-02 | Samsung Electronics Co., Ltd. | Display apparatus, server apparatus, display system including them, and method for providing content thereof |
KR20150130635A (en) * | 2014-05-13 | 2015-11-24 | 한국전자통신연구원 | Method and apparatus for speech recognition using smart remote control |
WO2016003509A1 (en) * | 2014-06-30 | 2016-01-07 | Apple Inc. | Intelligent automated assistant for tv user interactions |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US9311043B2 (en) | 2010-01-13 | 2016-04-12 | Apple Inc. | Adaptive audio feedback system and method |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9330381B2 (en) | 2008-01-06 | 2016-05-03 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
US9357250B1 (en) * | 2013-03-15 | 2016-05-31 | Apple Inc. | Multi-screen video user interface |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
WO2016109529A1 (en) * | 2014-12-29 | 2016-07-07 | Quixey, Inc. | Viewing search results using multiple different devices |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US20170154625A1 (en) * | 2014-06-17 | 2017-06-01 | Lg Electronics Inc. | Video display device and operation method therefor |
US9693083B1 (en) | 2014-12-31 | 2017-06-27 | The Directv Group, Inc. | Systems and methods for controlling purchasing and/or reauthorization to access content using quick response codes and text messages |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
US9733821B2 (en) | 2013-03-14 | 2017-08-15 | Apple Inc. | Voice control to diagnose inadvertent activation of accessibility features |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
WO2018005334A1 (en) * | 2016-06-27 | 2018-01-04 | Amazon Technologies, Inc. | Systems and methods for routing content to an associated output device |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9946706B2 (en) | 2008-06-07 | 2018-04-17 | Apple Inc. | Automatic language identification for dynamic text processing |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US9977779B2 (en) | 2013-03-14 | 2018-05-22 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US10019994B2 (en) | 2012-06-08 | 2018-07-10 | Apple Inc. | Systems and methods for recognizing textual identifiers within a plurality of words |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10078487B2 (en) | 2013-03-15 | 2018-09-18 | Apple Inc. | Context-sensitive handling of interruptions |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
KR20190021407A (en) * | 2016-06-27 | 2019-03-05 | 아마존 테크놀로지스, 인크. | System and method for routing content to an associated output device |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255566B2 (en) | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10271093B1 (en) | 2016-06-27 | 2019-04-23 | Amazon Technologies, Inc. | Systems and methods for routing content to an associated output device |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US20190132622A1 (en) * | 2018-08-07 | 2019-05-02 | Setos Family Trust | System for temporary access to subscriber content over non-proprietary networks |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
WO2019164049A1 (en) * | 2018-02-21 | 2019-08-29 | Lg Electronics Inc. | Display device and operating method thereof |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10409454B2 (en) | 2014-03-05 | 2019-09-10 | Samsung Electronics Co., Ltd. | Smart watch device and user interface thereof |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10515147B2 (en) | 2010-12-22 | 2019-12-24 | Apple Inc. | Using statistical language models for contextual lookup |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10540976B2 (en) | 2009-06-05 | 2020-01-21 | Apple Inc. | Contextual voice commands |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10572476B2 (en) | 2013-03-14 | 2020-02-25 | Apple Inc. | Refining a search based on schedule items |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10642574B2 (en) | 2013-03-14 | 2020-05-05 | Apple Inc. | Device, method, and graphical user interface for outputting captions |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10672399B2 (en) | 2011-06-03 | 2020-06-02 | Apple Inc. | Switching between text data and audio data based on a mapping |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10771848B1 (en) * | 2019-01-07 | 2020-09-08 | Alphonso Inc. | Actionable contents of interest |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
WO2021061304A1 (en) * | 2019-09-26 | 2021-04-01 | Dish Network L.L.C. | Method and system for implementing an elastic cloud-based voice search utilized by set-top box (stb) clients |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US10999636B1 (en) | 2014-10-27 | 2021-05-04 | Amazon Technologies, Inc. | Voice-based content searching on a television based on receiving candidate search strings from a remote server |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11057682B2 (en) | 2019-03-24 | 2021-07-06 | Apple Inc. | User interfaces including selectable representations of content items |
US11070889B2 (en) | 2012-12-10 | 2021-07-20 | Apple Inc. | Channel bar user interface |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11151899B2 (en) | 2013-03-15 | 2021-10-19 | Apple Inc. | User training by intelligent digital assistant |
US11150922B2 (en) * | 2017-04-25 | 2021-10-19 | Google Llc | Initializing a conversation with an automated agent via selectable graphical element |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11194546B2 (en) | 2012-12-31 | 2021-12-07 | Apple Inc. | Multi-user TV user interface |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US11206449B2 (en) * | 2014-06-12 | 2021-12-21 | Google Llc | Adapting search query processing according to locally detected video content consumption |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11245967B2 (en) | 2012-12-13 | 2022-02-08 | Apple Inc. | TV side bar user interface |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11290762B2 (en) | 2012-11-27 | 2022-03-29 | Apple Inc. | Agnostic media delivery system |
US11297392B2 (en) | 2012-12-18 | 2022-04-05 | Apple Inc. | Devices and method for providing remote control hints on a display |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11334037B2 (en) | 2013-03-01 | 2022-05-17 | Comcast Cable Communications, Llc | Systems and methods for controlling devices |
US11341963B2 (en) * | 2017-12-06 | 2022-05-24 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling same |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11461397B2 (en) | 2014-06-24 | 2022-10-04 | Apple Inc. | Column interface for navigating in a user interface |
US11467726B2 (en) | 2019-03-24 | 2022-10-11 | Apple Inc. | User interfaces for viewing and accessing content on an electronic device |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11520858B2 (en) | 2016-06-12 | 2022-12-06 | Apple Inc. | Device-level authorization for viewing content |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US11543938B2 (en) | 2016-06-12 | 2023-01-03 | Apple Inc. | Identifying applications on which content is available |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11599332B1 (en) | 2007-10-04 | 2023-03-07 | Great Northern Research, LLC | Multiple shell multi faceted graphical user interface |
US11609678B2 (en) | 2016-10-26 | 2023-03-21 | Apple Inc. | User interfaces for browsing content from multiple content applications on an electronic device |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11683565B2 (en) | 2019-03-24 | 2023-06-20 | Apple Inc. | User interfaces for interacting with channels that provide content that plays in a media browsing application |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11720229B2 (en) | 2020-12-07 | 2023-08-08 | Apple Inc. | User interfaces for browsing and presenting content |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11797606B2 (en) | 2019-05-31 | 2023-10-24 | Apple Inc. | User interfaces for a podcast browsing and playback application |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11843838B2 (en) | 2020-03-24 | 2023-12-12 | Apple Inc. | User interfaces for accessing episodes of a content series |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11863837B2 (en) | 2019-05-31 | 2024-01-02 | Apple Inc. | Notification of augmented reality content on an electronic device |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US11899895B2 (en) | 2020-06-21 | 2024-02-13 | Apple Inc. | User interfaces for setting up an electronic device |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11934640B2 (en) | 2021-01-29 | 2024-03-19 | Apple Inc. | User interfaces for record labels |
US11962836B2 (en) | 2020-03-24 | 2024-04-16 | Apple Inc. | User interfaces for a media browsing application |
Families Citing this family (101)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020059415A1 (en) | 2000-11-01 | 2002-05-16 | Chang William Ho | Manager for device-to-device pervasive digital output |
US10915296B2 (en) | 2000-11-01 | 2021-02-09 | Flexiworld Technologies, Inc. | Information apparatus that includes a touch sensitive screen interface for managing or replying to e-mails |
US10860290B2 (en) | 2000-11-01 | 2020-12-08 | Flexiworld Technologies, Inc. | Mobile information apparatuses that include a digital camera, a touch sensitive screen interface, support for voice activated commands, and a wireless communication chip or chipset supporting IEEE 802.11 |
US11204729B2 (en) | 2000-11-01 | 2021-12-21 | Flexiworld Technologies, Inc. | Internet based digital content services for pervasively providing protected digital content to smart devices based on having subscribed to the digital content service |
AU2002226948A1 (en) | 2000-11-20 | 2002-06-03 | Flexiworld Technologies, Inc. | Mobile and pervasive output components |
US20020097408A1 (en) | 2001-01-19 | 2002-07-25 | Chang William Ho | Output device for universal data output |
KR100955316B1 (en) * | 2007-12-15 | 2010-04-29 | 한국전자통신연구원 | Multimodal fusion apparatus capable of remotely controlling electronic device and method thereof |
US10116972B2 (en) | 2009-05-29 | 2018-10-30 | Inscape Data, Inc. | Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device |
US9055335B2 (en) * | 2009-05-29 | 2015-06-09 | Cognitive Networks, Inc. | Systems and methods for addressing a media database using distance associative hashing |
US10375451B2 (en) | 2009-05-29 | 2019-08-06 | Inscape Data, Inc. | Detection of common media segments |
US8769584B2 (en) | 2009-05-29 | 2014-07-01 | TVI Interactive Systems, Inc. | Methods for displaying contextually targeted content on a connected television |
US9449090B2 (en) | 2009-05-29 | 2016-09-20 | Vizio Inscape Technologies, Llc | Systems and methods for addressing a media database using distance associative hashing |
US10949458B2 (en) | 2009-05-29 | 2021-03-16 | Inscape Data, Inc. | System and method for improving work load management in ACR television monitoring system |
US10192138B2 (en) | 2010-05-27 | 2019-01-29 | Inscape Data, Inc. | Systems and methods for reducing data density in large datasets |
US9838753B2 (en) | 2013-12-23 | 2017-12-05 | Inscape Data, Inc. | Monitoring individual viewing of television events using tracking pixels and cookies |
US20120030712A1 (en) * | 2010-08-02 | 2012-02-02 | At&T Intellectual Property I, L.P. | Network-integrated remote control with voice activation |
US20120059655A1 (en) * | 2010-09-08 | 2012-03-08 | Nuance Communications, Inc. | Methods and apparatus for providing input to a speech-enabled application program |
US8677402B2 (en) * | 2010-11-10 | 2014-03-18 | Sony Corporation | Second display support of character set unsupported on playback device |
US9426510B2 (en) * | 2011-02-11 | 2016-08-23 | Sony Corporation | Method and apparatus for searching over a network |
JP6034551B2 (en) * | 2011-03-16 | 2016-11-30 | 任天堂株式会社 | Information processing system, information processing apparatus, information processing program, and image display method |
US9380336B2 (en) * | 2011-06-20 | 2016-06-28 | Enseo, Inc. | Set-top box with enhanced content and system and method for use of same |
WO2013012107A1 (en) * | 2011-07-19 | 2013-01-24 | 엘지전자 주식회사 | Electronic device and method for controlling same |
US20130030789A1 (en) * | 2011-07-29 | 2013-01-31 | Reginald Dalce | Universal Language Translator |
KR101590332B1 (en) | 2012-01-09 | 2016-02-18 | 삼성전자주식회사 | Imaging apparatus and controlling method thereof |
KR101631594B1 (en) * | 2012-01-09 | 2016-06-24 | 삼성전자주식회사 | Display apparatus and controlling method thereof |
US8543398B1 (en) | 2012-02-29 | 2013-09-24 | Google Inc. | Training an automatic speech recognition system using compressed word frequencies |
US8374865B1 (en) | 2012-04-26 | 2013-02-12 | Google Inc. | Sampling training data for an automatic speech recognition system based on a benchmark classification distribution |
US8805684B1 (en) | 2012-05-31 | 2014-08-12 | Google Inc. | Distributed speaker adaptation |
US8571859B1 (en) | 2012-05-31 | 2013-10-29 | Google Inc. | Multi-stage speaker adaptation |
US8880398B1 (en) | 2012-07-13 | 2014-11-04 | Google Inc. | Localized speech recognition with offload |
KR20140029049A (en) * | 2012-08-31 | 2014-03-10 | 삼성전자주식회사 | Display apparatus and input signal processing method using the same |
US9123333B2 (en) | 2012-09-12 | 2015-09-01 | Google Inc. | Minimum bayesian risk methods for automatic speech recognition |
JP2014085780A (en) * | 2012-10-23 | 2014-05-12 | Samsung Electronics Co Ltd | Broadcast program recommending device and broadcast program recommending program |
JP2014109889A (en) * | 2012-11-30 | 2014-06-12 | Toshiba Corp | Content retrieval device, content retrieval method and control program |
US10424291B2 (en) * | 2012-12-28 | 2019-09-24 | Saturn Licensing Llc | Information processing device, information processing method, and program |
KR102009316B1 (en) * | 2013-01-07 | 2019-08-09 | 삼성전자주식회사 | Interactive server, display apparatus and controlling method thereof |
KR20140089871A (en) * | 2013-01-07 | 2014-07-16 | 삼성전자주식회사 | Interactive server, control method thereof and interactive system |
US20140201241A1 (en) * | 2013-01-15 | 2014-07-17 | EasyAsk | Apparatus for Accepting a Verbal Query to be Executed Against Structured Data |
US9894312B2 (en) * | 2013-02-22 | 2018-02-13 | The Directv Group, Inc. | Method and system for controlling a user receiving device using voice commands |
US10133546B2 (en) * | 2013-03-14 | 2018-11-20 | Amazon Technologies, Inc. | Providing content on multiple devices |
US9842584B1 (en) | 2013-03-14 | 2017-12-12 | Amazon Technologies, Inc. | Providing content on multiple devices |
US9747899B2 (en) | 2013-06-27 | 2017-08-29 | Amazon Technologies, Inc. | Detecting self-generated wake expressions |
US10440165B2 (en) | 2013-07-26 | 2019-10-08 | SkyBell Technologies, Inc. | Doorbell communication and electrical systems |
US11651665B2 (en) | 2013-07-26 | 2023-05-16 | Skybell Technologies Ip, Llc | Doorbell communities |
US20180343141A1 (en) | 2015-09-22 | 2018-11-29 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US20170263067A1 (en) | 2014-08-27 | 2017-09-14 | SkyBell Technologies, Inc. | Smart lock systems and methods |
US11004312B2 (en) | 2015-06-23 | 2021-05-11 | Skybell Technologies Ip, Llc | Doorbell communities |
US11889009B2 (en) | 2013-07-26 | 2024-01-30 | Skybell Technologies Ip, Llc | Doorbell communication and electrical systems |
US9142214B2 (en) * | 2013-07-26 | 2015-09-22 | SkyBell Technologies, Inc. | Light socket cameras |
US10672238B2 (en) | 2015-06-23 | 2020-06-02 | SkyBell Technologies, Inc. | Doorbell communities |
US10708404B2 (en) | 2014-09-01 | 2020-07-07 | Skybell Technologies Ip, Llc | Doorbell communication and electrical systems |
US9443527B1 (en) * | 2013-09-27 | 2016-09-13 | Amazon Technologies, Inc. | Speech recognition capability generation and control |
US9877080B2 (en) * | 2013-09-27 | 2018-01-23 | Samsung Electronics Co., Ltd. | Display apparatus and method for controlling thereof |
CN103593213B (en) * | 2013-11-04 | 2017-04-05 | 华为技术有限公司 | Text information input method and device |
KR102227599B1 (en) | 2013-11-12 | 2021-03-16 | 삼성전자 주식회사 | Voice recognition system, voice recognition server and control method of display apparatus |
US9584871B2 (en) * | 2013-12-19 | 2017-02-28 | Echostar Technologies L.L.C. | Smartphone bluetooth headset receiver |
US9955192B2 (en) | 2013-12-23 | 2018-04-24 | Inscape Data, Inc. | Monitoring individual viewing of television events using tracking pixels and cookies |
US9282358B2 (en) * | 2014-04-08 | 2016-03-08 | Yahoo! Inc. | Secure information exchange between devices using location and unique codes |
US10089985B2 (en) | 2014-05-01 | 2018-10-02 | At&T Intellectual Property I, L.P. | Smart interactive media content guide |
CN103974111B (en) * | 2014-05-22 | 2017-12-29 | 华为技术有限公司 | Method and apparatus for transferring data from an intelligent terminal to a television terminal |
US10687029B2 (en) | 2015-09-22 | 2020-06-16 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US11184589B2 (en) | 2014-06-23 | 2021-11-23 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
US9888216B2 (en) | 2015-09-22 | 2018-02-06 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US20170085843A1 (en) | 2015-09-22 | 2017-03-23 | SkyBell Technologies, Inc. | Doorbell communication systems and methods |
US20150373295A1 (en) * | 2014-06-24 | 2015-12-24 | Samsung Electronics Co., Ltd. | Apparatus and method for device configuration |
US10430156B2 (en) * | 2014-06-27 | 2019-10-01 | Nuance Communications, Inc. | System and method for allowing user intervention in a speech recognition process |
US9997036B2 (en) | 2015-02-17 | 2018-06-12 | SkyBell Technologies, Inc. | Power outlet cameras |
US9420329B2 (en) | 2014-10-21 | 2016-08-16 | Bby Solutions, Inc. | Multistream tuner stick device for receiving and streaming digital content |
US9420214B2 (en) | 2014-10-21 | 2016-08-16 | Bby Solutions, Inc. | Television tuner device for processing digital audiovisual content |
CN108337925B (en) | 2015-01-30 | 2024-02-27 | 构造数据有限责任公司 | Method for identifying video clips and displaying options viewed from alternative sources and/or on alternative devices |
US10742938B2 (en) | 2015-03-07 | 2020-08-11 | Skybell Technologies Ip, Llc | Garage door communication systems and methods |
US11575537B2 (en) | 2015-03-27 | 2023-02-07 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
CN104822093B (en) * | 2015-04-13 | 2017-12-19 | 腾讯科技(北京)有限公司 | Barrage dissemination method and device |
US11381686B2 (en) | 2015-04-13 | 2022-07-05 | Skybell Technologies Ip, Llc | Power outlet cameras |
US10204104B2 (en) * | 2015-04-14 | 2019-02-12 | Google Llc | Methods, systems, and media for processing queries relating to presented media content |
CA2982797C (en) | 2015-04-17 | 2023-03-14 | Inscape Data, Inc. | Systems and methods for reducing data density in large datasets |
US20180047269A1 (en) | 2015-06-23 | 2018-02-15 | SkyBell Technologies, Inc. | Doorbell communities |
EP3113138A1 (en) | 2015-06-30 | 2017-01-04 | Orange | Remote control of an electronic device with a selectable element |
EP3113140A1 (en) | 2015-06-30 | 2017-01-04 | Orange | User input processing for controlling remote devices |
CN108293140B (en) | 2015-07-16 | 2020-10-02 | 构造数据有限责任公司 | Detection of common media segments |
WO2017011792A1 (en) | 2015-07-16 | 2017-01-19 | Vizio Inscape Technologies, Llc | Prediction of future views of video segments to optimize system resource utilization |
KR20180030885A (en) | 2015-07-16 | 2018-03-26 | 인스케이프 데이터, 인코포레이티드 | System and method for dividing search indexes for improved efficiency in identifying media segments |
US10080062B2 (en) | 2015-07-16 | 2018-09-18 | Inscape Data, Inc. | Optimizing media fingerprint retention to improve system resource utilization |
US10706702B2 (en) | 2015-07-30 | 2020-07-07 | Skybell Technologies Ip, Llc | Doorbell package detection systems and methods |
US10770067B1 (en) * | 2015-09-08 | 2020-09-08 | Amazon Technologies, Inc. | Dynamic voice search transitioning |
US10379808B1 (en) * | 2015-09-29 | 2019-08-13 | Amazon Technologies, Inc. | Audio associating of computing devices |
CN105704547A (en) * | 2016-03-28 | 2016-06-22 | 苏州乐聚堂电子科技有限公司 | Intelligent television system |
US10043332B2 (en) | 2016-05-27 | 2018-08-07 | SkyBell Technologies, Inc. | Doorbell package detection systems and methods |
KR102542766B1 (en) * | 2016-11-17 | 2023-06-14 | 엘지전자 주식회사 | Display device and operating method thereof |
WO2018112089A1 (en) * | 2016-12-13 | 2018-06-21 | Viatouch Media Inc. | Methods and utilities for consumer interaction with a self service system |
US11157886B2 (en) | 2017-02-03 | 2021-10-26 | Viatouch Media Inc. | Cantilevered weight sensitive shelf, rail, and mounting system |
BR112019019430A2 (en) | 2017-04-06 | 2020-04-14 | Inscape Data Inc | computer program system, method and product |
US10909825B2 (en) | 2017-09-18 | 2021-02-02 | Skybell Technologies Ip, Llc | Outdoor security systems and methods |
US10063910B1 (en) * | 2017-10-31 | 2018-08-28 | Rovi Guides, Inc. | Systems and methods for customizing a display of information associated with a media asset |
USD905159S1 (en) | 2017-11-15 | 2020-12-15 | ViaTouch Media, Inc. | Vending machine |
CN108833983A (en) * | 2018-07-04 | 2018-11-16 | 百度在线网络技术(北京)有限公司 | Played data acquisition methods, device, equipment and storage medium |
US11211063B2 (en) * | 2018-11-27 | 2021-12-28 | Lg Electronics Inc. | Multimedia device for processing voice command |
US11211071B2 (en) * | 2018-12-14 | 2021-12-28 | American International Group, Inc. | System, method, and computer program product for home appliance care |
US11074790B2 (en) | 2019-08-24 | 2021-07-27 | Skybell Technologies Ip, Llc | Doorbell communication systems and methods |
US20220286726A1 (en) * | 2019-09-03 | 2022-09-08 | Lg Electronics Inc. | Display device and control method therefor |
CN113674742B (en) * | 2021-08-18 | 2022-09-27 | 北京百度网讯科技有限公司 | Man-machine interaction method, device, equipment and storage medium |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5267323A (en) * | 1989-12-29 | 1993-11-30 | Pioneer Electronic Corporation | Voice-operated remote control system |
AR020608A1 (en) | 1998-07-17 | 2002-05-22 | United Video Properties Inc | A METHOD AND A PROVISION TO SUPPLY A USER REMOTE ACCESS TO AN INTERACTIVE PROGRAMMING GUIDE BY A REMOTE ACCESS LINK |
US6721953B1 (en) * | 2000-02-11 | 2004-04-13 | International Business Machines Corporation | Display of television program information using dynamically-adjusted scroll rate |
US7392281B1 (en) | 2000-02-25 | 2008-06-24 | Navic Systems, Inc. | System and method for providing guaranteed delivery of messages to embedded devices over a data network |
US6980120B2 (en) | 2000-03-10 | 2005-12-27 | Yu Philip K | Universal remote control with digital recorder |
CN1196324C (en) * | 2000-08-21 | 2005-04-06 | 皇家菲利浦电子有限公司 | A voice controlled remote control with downloadable set of voice commands |
US20020087987A1 (en) | 2000-11-16 | 2002-07-04 | Dudkiewicz Gil Gavriel | System and method for creating and editing a viewer profile used in determining the desirability of video programming events |
US6747566B2 (en) * | 2001-03-12 | 2004-06-08 | Shaw-Yuan Hou | Voice-activated remote control unit for multiple electrical apparatuses |
US7324947B2 (en) | 2001-10-03 | 2008-01-29 | Promptu Systems Corporation | Global speech user interface |
US7023498B2 (en) * | 2001-11-19 | 2006-04-04 | Matsushita Electric Industrial Co. Ltd. | Remote-controlled apparatus, a remote control system, and a remote-controlled image-processing apparatus |
US7260538B2 (en) * | 2002-01-08 | 2007-08-21 | Promptu Systems Corporation | Method and apparatus for voice control of a television control device |
US7519534B2 (en) | 2002-10-31 | 2009-04-14 | Agiletv Corporation | Speech controlled access to content on a presentation medium |
US7885963B2 (en) * | 2003-03-24 | 2011-02-08 | Microsoft Corporation | Free text and attribute searching of electronic program guide (EPG) data |
US7460050B2 (en) * | 2003-09-19 | 2008-12-02 | Universal Electronics, Inc. | Controlling device using cues to convey information |
WO2005122144A1 (en) | 2004-06-10 | 2005-12-22 | Matsushita Electric Industrial Co., Ltd. | Speech recognition device, speech recognition method, and program |
JP4207900B2 (en) | 2004-12-22 | 2009-01-14 | ソニー株式会社 | Remote control system, remote commander, and remote control server |
US20070197164A1 (en) | 2006-02-23 | 2007-08-23 | Arnold Sheynman | Method and device for automatic bluetooth pairing |
DE102006042014B4 (en) * | 2006-09-07 | 2016-01-21 | Fm Marketing Gmbh | Remote control |
US9311394B2 (en) | 2006-10-31 | 2016-04-12 | Sony Corporation | Speech recognition for internet video search and navigation |
US7940338B2 (en) * | 2006-10-31 | 2011-05-10 | Inventec Corporation | Voice-controlled TV set |
JP5002283B2 (en) | 2007-02-20 | 2012-08-15 | キヤノン株式会社 | Information processing apparatus and information processing method |
US8631440B2 (en) * | 2007-04-30 | 2014-01-14 | Google Inc. | Program guide user interface |
US8000972B2 (en) | 2007-10-26 | 2011-08-16 | Sony Corporation | Remote controller with speech recognition |
US8789108B2 (en) * | 2007-11-20 | 2014-07-22 | Samsung Electronics Co., Ltd. | Personalized video system |
US20090320076A1 (en) * | 2008-06-20 | 2009-12-24 | At&T Intellectual Property I, L.P. | System and Method for Processing an Interactive Advertisement |
US20100071007A1 (en) | 2008-09-12 | 2010-03-18 | Echostar Global B.V. | Method and Apparatus for Control of a Set-Top Box/Digital Video Recorder Using a Mobile Device |
US8281343B2 (en) | 2009-05-19 | 2012-10-02 | Cisco Technology, Inc. | Management and display of video content |
US11012732B2 (en) * | 2009-06-25 | 2021-05-18 | DISH Technologies L.L.C. | Voice enabled media presentation systems and methods |
US20110067059A1 (en) | 2009-09-15 | 2011-03-17 | At&T Intellectual Property I, L.P. | Media control |
US8629940B2 (en) * | 2009-12-09 | 2014-01-14 | Echostar Technologies L.L.C. | Apparatus, systems and methods for media device operation preferences based on remote control identification |
2011
- 2011-05-19 US US13/111,853 patent/US8522283B2/en active Active
- 2011-09-29 US US13/248,912 patent/US20120042343A1/en not_active Abandoned
Cited By (508)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070186148A1 (en) * | 1999-08-13 | 2007-08-09 | Pixo, Inc. | Methods and apparatuses for display and traversing of links in page character array |
US8527861B2 (en) | 1999-08-13 | 2013-09-03 | Apple Inc. | Methods and apparatuses for display and traversing of links in page character array |
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US20070294083A1 (en) * | 2000-03-16 | 2007-12-20 | Bellegarda Jerome R | Fast, language-independent method for user authentication by voice |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US8345665B2 (en) | 2001-10-22 | 2013-01-01 | Apple Inc. | Text to speech conversion of text messages from mobile communication devices |
US20100076767A1 (en) * | 2001-10-22 | 2010-03-25 | Braintexter, Inc. | Text to speech conversion of text messages from mobile communication devices |
US8718047B2 (en) | 2001-10-22 | 2014-05-06 | Apple Inc. | Text to speech conversion of text messages from mobile communication devices |
US10348654B2 (en) | 2003-05-02 | 2019-07-09 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
US10623347B2 (en) | 2003-05-02 | 2020-04-14 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
US8458278B2 (en) | 2003-05-02 | 2013-06-04 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
US20070156910A1 (en) * | 2003-05-02 | 2007-07-05 | Apple Computer, Inc. | Method and apparatus for displaying information during an instant messaging session |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US20070100790A1 (en) * | 2005-09-08 | 2007-05-03 | Adam Cheyer | Method and apparatus for building an intelligent automated assistant |
US11928604B2 (en) | 2005-09-08 | 2024-03-12 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US9501741B2 (en) | 2005-09-08 | 2016-11-22 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US8614431B2 (en) | 2005-09-30 | 2013-12-24 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
US9389729B2 (en) | 2005-09-30 | 2016-07-12 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
US20100048256A1 (en) * | 2005-09-30 | 2010-02-25 | Brian Huppi | Automated Response To And Sensing Of User Activity In Portable Devices |
US9958987B2 (en) | 2005-09-30 | 2018-05-01 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
US9619079B2 (en) | 2005-09-30 | 2017-04-11 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
US8942986B2 (en) | 2006-09-08 | 2015-01-27 | Apple Inc. | Determining user intent based on ontologies of domains |
US9117447B2 (en) | 2006-09-08 | 2015-08-25 | Apple Inc. | Using event alert text as input to an automated assistant |
US8930191B2 (en) | 2006-09-08 | 2015-01-06 | Apple Inc. | Paraphrasing of user requests and results by automated digital assistant |
US20080129520A1 (en) * | 2006-12-01 | 2008-06-05 | Apple Computer, Inc. | Electronic device with enhanced audio feedback |
US20110161315A1 (en) * | 2007-02-23 | 2011-06-30 | Olivier Bonnet | Pattern Searching Methods and Apparatuses |
US11012942B2 (en) | 2007-04-03 | 2021-05-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US20080248797A1 (en) * | 2007-04-03 | 2008-10-09 | Daniel Freeman | Method and System for Operating a Multi-Function Portable Electronic Device Using Voice-Activation |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US8359234B2 (en) | 2007-07-26 | 2013-01-22 | Braintexter, Inc. | System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system |
US8909545B2 (en) | 2007-07-26 | 2014-12-09 | Braintexter, Inc. | System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system |
US9053089B2 (en) | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US8543407B1 (en) | 2007-10-04 | 2013-09-24 | Great Northern Research, LLC | Speech interface system and method for control and interaction with applications on a computing system |
US11599332B1 (en) | 2007-10-04 | 2023-03-07 | Great Northern Research, LLC | Multiple shell multi faceted graphical user interface |
US8364694B2 (en) | 2007-10-26 | 2013-01-29 | Apple Inc. | Search assistant for digital media assets |
US8943089B2 (en) | 2007-10-26 | 2015-01-27 | Apple Inc. | Search assistant for digital media assets |
US20090112647A1 (en) * | 2007-10-26 | 2009-04-30 | Christopher Volkert | Search Assistant for Digital Media Assets |
US9305101B2 (en) | 2007-10-26 | 2016-04-05 | Apple Inc. | Search assistant for digital media assets |
US8639716B2 (en) | 2007-10-26 | 2014-01-28 | Apple Inc. | Search assistant for digital media assets |
US8620662B2 (en) | 2007-11-20 | 2013-12-31 | Apple Inc. | Context-aware unit selection |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US20090164441A1 (en) * | 2007-12-20 | 2009-06-25 | Adam Cheyer | Method and apparatus for searching using an active ontology |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US11126326B2 (en) | 2008-01-06 | 2021-09-21 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
US9330381B2 (en) | 2008-01-06 | 2016-05-03 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
US10503366B2 (en) | 2008-01-06 | 2019-12-10 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
US8688446B2 (en) | 2008-02-22 | 2014-04-01 | Apple Inc. | Providing text input using speech data and non-speech data |
US9361886B2 (en) | 2008-02-22 | 2016-06-07 | Apple Inc. | Providing text input using speech data and non-speech data |
US20090225041A1 (en) * | 2008-03-04 | 2009-09-10 | Apple Inc. | Language input interface on a device |
US8289283B2 (en) | 2008-03-04 | 2012-10-16 | Apple Inc. | Language input interface on a device |
USRE46139E1 (en) | 2008-03-04 | 2016-09-06 | Apple Inc. | Language input interface on a device |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9946706B2 (en) | 2008-06-07 | 2018-04-17 | Apple Inc. | Automatic language identification for dynamic text processing |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9691383B2 (en) | 2008-09-05 | 2017-06-27 | Apple Inc. | Multi-tiered voice feedback in an electronic device |
US8768702B2 (en) | 2008-09-05 | 2014-07-01 | Apple Inc. | Multi-tiered voice feedback in an electronic device |
US8898568B2 (en) | 2008-09-09 | 2014-11-25 | Apple Inc. | Audio user interface |
US20100082344A1 (en) * | 2008-09-29 | 2010-04-01 | Apple, Inc. | Systems and methods for selective rate of speech and speech preferences for text to speech synthesis |
US20100082346A1 (en) * | 2008-09-29 | 2010-04-01 | Apple Inc. | Systems and methods for text to speech synthesis |
US8352272B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for text to speech synthesis |
US20100082328A1 (en) * | 2008-09-29 | 2010-04-01 | Apple Inc. | Systems and methods for speech preprocessing in text to speech synthesis |
US8396714B2 (en) | 2008-09-29 | 2013-03-12 | Apple Inc. | Systems and methods for concatenation of words in text to speech synthesis |
US20100082348A1 (en) * | 2008-09-29 | 2010-04-01 | Apple Inc. | Systems and methods for text normalization for text to speech synthesis |
US8712776B2 (en) | 2008-09-29 | 2014-04-29 | Apple Inc. | Systems and methods for selective text to speech synthesis |
US8352268B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for selective rate of speech and speech preferences for text to speech synthesis |
US8583418B2 (en) | 2008-09-29 | 2013-11-12 | Apple Inc. | Systems and methods of detecting language and natural language strings for text to speech synthesis |
US20100082347A1 (en) * | 2008-09-29 | 2010-04-01 | Apple Inc. | Systems and methods for concatenation of words in text to speech synthesis |
US8355919B2 (en) | 2008-09-29 | 2013-01-15 | Apple Inc. | Systems and methods for text normalization for text to speech synthesis |
US8762469B2 (en) | 2008-10-02 | 2014-06-24 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11900936B2 (en) | 2008-10-02 | 2024-02-13 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US8713119B2 (en) | 2008-10-02 | 2014-04-29 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US20100088100A1 (en) * | 2008-10-02 | 2010-04-08 | Lindahl Aram M | Electronic devices with voice command and contextual data processing capabilities |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9412392B2 (en) | 2008-10-02 | 2016-08-09 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US8296383B2 (en) | 2008-10-02 | 2012-10-23 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US8380507B2 (en) | 2009-03-09 | 2013-02-19 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
US20100228549A1 (en) * | 2009-03-09 | 2010-09-09 | Apple Inc | Systems and methods for determining the language to use for speech generated by a text to speech engine |
US8751238B2 (en) | 2009-03-09 | 2014-06-10 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10540976B2 (en) | 2009-06-05 | 2020-01-21 | Apple Inc. | Contextual voice commands |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US20110010179A1 (en) * | 2009-07-13 | 2011-01-13 | Naik Devang K | Voice synthesis and processing |
US20110066438A1 (en) * | 2009-09-15 | 2011-03-17 | Apple Inc. | Contextual voiceover |
US8682649B2 (en) | 2009-11-12 | 2014-03-25 | Apple Inc. | Sentiment prediction from textual data |
US8600743B2 (en) | 2010-01-06 | 2013-12-03 | Apple Inc. | Noise profile determination for voice-related feature |
US20110167350A1 (en) * | 2010-01-06 | 2011-07-07 | Apple Inc. | Assist Features For Content Display Device |
US20110166856A1 (en) * | 2010-01-06 | 2011-07-07 | Apple Inc. | Noise profile determination for voice-related feature |
US8311838B2 (en) | 2010-01-13 | 2012-11-13 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
US8670985B2 (en) | 2010-01-13 | 2014-03-11 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
US20110172994A1 (en) * | 2010-01-13 | 2011-07-14 | Apple Inc. | Processing of voice inputs |
US9311043B2 (en) | 2010-01-13 | 2016-04-12 | Apple Inc. | Adaptive audio feedback system and method |
US8731942B2 (en) | 2010-01-18 | 2014-05-20 | Apple Inc. | Maintaining context information between user interactions with a voice assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US8903716B2 (en) | 2010-01-18 | 2014-12-02 | Apple Inc. | Personalized vocabulary for digital assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US8799000B2 (en) | 2010-01-18 | 2014-08-05 | Apple Inc. | Disambiguation based on active input elicitation by intelligent automated assistant |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8892446B2 (en) | 2010-01-18 | 2014-11-18 | Apple Inc. | Service orchestration for intelligent automated assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10741185B2 (en) | 2010-01-18 | 2020-08-11 | Apple Inc. | Intelligent automated assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US8660849B2 (en) | 2010-01-18 | 2014-02-25 | Apple Inc. | Prioritizing selection criteria by automated assistant |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8706503B2 (en) | 2010-01-18 | 2014-04-22 | Apple Inc. | Intent deduction based on previous user interactions with voice assistant |
US8670979B2 (en) | 2010-01-18 | 2014-03-11 | Apple Inc. | Active input elicitation by intelligent automated assistant |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US9190062B2 (en) | 2010-02-25 | 2015-11-17 | Apple Inc. | User profiling for voice input processing |
US20110208524A1 (en) * | 2010-02-25 | 2011-08-25 | Apple Inc. | User profiling for voice input processing |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US10446167B2 (en) | 2010-06-04 | 2019-10-15 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US8639516B2 (en) | 2010-06-04 | 2014-01-28 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US8713021B2 (en) | 2010-07-07 | 2014-04-29 | Apple Inc. | Unsupervised document clustering using latent semantic density analysis |
US8719006B2 (en) | 2010-08-27 | 2014-05-06 | Apple Inc. | Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis |
US9075783B2 (en) | 2010-09-27 | 2015-07-07 | Apple Inc. | Electronic device with text error correction based on voice recognition data |
US8719014B2 (en) | 2010-09-27 | 2014-05-06 | Apple Inc. | Electronic device with text error correction based on voice recognition data |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10515147B2 (en) | 2010-12-22 | 2019-12-24 | Apple Inc. | Using statistical language models for contextual lookup |
US8781836B2 (en) | 2011-02-22 | 2014-07-15 | Apple Inc. | Hearing assistance system for providing consistent human speech |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10255566B2 (en) | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
US10672399B2 (en) | 2011-06-03 | 2020-06-02 | Apple Inc. | Switching between text data and audio data based on a mapping |
US8812294B2 (en) | 2011-06-21 | 2014-08-19 | Apple Inc. | Translating phrases from one language into another using an order-based set of declarative rules |
US8706472B2 (en) | 2011-08-11 | 2014-04-22 | Apple Inc. | Method for disambiguating multiple readings in language conversion |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US8762156B2 (en) | 2011-09-28 | 2014-06-24 | Apple Inc. | Speech recognition repair using contextual information |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US20130325466A1 (en) * | 2012-05-10 | 2013-12-05 | Clickberry, Inc. | System and method for controlling interactive video using voice |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US8775442B2 (en) | 2012-05-15 | 2014-07-08 | Apple Inc. | Semantic search using a single-source semantic model |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US20130332172A1 (en) * | 2012-06-08 | 2013-12-12 | Apple Inc. | Transmitting data from an automated assistant to an accessory |
US9674331B2 (en) * | 2012-06-08 | 2017-06-06 | Apple Inc. | Transmitting data from an automated assistant to an accessory |
US10019994B2 (en) | 2012-06-08 | 2018-07-10 | Apple Inc. | Systems and methods for recognizing textual identifiers within a plurality of words |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
CN103516711A (en) * | 2012-06-27 | 2014-01-15 | 三星电子株式会社 | Display apparatus, method for controlling display apparatus, and interactive system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US8935167B2 (en) | 2012-09-25 | 2015-01-13 | Apple Inc. | Exemplar-based latent perceptual modeling for automatic speech recognition |
US20140123185A1 (en) * | 2012-10-31 | 2014-05-01 | Samsung Electronics Co., Ltd. | Broadcast receiving apparatus, server and control methods thereof |
CN103796044A (en) * | 2012-10-31 | 2014-05-14 | 三星电子株式会社 | Broadcast receiving apparatus, server and control methods thereof |
US11290762B2 (en) | 2012-11-27 | 2022-03-29 | Apple Inc. | Agnostic media delivery system |
US11070889B2 (en) | 2012-12-10 | 2021-07-20 | Apple Inc. | Channel bar user interface |
US11317161B2 (en) | 2012-12-13 | 2022-04-26 | Apple Inc. | TV side bar user interface |
US11245967B2 (en) | 2012-12-13 | 2022-02-08 | Apple Inc. | TV side bar user interface |
US11297392B2 (en) | 2012-12-18 | 2022-04-05 | Apple Inc. | Devices and method for providing remote control hints on a display |
US11194546B2 (en) | 2012-12-31 | 2021-12-07 | Apple Inc. | Multi-user TV user interface |
CN103916686A (en) * | 2012-12-31 | 2014-07-09 | 三星电子株式会社 | Display apparatus and controlling method thereof |
US20140188486A1 (en) * | 2012-12-31 | 2014-07-03 | Samsung Electronics Co., Ltd. | Display apparatus and controlling method thereof |
US11822858B2 (en) | 2012-12-31 | 2023-11-21 | Apple Inc. | Multi-user TV user interface |
US20140195230A1 (en) * | 2013-01-07 | 2014-07-10 | Samsung Electronics Co., Ltd. | Display apparatus and method for controlling the same |
CN104904227A (en) * | 2013-01-07 | 2015-09-09 | 三星电子株式会社 | Display apparatus and method for controlling the same |
US20140208363A1 (en) * | 2013-01-21 | 2014-07-24 | Ali (Zhuhai) Corporation | Searching method and digital stream system |
US20140223466A1 (en) * | 2013-02-01 | 2014-08-07 | Huawei Technologies Co., Ltd. | Method and Apparatus for Recommending Video from Video Library |
CN103970791A (en) * | 2013-02-01 | 2014-08-06 | 华为技术有限公司 | Method and device for recommending video from video database |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US10714117B2 (en) | 2013-02-07 | 2020-07-14 | Apple Inc. | Voice trigger for a digital assistant |
US11557310B2 (en) | 2013-02-07 | 2023-01-17 | Apple Inc. | Voice trigger for a digital assistant |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US11636869B2 (en) | 2013-02-07 | 2023-04-25 | Apple Inc. | Voice trigger for a digital assistant |
US11334037B2 (en) | 2013-03-01 | 2022-05-17 | Comcast Cable Communications, Llc | Systems and methods for controlling devices |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US9733821B2 (en) | 2013-03-14 | 2017-08-15 | Apple Inc. | Voice control to diagnose inadvertent activation of accessibility features |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US10572476B2 (en) | 2013-03-14 | 2020-02-25 | Apple Inc. | Refining a search based on schedule items |
US10642574B2 (en) | 2013-03-14 | 2020-05-05 | Apple Inc. | Device, method, and graphical user interface for outputting captions |
US9977779B2 (en) | 2013-03-14 | 2018-05-22 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US11151899B2 (en) | 2013-03-15 | 2021-10-19 | Apple Inc. | User training by intelligent digital assistant |
US10078487B2 (en) | 2013-03-15 | 2018-09-18 | Apple Inc. | Context-sensitive handling of interruptions |
US9357250B1 (en) * | 2013-03-15 | 2016-05-31 | Apple Inc. | Multi-screen video user interface |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US10629196B2 (en) * | 2013-05-21 | 2020-04-21 | Samsung Electronics Co., Ltd. | Apparatus, system, and method for generating voice recognition guide by transmitting voice signal data to a voice recognition server which contains voice recognition guide information to send back to the voice recognition apparatus |
US11869500B2 (en) | 2013-05-21 | 2024-01-09 | Samsung Electronics Co., Ltd. | Apparatus, system, and method for generating voice recognition guide by transmitting voice signal data to a voice recognition server which contains voice recognition guide information to send back to the voice recognition apparatus |
US11024312B2 (en) | 2013-05-21 | 2021-06-01 | Samsung Electronics Co., Ltd. | Apparatus, system, and method for generating voice recognition guide by transmitting voice signal data to a voice recognition server which contains voice recognition guide information to send back to the voice recognition apparatus |
US20140350925A1 (en) * | 2013-05-21 | 2014-11-27 | Samsung Electronics Co., Ltd. | Voice recognition apparatus, voice recognition server and voice recognition guide method |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11727219B2 (en) | 2013-06-09 | 2023-08-15 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US20140379599A1 (en) * | 2013-06-20 | 2014-12-25 | Fourthwall Media, Inc. | System and method for generating and transmitting data without personally identifiable information |
US10019770B2 (en) * | 2013-06-20 | 2018-07-10 | Fourthwall Media, Inc. | System and method for generating and transmitting data without personally identifiable information |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US20150189362A1 (en) * | 2013-12-27 | 2015-07-02 | Samsung Electronics Co., Ltd. | Display apparatus, server apparatus, display system including them, and method for providing content thereof |
US9749699B2 (en) * | 2014-01-02 | 2017-08-29 | Samsung Electronics Co., Ltd. | Display device, server device, voice input system and methods thereof |
KR102210933B1 (en) * | 2014-01-02 | 2021-02-02 | 삼성전자주식회사 | Display device, server device, voice input system comprising them and methods thereof |
KR20150080684A (en) * | 2014-01-02 | 2015-07-10 | 삼성전자주식회사 | Display device, server device, voice input system comprising them and methods thereof |
US20150189391A1 (en) * | 2014-01-02 | 2015-07-02 | Samsung Electronics Co., Ltd. | Display device, server device, voice input system and methods thereof |
US10409454B2 (en) | 2014-03-05 | 2019-09-10 | Samsung Electronics Co., Ltd. | Smart watch device and user interface thereof |
US10649621B2 (en) | 2014-03-05 | 2020-05-12 | Samsung Electronics Co., Ltd. | Facilitating performing searches and accessing search results using different devices |
KR102098894B1 (en) * | 2014-05-13 | 2020-04-10 | 한국전자통신연구원 | Method and apparatus for speech recognition using smart remote control |
KR20150130635A (en) * | 2014-05-13 | 2015-11-24 | 한국전자통신연구원 | Method and apparatus for speech recognition using smart remote control |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US11699448B2 (en) | 2014-05-30 | 2023-07-11 | Apple Inc. | Intelligent assistant for home automation |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11670289B2 (en) | 2014-05-30 | 2023-06-06 | Apple Inc. | Multi-command single utterance input method |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US11810562B2 (en) | 2014-05-30 | 2023-11-07 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10878809B2 (en) | 2014-05-30 | 2020-12-29 | Apple Inc. | Multi-command single utterance input method |
US11206449B2 (en) * | 2014-06-12 | 2021-12-21 | Google Llc | Adapting search query processing according to locally detected video content consumption |
US11924507B2 (en) | 2014-06-12 | 2024-03-05 | Google Llc | Adapting search query processing according to locally detected video content consumption |
US10115395B2 (en) * | 2014-06-17 | 2018-10-30 | Lg Electronics Inc. | Video display device and operation method therefor |
US20170154625A1 (en) * | 2014-06-17 | 2017-06-01 | Lg Electronics Inc. | Video display device and operation method therefor |
US11461397B2 (en) | 2014-06-24 | 2022-10-04 | Apple Inc. | Column interface for navigating in a user interface |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
JP2017530567A (en) * | 2014-06-30 | 2017-10-12 | アップル インコーポレイテッド | Intelligent automatic assistant for TV user interaction |
WO2016003509A1 (en) * | 2014-06-30 | 2016-01-07 | Apple Inc. | Intelligent automated assistant for tv user interactions |
US10904611B2 (en) * | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US20170230709A1 (en) * | 2014-06-30 | 2017-08-10 | Apple Inc. | Intelligent automated assistant for tv user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10999636B1 (en) | 2014-10-27 | 2021-05-04 | Amazon Technologies, Inc. | Voice-based content searching on a television based on receiving candidate search strings from a remote server |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
WO2016109529A1 (en) * | 2014-12-29 | 2016-07-07 | Quixey, Inc. | Viewing search results using multiple different devices |
US10743048B2 (en) | 2014-12-31 | 2020-08-11 | The Directv Group, Inc. | Systems and methods for controlling purchasing and/or reauthorization to access content using quick response codes and text messages |
US9693083B1 (en) | 2014-12-31 | 2017-06-27 | The Directv Group, Inc. | Systems and methods for controlling purchasing and/or reauthorization to access content using quick response codes and text messages |
US10298981B2 (en) | 2014-12-31 | 2019-05-21 | The Directv Group, Inc. | Systems and methods for controlling purchasing and/or reauthorization to access content using quick response codes and text messages |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11842734B2 (en) | 2015-03-08 | 2023-12-12 | Apple Inc. | Virtual assistant activation |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10930282B2 (en) | 2015-03-08 | 2021-02-23 | Apple Inc. | Competing devices responding to voice triggers |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10681212B2 (en) | 2015-06-05 | 2020-06-09 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US11550542B2 (en) | 2015-09-08 | 2023-01-10 | Apple Inc. | Zero latency digital assistant |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10354652B2 (en) | 2015-12-02 | 2019-07-16 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10942703B2 (en) | 2015-12-23 | 2021-03-09 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11853647B2 (en) | 2015-12-23 | 2023-12-26 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US11657820B2 (en) | 2016-06-10 | 2023-05-23 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US11749275B2 (en) | 2016-06-11 | 2023-09-05 | Apple Inc. | Application integration with a digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US10942702B2 (en) | 2016-06-11 | 2021-03-09 | Apple Inc. | Intelligent device arbitration and control |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US11520858B2 (en) | 2016-06-12 | 2022-12-06 | Apple Inc. | Device-level authorization for viewing content |
US11543938B2 (en) | 2016-06-12 | 2023-01-03 | Apple Inc. | Identifying applications on which content is available |
WO2018005334A1 (en) * | 2016-06-27 | 2018-01-04 | Amazon Technologies, Inc. | Systems and methods for routing content to an associated output device |
CN109643548A (en) * | 2016-06-27 | 2019-04-16 | 亚马逊技术公司 | System and method for content to be routed to associated output equipment |
US11064248B2 (en) | 2016-06-27 | 2021-07-13 | Amazon Technologies, Inc. | Systems and methods for routing content to an associated output device |
EP4195025A1 (en) * | 2016-06-27 | 2023-06-14 | Amazon Technologies Inc. | Systems and methods for routing content to an associated output device |
US10271093B1 (en) | 2016-06-27 | 2019-04-23 | Amazon Technologies, Inc. | Systems and methods for routing content to an associated output device |
KR102360589B1 (en) * | 2016-06-27 | 2022-02-08 | Amazon Technologies, Inc. | Systems and methods for routing content to related output devices |
KR20190021407A (en) * | 2016-06-27 | 2019-03-05 | Amazon Technologies, Inc. | System and method for routing content to an associated output device |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US11609678B2 (en) | 2016-10-26 | 2023-03-21 | Apple Inc. | User interfaces for browsing content from multiple content applications on an electronic device |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11656884B2 (en) | 2017-01-09 | 2023-05-23 | Apple Inc. | Application integration with a digital assistant |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US11150922B2 (en) * | 2017-04-25 | 2021-10-19 | Google Llc | Initializing a conversation with an automated agent via selectable graphical element |
US11544089B2 (en) | 2017-04-25 | 2023-01-03 | Google Llc | Initializing a conversation with an automated agent via selectable graphical element |
US11853778B2 (en) | 2017-04-25 | 2023-12-26 | Google Llc | Initializing a conversation with an automated agent via selectable graphical element |
US10741181B2 (en) | 2017-05-09 | 2020-08-11 | Apple Inc. | User interface for correcting recognition errors |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10847142B2 (en) | 2017-05-11 | 2020-11-24 | Apple Inc. | Maintaining privacy of personal information |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11538469B2 (en) | 2017-05-12 | 2022-12-27 | Apple Inc. | Low-latency intelligent automated assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10909171B2 (en) | 2017-05-16 | 2021-02-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US11341963B2 (en) * | 2017-12-06 | 2022-05-24 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling same |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
WO2019164049A1 (en) * | 2018-02-21 | 2019-08-29 | Lg Electronics Inc. | Display device and operating method thereof |
US11733965B2 (en) * | 2018-02-21 | 2023-08-22 | Lg Electronics Inc. | Display device and operating method thereof |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US11900923B2 (en) | 2018-05-07 | 2024-02-13 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US11487364B2 (en) | 2018-05-07 | 2022-11-01 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11360577B2 (en) | 2018-06-01 | 2022-06-14 | Apple Inc. | Attention aware virtual assistant dismissal |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10720160B2 (en) | 2018-06-01 | 2020-07-21 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11431642B2 (en) | 2018-06-01 | 2022-08-30 | Apple Inc. | Variable latency device coordination |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US20190132622A1 (en) * | 2018-08-07 | 2019-05-02 | Setos Family Trust | System for temporary access to subscriber content over non-proprietary networks |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US10771848B1 (en) * | 2019-01-07 | 2020-09-08 | Alphonso Inc. | Actionable contents of interest |
US11783815B2 (en) | 2019-03-18 | 2023-10-10 | Apple Inc. | Multimodality in digital assistant systems |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11057682B2 (en) | 2019-03-24 | 2021-07-06 | Apple Inc. | User interfaces including selectable representations of content items |
US11750888B2 (en) | 2019-03-24 | 2023-09-05 | Apple Inc. | User interfaces including selectable representations of content items |
US11445263B2 (en) | 2019-03-24 | 2022-09-13 | Apple Inc. | User interfaces including selectable representations of content items |
US11683565B2 (en) | 2019-03-24 | 2023-06-20 | Apple Inc. | User interfaces for interacting with channels that provide content that plays in a media browsing application |
US11467726B2 (en) | 2019-03-24 | 2022-10-11 | Apple Inc. | User interfaces for viewing and accessing content on an electronic device |
US11705130B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | Spoken notifications |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11675491B2 (en) | 2019-05-06 | 2023-06-13 | Apple Inc. | User configurable task triggers |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11863837B2 (en) | 2019-05-31 | 2024-01-02 | Apple Inc. | Notification of augmented reality content on an electronic device |
US11360739B2 (en) | 2019-05-31 | 2022-06-14 | Apple Inc. | User activity shortcut suggestions |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11797606B2 (en) | 2019-05-31 | 2023-10-24 | Apple Inc. | User interfaces for a podcast browsing and playback application |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
WO2021061304A1 (en) * | 2019-09-26 | 2021-04-01 | Dish Network L.L.C. | Method and system for implementing an elastic cloud-based voice search utilized by set-top box (stb) clients |
US11317162B2 (en) | 2019-09-26 | 2022-04-26 | Dish Network L.L.C. | Method and system for navigating at a client device selected features on a non-dynamic image page from an elastic voice cloud server in communication with a third-party search service |
US11849192B2 (en) | 2019-09-26 | 2023-12-19 | Dish Network L.L.C. | Methods and systems for implementing an elastic cloud based voice search using a third-party search provider |
US11477536B2 (en) | 2019-09-26 | 2022-10-18 | Dish Network L.L.C. | Method and system for implementing an elastic cloud-based voice search utilized by set-top box (STB) clients |
US11303969B2 (en) | 2019-09-26 | 2022-04-12 | Dish Network L.L.C. | Methods and systems for implementing an elastic cloud based voice search using a third-party search provider |
US11962836B2 (en) | 2020-03-24 | 2024-04-16 | Apple Inc. | User interfaces for a media browsing application |
US11843838B2 (en) | 2020-03-24 | 2023-12-12 | Apple Inc. | User interfaces for accessing episodes of a content series |
US11924254B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Digital assistant hardware abstraction |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11899895B2 (en) | 2020-06-21 | 2024-02-13 | Apple Inc. | User interfaces for setting up an electronic device |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11750962B2 (en) | 2020-07-21 | 2023-09-05 | Apple Inc. | User identification using headphones |
US11720229B2 (en) | 2020-12-07 | 2023-08-08 | Apple Inc. | User interfaces for browsing and presenting content |
US11934640B2 (en) | 2021-01-29 | 2024-03-19 | Apple Inc. | User interfaces for record labels |
Also Published As
Publication number | Publication date |
---|---|
US20110313775A1 (en) | 2011-12-22 |
US8522283B2 (en) | 2013-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8522283B2 (en) | Television remote control data transfer | |
AU2010319860B2 (en) | Computer-to-computer communication | |
US11516537B2 (en) | Intelligent automated assistant for TV user interactions | |
US11481187B2 (en) | Systems and methods for generating a volume-based response for multiple voice-operated user devices | |
JP7159358B2 (en) | Video access method, client, device, terminal, server and storage medium | |
US7415537B1 (en) | Conversational portal for providing conversational browsing and multimedia broadcast on demand | |
US20090063645A1 (en) | System and method for supporting messaging using a set top box | |
EP3680896B1 (en) | Method for controlling terminal by voice, terminal, server and storage medium | |
JP2019525272A (en) | Approximate template matching for natural language queries | |
US20020095294A1 (en) | Voice user interface for controlling a consumer media data storage and playback device | |
JP2021093749A (en) | System and method for identifying content corresponding to language spoken in household | |
US8994774B2 (en) | Providing information to user during video conference | |
KR101897968B1 (en) | Method for providing interpreting sound information, customized interpretation of server and system implementing the same | |
WO2022237381A1 (en) | Method for saving conference record, terminal, and server | |
US20230084372A1 (en) | Electronic content glossary |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |