US20130085848A1 - Gesture based search system - Google Patents

Gesture based search system

Info

Publication number
US20130085848A1
Authority
US
United States
Prior art keywords
content
gesture
search
presented
gbss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/284,673
Inventor
Matthew G. Dyor
Royce A. Levien
Richard T. Lord
Robert W. Lord
Mark A. Malamud
Xuedong Huang
Marc E. Davis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elwha LLC
Original Assignee
Elwha LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/251,046 external-priority patent/US20130085843A1/en
Priority claimed from US13/269,466 external-priority patent/US20130085847A1/en
Priority claimed from US13/278,680 external-priority patent/US20130086056A1/en
Priority to US13/284,673 priority Critical patent/US20130085848A1/en
Priority to US13/284,688 priority patent/US20130085855A1/en
Application filed by Elwha LLC filed Critical Elwha LLC
Priority to US13/330,371 priority patent/US20130086499A1/en
Priority to US13/361,126 priority patent/US20130085849A1/en
Assigned to ELWHA LLC reassignment ELWHA LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HUANG, XUEDONG, LEVIEN, ROYCE A., DAVIS, MARC E., DYOR, MATTHEW G., MALAMUD, MARK A., LORD, RICHARD T., LORD, ROBERT W.
Priority to US13/595,827 priority patent/US20130117130A1/en
Priority to US13/598,475 priority patent/US20130117105A1/en
Priority to US13/601,910 priority patent/US20130117111A1/en
Publication of US20130085848A1 publication Critical patent/US20130085848A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Definitions

  • the present disclosure relates to methods, techniques, and systems for providing a gesture-based search system and, in particular, to methods, techniques, and systems for automatically initiating a search based upon gestured input.
  • the present application is related to and claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the “Related Applications”) (e.g., claims earliest available priority dates for other than provisional patent applications or claims benefits under 35 USC § 119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Related Application(s)). All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.
  • a user invokes one or more search engines and provides them with keywords that are meant to cause the search engine to return results that are relevant because they contain the same or similar keywords to the ones submitted by the user.
  • the user iterates using this process until he or she believes that the results returned are sufficiently close to what is desired. The better the user understands or knows what he or she is looking for, often the more relevant the results. Thus, such tools can often be frustrating when employed for information discovery where the user may or may not know much about the topic at hand.
  • search engines and search technology have been developed to increase the precision and correctness of search results returned, including arming such tools with the ability to add useful additional search terms (e.g., synonyms), rephrase queries, and take into account document related information such as whether a user-specified keyword appears in a particular position in a document.
  • search engines that utilize natural language processing capabilities have been developed.
  • While bookmarks available in some client applications provide an easy way for a user to return to a known location (e.g., web page), they do not provide a dynamic memory that assists a user in going from one display or document to another, and then to another.
  • Some applications provide “hyperlinks,” which are cross-references to other information, typically a document or a portion of a document.
  • hyperlink cross-references are typically selectable, and when selected by a user (such as by using an input device such as a mouse, pointer, pen device, etc.), result in the other information being displayed to the user.
  • a user running a web browser that communicates via the World Wide Web network may select a hyperlink displayed on a web page to navigate to another page encoded by the hyperlink.
  • Hyperlinks are typically placed into a document by the document author or creator, and, in any case, are embedded into the electronic representation of the document. When the location of the other information changes, the hyperlink is “broken” until it is updated and/or replaced.
  • users can also create such links in a document, which are then stored as part of the document representation.
  • FIG. 1A is a screen display of example gesture based input performed by an example Gesture Based Search System (GBSS) or process.
  • FIG. 1B is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process.
  • FIG. 1C is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process.
  • FIG. 1D is a block diagram of an example environment for performing searches using an example Gesture Based Search System (GBSS) or process.
  • FIG. 2A is an example block diagram of components of an example Gesture Based Search System.
  • FIG. 2B is an example block diagram of further components of the Input Module of an example Gesture Based Search System.
  • FIG. 2C is an example block diagram of further components of the Factor Determination Module of an example Gesture Based Search System.
  • FIG. 2D is an example block diagram of further components of the Source Input Determination Module of an example Gesture Based Search System.
  • FIG. 2E is an example block diagram of further components of the Auxiliary Content Determination Module of an example Gesture Based Search System.
  • FIG. 2F is an example block diagram of further components of the Presentation Module of an example Gesture Based Search System.
  • FIG. 3 is an example flow diagram of example logic for providing a gesture based search for auxiliary content.
  • FIG. 4 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 5 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 6 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 7 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 8A is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 8B is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 8C is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 8D is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 8E is an example flow diagram of example logic illustrating various example embodiments of block 825 of FIG. 8C .
  • FIG. 9 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 10 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • FIG. 11A is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3 .
  • FIG. 11B is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3 .
  • FIG. 11C is an example flow diagram of example logic illustrating various example embodiments of block 1108 of FIG. 11B .
  • FIG. 12 is an example flow diagram of example logic illustrating various example embodiments of block 308 of FIG. 3 .
  • FIG. 13A is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3 .
  • FIG. 13B is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3 .
  • FIG. 13C is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3 .
  • FIG. 14 is an example flow diagram of example logic illustrating various example embodiments of blocks 302 - 308 of FIG. 3 .
  • FIG. 15 is an example block diagram of a computing system for practicing embodiments of a Gesture Based Search System.
  • Embodiments described herein provide enhanced computer- and network-based methods, techniques, and systems for automatically initiating a search to present auxiliary content in a gesture based input system.
  • Example embodiments provide a Gesture Based Search System (GBSS), which enables a gesture-based user interface to invoke (e.g., execute, generate, initiate, perform, or cause to be executed, generated, initiated, performed, or the like) a search related to a portion of electronic input that has been indicated by a received gesture.
  • the GBSS allows a portion (e.g., an area, part, or the like) of electronically presented content to be dynamically indicated by a gesture.
  • the gesture may be provided in the form of some type of pointer, for example, a mouse, a touch sensitive display, a wireless device, a human body part, a microphone, a stylus, and/or a pointer that indicates a word, phrase, icon, image, or video, or may be provided in audio form.
  • the GBSS then examines the indicated portion in conjunction with a set of (e.g., one or more) factors to determine input to a search. The search is then automatically initiated with the determined source input.
  • the search may be provided, for example, by a third party search engine, a proprietary search engine, an off-the-shelf search engine or the like, communicatively coupled to the GBSS, and the source input is provided in a corresponding appropriate format. Once search result content is determined, the result content is then presented to the user.
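  • To make the flow just described concrete, the following minimal Python sketch mirrors the sequence gesture, indicated portion, source input, automatic search, and presentation. It is an illustration only; every function name and data field in it is hypothetical, and nothing here is prescribed by the disclosure.

```python
# Minimal sketch of the GBSS flow described above. All names are
# hypothetical stand-ins for the modules discussed below, not an
# actual implementation of the patent.

def resolve_indicated_portion(gesture: dict, content: str) -> str:
    # Stand-in: assume the gesture resolver already mapped the drawn
    # path to a span of the presented text (e.g., "Obama").
    start, end = gesture["span"]
    return content[start:end]

def determine_source_input(portion: str, factors: dict) -> str:
    # Combine the gestured content with factors; here we simply bias
    # the query toward a preferred source gleaned from prior history.
    preferred = factors.get("preferred_source")
    return f"{portion} site:{preferred}" if preferred else portion

def initiate_search(query: str) -> str:
    # Stand-in for invoking a third-party or proprietary search engine.
    return f"<result page for query: {query!r}>"

def handle_gesture(gesture: dict, content: str, factors: dict) -> str:
    portion = resolve_indicated_portion(gesture, content)
    query = determine_source_input(portion, factors)
    result = initiate_search(query)   # initiated automatically
    return result                     # presented as an overlay, etc.

print(handle_gesture({"span": (10, 15)}, "President Obama spoke today.",
                     {"preferred_source": "wikipedia.org"}))
```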
  • the input for the search is based upon content contained in the portion of the presented electronic content indicated by the gestured input, as well as possibly one or more of a set of factors.
  • Content may include, for example, a word, phrase, spoken utterance, image, video, pattern, and/or other audio signal.
  • the portion may be formed from contiguous parts or composed of separate non-contiguous parts, for example, a title with a disconnected sentence.
  • the indicated portion may represent the entire body of electronic content presented to the user.
  • the electronic content may comprise any type of content that can be presented for gestured input, including, for example, text, a document, music, a video, an image, a sound, or the like.
  • the GBSS may incorporate information from a set of factors (e.g., criteria, state, influencers, things, features, and the like) in addition to the content contained in the indicated portion.
  • the set of factors that may influence what is input to the search may include such things as context surrounding or otherwise relating to the indicated portion (as indicated by the gesture), such as other text, audio, graphics, and/or objects within the presented electronic content; some attribute of the gesture itself, such as size, direction, color, how the gesture is steered (e.g., smudged, nudged, adjusted, and the like); presentation device capabilities, for example, the size of the presentation device, whether text or audio is being presented; prior device communication history, such as what other devices have recently been used by this user or to which other devices the user has been connected; time of day; and/or prior history associated with the user, such as prior search history, navigation history, purchase history, and/or demographic information (e.g., age, gender, location, contact information, or the like).
  • the search result content is “auxiliary” (additional, supplemental, other, etc.) content in that it is additional to what is currently presented to the user as the presented electronic content.
  • This auxiliary content is then presented to the user in conjunction with the presented electronic content by, for example, use of an overlay; in a separate presentation element (e.g., window, pane, frame, or other construct) such as a window juxtaposed (e.g., next to, contiguous with, nearly up against) to the presented electronic content; and/or as an animation, for example, a pane that slides in to partially or totally obscure the presented electronic content.
  • Other methods of presenting the search results are contemplated.
  • FIG. 1A is a screen display of example gesture based input performed by an example Gesture Based Search System (GBSS) or process.
  • a presentation device, such as computer display screen 001 , is shown presenting two windows with electronic content, window 002 and window 003 .
  • the user (not shown) utilizes an input device, such as mouse 20 a and/or a microphone 20 b , to indicate a gesture (e.g., gesture 005 ) to the GBSS.
  • the GBSS determines to which portion of the electronic content displayed in window 002 the gesture 005 corresponds, potentially including what type of gesture.
  • gesture 005 was created using the mouse device 20 a and represents a closed path (shown in red), not quite a circle or oval, indicating that the user is interested in the entity “Obama.”
  • the gesture may be a circle, oval, closed path, polygon, or essentially any other shape recognizable by the GBSS.
  • the gesture may indicate content that is contiguous or non-contiguous. Audio may also be used to indicate some area of the presented content, such as by using a spoken word, phrase, and/or direction (e.g., command, order, directional command, or the like). Other embodiments provide additional ways to indicate input by means of a gesture.
  • the GBSS can be fitted to incorporate any technique for providing a gesture that indicates some area or portion (including any or all) of presented content. The GBSS has highlighted the text 007 to which gesture 005 is determined to correspond.
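  • One plausible way to resolve a closed-path gesture such as gesture 005 to displayed text (as with highlighted text 007 ) is to intersect the path's bounding box with per-word layout rectangles supplied by the rendering layer. The sketch below invents that layout data for illustration; the disclosure does not mandate any particular resolution technique.

```python
# Hypothetical resolution of a closed gesture path to displayed words.
# Layout data (word -> screen rectangle) is assumed to be available
# from the rendering layer; rectangles are (x0, y0, x1, y1).

def path_bbox(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def overlaps(a, b):
    # True when two rectangles intersect.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def words_under_gesture(gesture_points, layout):
    box = path_bbox(gesture_points)
    return [word for word, rect in layout if overlaps(box, rect)]

# A rough oval drawn around the word "Obama" on screen.
gesture = [(100, 40), (160, 35), (170, 60), (105, 65), (100, 40)]
layout = [("President", (20, 40, 95, 60)), ("Obama", (105, 40, 165, 60))]
print(words_under_gesture(gesture, layout))   # -> ['Obama']
```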
  • the GBSS determines from the indicated portion (the text “Obama”) and one or more factors, such as the user's prior navigation history, that the user is interested in more detailed information regarding the indicated portion.
  • the user has been known to employ “Wikipedia” for obtaining detailed information about entities.
  • the GBSS initiates a search on the entity Obama along with an indication that results from Wikipedia as a source are preferred.
  • any search engine can be employed, such as a keyword search engine like Bing, Google, Yahoo, and the like.
  • FIG. 1B is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process.
  • the auxiliary content is the resultant web page 006 on the entity “Obama” from Wikipedia. This content is shown as an overlay over one of the windows 003 on the presentation device 001 .
  • the user could continue searching using gestures from here to find more detailed information on Obama, for example, by indicating by a gesture an additional entity or action that the user desires information on.
  • an “entity” is any person, place, or thing, or a representative of the same, such as by an icon, image, video, utterance, etc.
  • An “action” is something that can be performed, for example, as represented by a verb, an icon, an utterance, or the like.
  • the GBSS determined from FIG. 1A that the user tends to use the computer for purchases.
  • the GBSS may surmise this, as one of the factors for choosing a source input, by looking at the user's prior navigation history, purchase history, or the like.
  • the GBSS sends an indication to the search engine that an opportunity for commercialization, such as an advertisement, is desirable.
  • FIG. 1C is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process.
  • an advertisement for a book 013 on the entity “Obama” (the gestured indicated portion) is presented alongside the gestured input 005 on window 002 .
  • the user could next use the gestural input system to select the advertisement on the book on “Obama” to create a purchase opportunity.
  • the advertisement is shown as an overlay over both windows 002 and 003 on the presentation device 001 .
  • the auxiliary content may be displayed in a separate pane, window, frame, or other construct.
  • the auxiliary content is brought into view in an animated fashion from one side of the screen and partially overlaid on top of the presented electronic content that the user is viewing.
  • the auxiliary content may appear to “move into place” from one side of a presentation device.
  • the auxiliary content may be placed in another window, pane, frame, or the like, which may or may not be juxtaposed, overlaid, or just placed in conjunction with the initial presented content. Other arrangements are of course contemplated.
  • the GBSS may interact with one or more remote and/or third party systems to present auxiliary content.
  • the GBSS may invoke a third party advertising supplier system to cause it to serve (e.g., deliver, forward, send, communicate, etc.) an appropriate advertisement oriented to other factors related to the user, such as gender, age, location, etc.
  • FIG. 1D is a block diagram of an example environment for performing searches using an example Gesture Based Search System (GBSS) or process.
  • One or more users 10 a , 10 b , etc. communicate with the GBSS 110 through one or more networks, for example, wireless and/or wired network 30 , by indicating gestures using one or more input devices, for example a mobile device 20 a , an audio device such as a microphone 20 b , or a pointer device such as mouse 20 c or the stylus on tablet device 20 d (or any other input device, such as a keyboard of a computer device or a human body part, not shown).
  • the one or more networks 30 may be any type of communications link, including for example, a local area network or a wide area network such as the Internet.
  • Search input is typically generated (e.g., defined, produced, instantiated, created etc.) “on-the-fly” as a user indicates, by means of a gesture, what portion of the presented content is interesting and a desire to perform a search.
  • Many different mechanisms for causing a search to be initiated and result content to be presented can be accommodated, for example, a “single-click” of a mouse button following the gesture, a command via an audio input device such as microphone 20 b , a secondary gesture, etc.
  • the search is initiated automatically as a direct result of the gesture—without additional input—for example, as soon as the GBSS determines the gesture is complete.
  • the GBSS 110 will determine to what portion the gesture corresponds. In some embodiments, the GBSS 110 may take into account other factors in addition to the indicated portion of the presented content in order to determine what source input to use for the search, as explained above.
  • the GBSS 110 determines the indicated portion 25 to which the gesture-based input corresponds, and then, based upon the indicated portion 25 , and possibly a set of factors 50 , (and, in the case of a context menu, based upon a set of action/entity rules 51 ) determines search input. Then, once the search is initiated and the auxiliary content obtained, the GBSS 110 presents the auxiliary content.
  • the set of factors (e.g., criteria) 50 may be dynamically determined, predetermined, local to the GBSS 110 , or stored or supplied externally from the GBSS 110 as described elsewhere.
  • This set of factors may include a variety of aspects, including, for example: context of the indicated portion of the presented content, such as other words, symbols, and/or graphics nearby the indicated portion, the location of the indicated portion in the presented content, syntactic and semantic considerations, etc.; attributes of the user, for example, prior search, purchase, and/or navigation history, demographic information, and the like; attributes of the gesture, for example, direction, size, shape, color, steering, and the like; and other criteria, whether currently defined or defined in the future. In this manner, the GBSS 110 allows searching to become “personalized” to the user as much as the system is tuned.
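  • As a purely illustrative example of such personalization, the sketch below folds a few hypothetical factor fields (nearby context terms, a preferred source, purchase habits, gesture size) into a structured source input for the search; none of these field names come from the disclosure.

```python
# Illustrative only: folding a set of factors into search source input.
# The factor fields below are hypothetical examples of the categories
# named above (context, user history, gesture attributes).

def build_source_input(gestured_text: str, factors: dict) -> dict:
    query = {"terms": [gestured_text]}

    # Context of the indicated portion: nearby words can disambiguate.
    query["terms"] += factors.get("nearby_terms", [])

    # Prior history: a habitually used source becomes a preference hint.
    if "preferred_source" in factors:
        query["preferred_source"] = factors["preferred_source"]

    # Purchase history: signal that commercial results are welcome.
    if factors.get("frequent_purchaser"):
        query["include_commerce_offers"] = True

    # Gesture attributes: e.g., a large gesture might broaden the search.
    if factors.get("gesture_size", 0) > 200:
        query["scope"] = "broad"
    return query

print(build_source_input("Obama", {
    "nearby_terms": ["election"],
    "preferred_source": "wikipedia.org",
    "frequent_purchaser": True,
    "gesture_size": 120,
}))
```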
  • the auxiliary content may be stored local to the GBSS 110 , for example, in auxiliary content data repository 40 associated with a computing system running the GBSS 110 , or may be stored or available externally, for example, from another computing system 42 , from third party content 43 (e.g., a 3rd party advertising system, external content, a social network, etc.), from auxiliary content stored using cloud storage 44 , from another device 45 (such as from a settop box, A/V component, etc.), from a mobile device connected directly or indirectly with the user (e.g., from a device associated with a social network associated with the user, etc.), and/or from other devices or systems not illustrated.
  • Third party content 43 is demonstrated as being communicatively connected to both the GBSS 110 directly and/or through the one or more networks 30 .
  • various of the devices and/or systems 42 - 46 also may be communicatively connected to the GBSS 110 directly or indirectly.
  • the auxiliary content may be any type of content and, for example, may include another document, an image, an audio snippet, an audio visual presentation, an advertisement, an opportunity for commercialization such as a bid, a product offer, a service offer, or a competition, and the like.
  • the GBSS 110 illustrated in FIG. 1D may be executing (e.g., running, invoked, instantiated, or the like) on a client or on a server device or computing system.
  • for example, when a user is running a client application (e.g., a web application, web browser, other application, etc.), the GBSS 110 components may be executing as part of the client application (for example, downloaded as a plug-in, active-x component, run as a script or as part of a monolithic application, etc.).
  • some portion or all of the GBSS 110 components may be executing as a server (e.g., server application, server computing system, software as a service, etc.) remotely from the client input and/or presentation devices 20 a - d.
  • FIG. 2A is an example block diagram of components of an example Gesture Based Search System.
  • the GBSS comprises one or more functional components/modules that work together to provide automatically initiated searches based upon gestured input.
  • a Gesture Based Search System 110 may reside in (e.g., execute thereupon, be stored in, operate with, etc.) a computing device 100 programmed with logic to effectuate the purposes of the GBSS 110 .
  • a GBSS 110 may be executed client side or server side.
  • the GBSS 110 is described as though it is operating as a server. It is to be understood that equivalent client side modules can be implemented.
  • client side modules need not operate in a client-server environment, as the GBSS 110 may be practiced in a standalone environment or even embedded into another apparatus.
  • the GBSS 110 may be implemented in hardware, software, or firmware, or in some combination.
  • although auxiliary content is typically presented on a client presentation device such as devices 20 *, the content handling may be implemented server-side or as some combination of both. Details of the computing device/system 100 are described below with reference to FIG. 15 .
  • a GBSS 110 comprises an input module 111 , a source (search) input determination module 112 , a factor determination module 113 , an automated search module 114 , and a presentation module 115 .
  • the GBSS 110 comprises additional and/or different modules as described further below.
  • Input module 111 is configured and responsible for determining the gesture and an indication of an area (e.g., a portion) of the presented electronic content indicated by the gesture.
  • the input module 111 comprises a gesture input detection and resolution module 121 to aid in this process.
  • the gesture input detection and resolution module 121 is responsible for determining, using techniques such as pattern matching, parsing, heuristics, etc., to what area a gesture corresponds and what word, phrase, image, audio clip, etc. is indicated.
  • Source input determination module 112 is configured and responsible for determining the input to be used as source for a search. As explained, this determination may be based upon the context—the portion indicated by the gesture and potentially a set of factors (e.g., criteria, properties, aspects, or the like) that help to define context.
  • the source input determination module 112 may invoke the factor determination module 113 to determine the one or more factors to use to assist in defining the source input for the search.
  • the factor determination module 113 may comprise a variety of implementations corresponding to different types of factors, for example, modules for determining prior history associated with the user, current context, gesture attributes, system attributes, or the like.
  • the source input determination module 112 may utilize a disambiguation module 123 to help disambiguate the indicated portion of content. For example, if a gesture has indicated the word “Bill,” the disambiguation module 123 may help distinguish whether the user is likely interested in a person whose name is Bill or a legislative proposal. In addition, based upon the indicated portion of content and the set of factors more than one source input may be identified. If this is the case, then the source input determination module 112 may use the disambiguation module 123 and other logic to select a source input for a search.
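  • Continuing the “Bill” example, a minimal (and entirely invented) form of such disambiguation scores candidate senses against words found near the indicated portion; a real disambiguation module would likely use much richer signals.

```python
# Hypothetical context-based disambiguation, in the spirit of module 123.
# Each candidate sense carries cue words; the sense whose cues best match
# the text surrounding the gestured word wins.

SENSES = {
    "Bill": {
        "person": {"he", "said", "met", "mr"},
        "legislation": {"senate", "congress", "vote", "passed", "law"},
    },
}

def disambiguate(word: str, surrounding_text: str) -> str:
    context = set(surrounding_text.lower().split())
    candidates = SENSES.get(word)
    if not candidates:
        return word                      # nothing to disambiguate
    best = max(candidates, key=lambda s: len(candidates[s] & context))
    return f"{word} ({best})"

print(disambiguate("Bill", "The Senate passed the bill after a close vote"))
# -> 'Bill (legislation)'
```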
  • the GBSS 110 uses the automated search module 114 to obtain a search result.
  • the search result determination module 122 is then used to obtain an auxiliary content to present.
  • the GBSS 110 then forwards (e.g., communicates, sends, pushes, etc.) the auxiliary content to the presentation module 115 to cause the presentation module 115 to present the auxiliary content.
  • the auxiliary content may be presented in a variety of manners, including via visual display, audio display, via a Braille printer, etc., and using different techniques, for example, overlays, animation, etc.
  • FIG. 2B is an example block diagram of further components of the Input Module of an example Gesture Based Search System.
  • the input module 111 may be configured to include a variety of other modules and/or logic.
  • the input module 111 may be configured to include a gesture input detection and resolution module 121 as described with reference to FIG. 2A .
  • the gesture input detection and resolution module 121 may be further configured to include a variety of modules and logic for handling a variety of input devices and systems.
  • gesture input detection and resolution module 121 may be configured to include an audio handling module 222 for handling gesture input by way of audio devices and/or a graphics handling module 224 for handling the association of gestures to graphics in content (such as an icon, image, movie, still, sequence of frames, etc.).
  • the input module 111 may be configured to include a natural language processing module 226 .
  • Natural language processing (NLP) module 226 may be used, for example, to detect whether a gesture is meant to indicate a word, a phrase, a sentence, a paragraph, or some other portion of presented electronic content using techniques such as syntactic and/or semantic analysis of the content.
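  • A lightweight stand-in for such analysis is to snap a raw gestured character span outward to word or sentence boundaries, as sketched below; a real NLP module 226 would use richer syntactic and semantic cues than these simple heuristics.

```python
import re

# Illustrative boundary snapping: expand a raw gestured character span
# to the enclosing word or sentence. A real NLP module would use
# syntactic/semantic analysis rather than these simple heuristics.

def snap_to_word(text: str, start: int, end: int) -> str:
    while start > 0 and text[start - 1].isalnum():
        start -= 1
    while end < len(text) and text[end].isalnum():
        end += 1
    return text[start:end]

def snap_to_sentence(text: str, start: int, end: int) -> str:
    left = max(text.rfind(". ", 0, start), 0)
    left = left + 2 if left else 0
    match = re.search(r"[.!?]", text[end:])
    right = end + match.end() if match else len(text)
    return text[left:right].strip()

TEXT = "President Obama spoke today. The speech covered the economy."
print(snap_to_word(TEXT, 12, 14))      # raw span 'am' -> 'Obama'
print(snap_to_sentence(TEXT, 12, 14))  # -> 'President Obama spoke today.'
```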
  • the input module 111 may be configured to include a gesture identification and attribute processing module 228 for handling other aspects of gesture determination such as determining the particular type of gesture (e.g., a circle, oval, polygon, closed path, check mark, box, or the like) or whether a particular gesture is a “steering” gesture that is meant to correct, for example, an initial path indicated by a gesture; a “smudge” which may have its own interpretation such as extend the gesture “here;” the color of the gesture, for example, if the input device supports the equivalent of a colored “pen” (e.g., pens that allow a user to select blue, black, red, or green); the size of a gesture (e.g., whether the gesture draws a thick or thin line, whether the gesture is a small or large circle, and the like); the direction of the gesture (up, down, across, etc.); and/or other attributes of a gesture.
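  • Several of these attributes can be computed directly from the gesture's raw point sequence, as in the illustrative sketch below (path length, bounding size, dominant direction, and whether the path closes on itself); the closure threshold shown is an arbitrary example, not a disclosed value.

```python
import math

# Illustrative extraction of gesture attributes from a point sequence:
# path length, bounding size, dominant direction, and closure (whether
# the gesture is a closed path such as a circle or polygon).

def gesture_attributes(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    return {
        "length": round(length, 1),
        "width": max(xs) - min(xs),
        "height": max(ys) - min(ys),
        # A path whose ends nearly meet is treated as closed.
        "closed": math.dist(points[0], points[-1]) < 0.1 * max(length, 1),
        "direction": "horizontal" if abs(dx) >= abs(dy) else "vertical",
    }

circle_ish = [(0, 0), (40, -10), (80, 0), (40, 10), (2, 1)]
print(gesture_attributes(circle_ish))   # closed, roughly horizontal
```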
  • the input module 111 is configured to include specific device handlers 125 (e.g., drivers) for detecting and controlling input from the various types of input devices, for example devices 20 *.
  • specific device handlers 125 may include a mobile device driver, a browser “device” driver, a remote display “device” driver, a speaker device driver, a Braille printer device driver, and the like.
  • the input module 111 may be configured to work with and or dynamically add other and/or different device handlers.
  • Other modules and logic may also be configured to be used with the input module 111 .
  • FIG. 2C is an example block diagram of further components of the Factor Determination Module of an example Gesture Based Search System.
  • the factor determination module 113 may be configured to include a prior history determination module 232 , a system attributes determination module 237 , other user attributes determination module 238 , a gesture attributes determination module 239 , and/or current context determination module 231 .
  • the prior history determination module 232 determines (e.g., finds, establishes, selects, realizes, resolves, etc.) prior histories associated with the user and is configured to include modules/logic to implement such.
  • the prior history determination module 232 may be configured to include a demographic history determination module 233 that is configured to determine demographics (such as age, gender, residence location, citizenship, languages spoken, or the like) associated with the user.
  • the prior history determination module 232 may be configured to include a purchase history determination module 234 that is configured to determine a user's prior purchases.
  • the purchase history may be available electronically, over the network, may be integrated from manual records, or some combination. In some systems, these purchases may be product and/or service purchases.
  • the prior history determination module 232 may be configured to include a search history determination module 235 that is configured to determine a user's prior searches. Such records may be stored locally with the GBSS 110 or may be available over the network 30 or using a third party service, etc.
  • the prior history determination module 232 also may be configured to include a navigation history determination module 236 that is configured to keep track of and/or determine how a user navigates through his or her computing system so that the GBSS 110 can determine aspects such as navigation preferences, commonly visited content (for example, commonly visited websites or bookmarked items), etc.
  • the factor determination module 113 may be configured to include a system attributes determination module 237 that is configured to determine aspects of the “system” that may influence or guide (e.g., may inform) the determination of which menu items are appropriate for the portion of content indicated by the gestured input. These may include aspects of the GBSS 110 , aspects of the system that is executing the GBSS 110 (e.g., the computing system 100 ), aspects of a system associated with the GBSS 110 (e.g., a third party system), network statistics, and/or the like.
  • the factor determination module 113 also may be configured to include other user attributes determination module 238 that is configured to determine other attributes associated with the user not covered by the prior history determination module 232 .
  • a user's social connectivity data may be determined by module 238 .
  • the factor determination module 113 also may be configured to include a gesture attributes determination module 239 .
  • the gesture attributes determination module 239 is configured to provide determinations of attributes of the gesture input, similar or different from those described relative to input module 111 and gesture attribute processing module 228 for determining to what content a gesture corresponds.
  • the gesture attributes determination module 239 may provide information and statistics regarding size, length, shape, color, and/or direction of a gesture.
  • the factor determination module 113 also may be configured to include a current context determination module 231 .
  • the current context determination module 231 is configured to provide determinations of attributes regarding what the user is viewing, the underlying content, context relative to other containing content (if known), and whether the gesture has selected a word or phrase that is located within certain areas of presented content (such as the title, abstract, a review, and so forth).
  • Other modules and logic may be also configured to be used with the factor determination module 113 .
  • FIG. 2D is an example block diagram of further components of the Source Input Determination Module of an example Gesture Based Search System.
  • the source input determination module 112 determines what input to use for a search as described elsewhere. It may use a disambiguation module 123 when more than one source input is determined by the GBSS to apply to the content of the indicated portion and any factors considered.
  • the disambiguation module 123 may utilize syntactic and/or semantic aids, user selection, default values, and the like to assist in the determination of source input to the search.
  • the source input determination module 112 of the GBSS 110 may use a context menu to aid in source input selection.
  • the source input determination module 112 may include a context menu handling module 211 to process and handle menu presentation and input.
  • the context menu handling module 211 may be configured to include a variety of other modules and/or logic.
  • the context menu handling module 211 may be configured to include an items determination module 212 for determining what menu items to present on a particular menu, an input handler 214 for providing an event loop to detect and handle user selection of a menu item, a viewer module 216 to determine what kind of “view” (as in a model/view/controller—MVC—model) to present (e.g., a pop-up, pull-down, dialog, interest wheel, and the like) and a presentation module 215 for determining when and what to present to the user and to determine an auxiliary content to present that is associated with a selection.
  • the items determination module 212 may use a rules for actions and/or entities determination module 214 to determine what to present on a particular menu.
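  • The shape of such a rules-driven menu might resemble the following sketch, in which an invented rule table maps the kind of thing gestured (entity versus action) to candidate menu items; the disclosure does not specify this data layout, and all names here are hypothetical.

```python
# Hypothetical rules-for-actions/entities table (cf. rules 51) mapping
# what was gestured to candidate context-menu items. Invented for
# illustration; the patent does not specify this data layout.

MENU_RULES = {
    "entity": ["Search web", "Look up on Wikipedia", "Find products"],
    "action": ["Search web", "Show how-to results"],
}

def menu_items(kind: str, gestured_text: str) -> list[str]:
    return [f"{item}: '{gestured_text}'" for item in MENU_RULES.get(kind, [])]

def handle_selection(items: list[str], choice: int) -> str:
    # Input handler: a selected item becomes the search to initiate.
    return f"initiating search for {items[choice]!r}"

items = menu_items("entity", "Obama")
print(items)
print(handle_selection(items, 1))
```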
  • FIG. 2E is an example block diagram of further components of the Auxiliary Content Determination Module of an example Gesture Based Search System.
  • the auxiliary content determination module 122 is provided by the automated search module 114 , which is an interface to a search engine (or the search engine itself).
  • the GBSS 110 may be configured to include an auxiliary content determination module 122 to determine (e.g., find, establish, select, realize, resolve, etc.) auxiliary or supplemental content that matches a search based upon the determined source input to the search.
  • the auxiliary content determination module 122 may be further configured to include a variety of different modules to aid in this determination process.
  • the auxiliary content determination module 122 may be configured to include an advertisement determination module 202 to determine one or more advertisements that can be associated with the obtained search result.
  • these advertisements may be provided by a variety of sources including from local storage, over a network (e.g., wide area network such as the Internet, a local area network, a proprietary network, an Intranet, or the like), from a known source provider, from third party content (available, for example from cloud storage or from the provider's repositories), and the like.
  • a third party advertisement provider system is used that is configured to accept queries for advertisements (“ads”) such as using keywords, to output appropriate advertising content.
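  • In lieu of a real third party ad server, the sketch below mimics such a keyword query against a small in-memory inventory; both the inventory and the overlap-based matching are invented for illustration, and a real deployment would call out to an external service instead.

```python
# Illustrative stand-in for a keyword-driven advertisement query.
# A real deployment would call out to a third-party ad server; here a
# tiny invented inventory plays that role.

AD_INVENTORY = [
    {"keywords": {"obama", "biography", "book"},
     "ad": "Biography of Obama -- 20% off today"},
    {"keywords": {"camera", "photo"},
     "ad": "Spring sale on digital cameras"},
]

def query_ads(keywords: set[str], limit: int = 1) -> list[str]:
    scored = [(len(ad["keywords"] & keywords), ad["ad"])
              for ad in AD_INVENTORY]
    scored = [s for s in scored if s[0] > 0]
    scored.sort(reverse=True)            # best keyword overlap first
    return [ad for _, ad in scored[:limit]]

print(query_ads({"obama", "book"}))
# -> ['Biography of Obama -- 20% off today']
```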
  • the auxiliary content determination module 122 is further configured to provide a supplemental content determination module 204 .
  • the supplemental content determination module 204 may be configured to determine other content that somehow relates to (e.g., associated with, supplements, improves upon, corresponds to, has the opposite meaning from, etc.) the search.
  • the auxiliary content determination module 122 is further configured to provide an opportunity for commercialization determination module 208 to find a commercialization opportunity appropriate for the area indicated by the gesture.
  • the commercialization opportunities may include events such as purchases and/or offers.
  • the opportunity for commercialization determination module 208 may be further configured to include an interactive entertainment determination module 201 , which may be further configured to include a role playing game determination module 203 , a computer assisted competition determination module 205 , a bidding determination module 206 , and a purchase and/or offer determination module 207 with logic to aid in determining a purchase and/or an offer as auxiliary content.
  • Other modules and logic may be also configured to be used with the auxiliary content determination module 122 .
  • FIG. 2F is an example block diagram of further components of the Presentation Module of an example Gesture Based Search System.
  • the presentation module 115 may be configured to include a variety of other modules and/or logic.
  • the presentation module 115 may be configured to include an overlay presentation module 252 for determining how to present auxiliary content determined by the content to present determination module 116 on a presentation device, such as tablet 20 d .
  • Overlay presentation module 252 may utilize knowledge of the presentation devices to decide how to integrate the auxiliary content as an “overlay” (e.g., covering up a portion or all of the underlying presented content). For example, when the GBSS 110 is run as a server application that serves web pages to a client side web browser, certain configurations using “html” commands or other tags may be used.
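  • One plausible (not prescribed) realization, generated server side to stay consistent with the other sketches, absolutely positions the auxiliary content over the page and uses a CSS transition for the slide-in animation discussed below; the element id and styling are invented for illustration.

```python
# Illustrative server-side generation of an HTML/CSS overlay for the
# auxiliary content. The styling choices (fixed positioning, a CSS
# transition for the slide-in animation) are examples, not prescribed
# by the patent.

OVERLAY_TEMPLATE = """
<div id="gbss-overlay"
     style="position: fixed; top: 0; right: -40%; width: 40%; height: 100%;
            background: #fff; box-shadow: -2px 0 8px rgba(0,0,0,.3);
            transition: right 0.3s ease-out;">
  {content}
</div>
<script>
  /* Slide the pane in from the right, partially covering the page. */
  requestAnimationFrame(function () {{
    document.getElementById('gbss-overlay').style.right = '0';
  }});
</script>
"""

def render_overlay(auxiliary_html: str) -> str:
    return OVERLAY_TEMPLATE.format(content=auxiliary_html)

print(render_overlay("<h2>Obama</h2><p>Result from Wikipedia...</p>"))
```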
  • Presentation module 115 also may be configured to include an animation module 254 .
  • the auxiliary content may be “moved in” from one side or portion of a presentation device in an animated manner.
  • the auxiliary content may be placed in a pane (e.g., a window, frame, pane, etc., as appropriate to the underlying operating system or application running on the presentation device) that is moved in from one side of the display onto the content previously shown (a form of navigation to the auxiliary content).
  • a pane e.g., a window, frame, pane, etc., as appropriate to the underlying operating system or application running on the presentation device
  • Other animations can be similarly incorporated.
  • Presentation module 115 also may be configured to include an auxiliary display generation module 256 for generating a new graphic or audio construct to be presented in conjunction with the content already displayed on the presentation device.
  • the new content is presented in a new window, frame, pane, or other auxiliary display construct.
  • Presentation module 115 also may be configured to include specific device handlers 258 , for example device drivers configured to communicate with mobile devices, remote displays, speakers, Braille printers, and/or the like as described elsewhere. Other or different presentation device handlers may be similarly incorporated.
  • Other modules and logic may also be configured to be used with the presentation module 115 .
  • the term “gesture” is used generally to mean any type of physical pointing gesture or its audio equivalent.
  • although the examples described herein often refer to online electronic content, such as content available over a network such as the Internet, the techniques described herein can also be used by a local area network system or in a system without a network.
  • the concepts and techniques described are applicable to other input and presentation devices. Essentially, the concepts and techniques described are applicable to any environment that supports some type of gesture-based input.
  • Example embodiments described herein provide applications, tools, data structures and other support to implement a Gesture Based Search System (GBSS) to be used for providing gesture based searching.
  • Other embodiments of the described techniques may be used for other purposes.
  • numerous specific details are set forth, such as data formats and code sequences, etc., in order to provide a thorough understanding of the described techniques.
  • the embodiments described also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the logic or code flow, different logic, or the like.
  • the scope of the techniques and/or components/modules described are not limited by the particular order, selection, or decomposition of logic described with reference to any particular routine.
  • FIGS. 3-15 include example flow diagrams of various example logic that may be used to implement embodiments of a Gesture Based Search System (GBSS).
  • the example logic will be described with respect to the example components of example embodiments of a GBSS as described above with respect to FIGS. 1A-2F .
  • the flows and logic may be executed in a number of other environments, systems, and contexts, and/or in modified versions of those described.
  • in the flow diagrams, various logic blocks (e.g., operations, events, activities, or the like) may be illustrated in a “box-within-a-box” manner.
  • Such illustrations may indicate that the logic in an internal box may comprise an optional example embodiment of the logic illustrated in one or more (containing) external boxes.
  • internal box logic may be viewed as independent logic separate from any associated external boxes and may be performed in other sequences or concurrently.
  • FIG. 3 is an example flow diagram of example logic for providing a gesture based search for auxiliary content.
  • Operational flow 300 includes several operations.
  • the logic performs receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system. This logic may be performed, for example, by the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B .
  • gesture input detection and resolution module 121 including the audio handling module 222 , graphics handling module 224 , natural language processing module 226 , and/or gesture identification and attribute processing module 228 may be used to assist in operation 302 .
  • the indicated portion may be formed from contiguous parts or composed of separate non-contiguous parts, for example, a title with a disconnected sentence.
  • the indicated portion may represent the entire body of electronic content presented to the user or a part.
  • the gestural input may be of different forms, including, for example, a circle, an oval, a closed path, a polygon, and the like.
  • the gesture may be from a pointing device, for example, a mouse, laser pointer, a body part, and the like, or from a source of auditory input.
  • the logic performs determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search. This logic may be performed, for example, by the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • the source input determination module 112 may use factor determination module 113 to determine a set of factors (e.g., the context of the gesture, the user, or of the presented content, prior history associated with the user or the system, attributes of the gestures, and the like) to use, in addition to determining what content has been indicated by the gesture, in order to determine an indication (e.g., a reference to, what, etc.) of source input to use for the search.
  • the content contained within the indicated portion of the presented electronic content may be anything, for example, a word, phrase, utterance, video, image, or the like.
  • the logic performs automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content.
  • This logic may be performed, for example, by the automated search module 114 of the GBSS 110 as described with reference to FIG. 2A .
  • the automatically initiating may include, for example, invoking (e.g., executing, calling, sending, or the like) a search engine (e.g., an off-the-shelf search tool, a third party auxiliary content supply tool such as an advertising server, an application residing elsewhere, and the like) with the determined source input to obtain search result content.
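  • As one concrete possibility, invoking an off-the-shelf engine can reduce to composing an HTTP request from the determined source input, as sketched below; the endpoint and parameter names are placeholders, since the disclosure ties the GBSS to no particular engine or API.

```python
from urllib.parse import urlencode

# Illustrative only: composing the request that would invoke a search
# engine with the determined source input. The endpoint and parameter
# names are placeholders; the patent does not mandate a particular API.

SEARCH_ENDPOINT = "https://search.example.com/query"   # hypothetical

def build_search_request(source_input: str, preferred_source: str = "") -> str:
    params = {"q": source_input}
    if preferred_source:
        # e.g., bias results toward Wikipedia, as in FIG. 1B.
        params["site"] = preferred_source
    return f"{SEARCH_ENDPOINT}?{urlencode(params)}"

# The GBSS would issue this request automatically once the gesture is
# determined to be complete, then hand the response to the presenter.
print(build_search_request("Obama", "wikipedia.org"))
```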
  • the search result content may be anything, including, for example, any type of auxiliary, supplemental, or other content (e.g., a web page, an electronic document, code, speech, an opportunity for commercialization, an advertisement, or the like).
  • the logic performs presenting the search result content in conjunction with the corresponding presented electronic content.
  • This logic may be performed, for example, by the presentation module 115 of the GBSS 110 described with reference to FIGS. 2A and 2F to present (e.g., output, display, render, draw, show, illustrate, etc.) the search result (e.g., an advertisement, web page, supplemental content, document, instructions, image, and the like) in conjunction with the presented electronic content (e.g., displaying the auxiliary content web page as shown in FIG. 1B or the auxiliary content advertisement as shown in FIG. 1C as an overlay on the web page that is presented corresponding to the gestured input).
  • FIG. 4 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 402 whose logic specifies the indicated source input comprises at least one of a word, a phrase, an utterance, an image, a video, a pattern, or an audio signal.
  • the logic of operation 402 may be performed, for example, by any of the modules of input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B .
  • gesture input detection and resolution module 121 may be used to assist in operation 402 to determine what content (e.g., word, phrase, image, video, pattern, audio signal, utterance, etc.) is contained within the indicated portion.
  • FIG. 5 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 502 whose logic specifies the content contained within the indicated portion of electronic content is a portion less than the entire presented electronic content.
  • the logic of operation 502 may be performed, for example, by the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B .
  • the content determined to be contained within (e.g., represented by, indicated, etc.) the gestured portion may include, for example, only a portion of the presented content, such as a title and abstract of an electronically presented document.
  • FIG. 6 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 602 whose logic specifies the content contained within the indicated portion of electronic content is the entire presented electronic content.
  • the logic of operation 602 may be performed, for example, by the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B .
  • the content determined to be contained within (e.g., represented by, indicated, etc.) the gestured portion may include, for example, the entire presented content, such as a whole document.
  • FIG. 7 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 702 whose logic specifies the content contained within the indicated portion of electronic content includes an audio portion.
  • the logic of operation 702 may be performed, for example, by an audio handling module 222 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B .
  • gesture input detection and resolution module 121 may be configured to include an audio handling module 222 for handling gesture input by way of audio devices such as microphone 20 b .
  • the audio portion may be, for example, a spoken title of a presented document.
  • operation 304 may further comprise an operation 703 whose logic specifies the content contained within the indicated portion of electronic content includes at least a word or a phrase.
  • the logic of operation 703 may be performed, for example, by the natural language processing module 226 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B .
  • NLP module 226 may be used, for example, to detect whether a gesture is meant to indicate a word, a phrase, a sentence, a paragraph, or some other portion of presented electronic content using techniques such as syntactic and/or semantic analysis of the content.
  • the word or phrase may be any word or phrase located in or indicated by the electronically presented content.
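  • As a rough illustration of such syntactic expansion (the resolve_gesture_scope helper below is hypothetical, and real NLP analysis would use tokenizers and parsers rather than regular expressions):

```python
import re

def resolve_gesture_scope(text: str, start: int, end: int) -> str:
    """Expand a rough gesture span [start, end) to the nearest word or sentence boundary."""
    # Snap outward to whitespace so whole words are captured.
    while start > 0 and not text[start - 1].isspace():
        start -= 1
    while end < len(text) and not text[end].isspace():
        end += 1
    span = text[start:end].strip()
    # If the gesture covered most of a sentence, return the whole sentence instead.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if span in sentence and len(span) > 0.6 * len(sentence):
            return sentence
    return span

doc = "Gestures select content. A circled phrase becomes the search input."
print(resolve_gesture_scope(doc, 25, 38))  # -> "A circled phrase"
```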
  • operation 304 may include an operation 704 whose logic specifies the content contained within the indicated portion of electronic content includes at least a graphical object, image, and/or icon.
  • the logic of operation 704 may be performed, for example, by the graphics handling module 224 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B .
  • the graphics handling module 224 may be configured to handle the association of gestures to graphics located or indicated by the presented content (such as an icon, image, movie, still, sequence of frames, etc.).
  • operation 304 may include an operation 705 whose logic specifies the content contained within the indicated portion of electronic content includes an utterance.
  • the logic of operation 705 may be performed, for example, by an audio handling module 222 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B .
  • gesture input detection and resolution module 121 may be configured to include an audio handling module 222 for handling gesture input by way of audio devices such as microphone 20 b .
  • the utterance may be, for example, a spoken word of a presented document, or a command, or a sound.
  • operation 304 may include an operation 706 whose logic specifies the content contained within the indicated portion of electronic content comprises non-contiguous parts or contiguous parts.
  • the logic of operation 706 may be performed, for example, by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B .
  • the contiguous parts may represent a continuous area of the presented content, such as a sentence, a portion of a paragraph, a sequence of images, or the like.
  • Non-contiguous parts may include separate portions of the presented content that together comprise the indicated portion, such as a title and an abstract, a paragraph and the name of an author, a disconnected image and a spoken sentence, or the like.
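  • A minimal sketch of distinguishing contiguous from non-contiguous parts might coalesce the gestured spans, as below; the Span representation is assumed for illustration.

```python
from typing import List, Tuple

Span = Tuple[int, int]  # half-open [start, end) character offsets

def merge_indicated_parts(spans: List[Span]) -> List[Span]:
    """Coalesce overlapping or touching spans; disjoint results are non-contiguous parts."""
    merged: List[Span] = []
    for start, end in sorted(spans):
        if merged and start <= merged[-1][1]:  # overlaps or touches the previous part
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# A title plus an abstract gestured separately -> two non-contiguous parts.
print(merge_indicated_parts([(0, 40), (120, 300), (30, 60)]))  # [(0, 60), (120, 300)]
```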
  • operation 304 may include an operation 707 whose logic specifies the content contained within the indicated portion of electronic content is determined using syntactic and/or semantic rules.
  • the logic of operation 707 may be performed, for example, by the natural language processing module 226 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B .
  • NLP module 226 may be used, for example, to detect whether a gesture is meant to indicate a word, a phrase, a sentence, a paragraph, or some other portion of presented electronic content using techniques such as syntactic and/or semantic analysis of the content.
  • the word or phrase may be any word or phrase located in or indicated by the electronically presented content.
  • FIG. 8A is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 802 whose logic specifies the set of factors includes context of other text, audio, graphics, and/or objects within the presented electronic content.
  • the logic of operation 802 may be performed, for example, by the current context determination module 231 provided by the factor determination module 113 of the GBSS 110 described with reference to FIGS. 2A and 2C to determine (e.g., retrieve, designate, resolve, etc.) context related information from the currently presented content, including other text, audio, graphics, and/or objects.
  • operation 802 may further comprise an operation 803 whose logic specifies the set of factors includes an attribute of the gesture.
  • the logic of operation 803 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture itself (e.g., color, size, direction, shape, and so forth).
  • operation 803 may further include operation 804 whose logic specifies the attribute of the gesture is the size of the gesture.
  • the logic of operation 804 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as size.
  • Size of the gesture may include, for example, width and/or length, and other measurements appropriate to the input device 20 *.
  • operation 803 may include an operation 805 whose logic specifies the attribute of the gesture is a direction of the gesture.
  • the logic of operation 805 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as direction.
  • Direction of the gesture may include, for example, up or down, east or west, and other measurements or commands appropriate to the input device 20 *.
  • operation 803 may include an operation 806 whose logic specifies the attribute of the gesture is a color.
  • the logic of operation 806 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as color.
  • Color of the gesture may include, for example, a pen and/or ink color as well as other measurements appropriate to the input device 20 *.
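  • For illustration, the attribute factors discussed above (size, direction, color) might be captured in a record like the following hypothetical GestureAttributes type:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GestureAttributes:
    """Attributes of a gesture usable as inference factors (field names are illustrative)."""
    width_px: float   # size: width of the gestured region
    height_px: float  # size: length/height of the gestured region
    direction: str    # e.g., "up", "down", "east", "west"
    color: str        # pen/ink color, if the input device supports it

g = GestureAttributes(width_px=180.0, height_px=42.0, direction="east", color="red")
# A wide, brightly colored gesture might, for example, weight the enclosed
# content more heavily when inferring the source input.
print(g)
```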
  • operation 803 may include an operation 807 whose logic specifies the attribute of the gesture is a measure of steering of the gesture.
  • the logic of operation 807 may be performed, for example by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as steering. Steering of the gesture may occur when, for example, an initial gesture is indicated (e.g., on a mobile device) and the user desires to correct or nudge it in a certain direction.
  • operation 807 may further include an operation 808 whose logic specifies the steering of the gesture is accomplished by smudging the input device.
  • the logic of operation 808 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as smudging.
  • Smudging of the gesture may occur when, for example, an initial gesture is indicated (e.g., on a mobile device) and the user desires to correct or nudge it in a certain direction by, for example “smudging” the gesture using for example, a finger. This type of action may be particularly useful on a touch screen input device.
  • operation 807 may include an operation 809 whose logic specifies the steering of the gesture is performed by a handheld gaming accessory.
  • the logic of operation 809 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as steering.
  • the steering is performed by a handheld gaming accessory such as a particular type of input device 20 *.
  • the gaming accessory may include a joy stick, a handheld controller, or the like.
  • operation 807 may include an operation 810 whose logic specifies the steering of the gesture is a measure of adjustment of the gesture.
  • the logic of operation 810 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C .
  • a gesture may be adjusted (e.g., modified, extended, smeared, smudged, redone) by any mechanism, including, for example, adjusting the gesture itself, or, for example, by modifying what the gesture indicates, for example, using a context menu, selecting a portion of the indicated gesture, and so forth.
  • FIG. 8B is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 811 whose logic specifies the set of factors are associated with weights that are taken into consideration in determining the indication of source input.
  • the logic of operation 811 may be performed, for example, by the factor determination module 113 of the GBSS 110 described with reference to FIGS. 2A and 2C .
  • the attributes of the gesture may be more important, hence weighted more heavily, than other attributes, such as the prior navigation history of the user. Any form of weighting, whether explicit or implicit, may be used.
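  • One illustrative (and assumed, not prescribed) way to realize explicit weighting is a weighted sum over normalized factor values:

```python
from typing import Dict

def score_candidate(factor_values: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine factor values (each normalized to 0..1) into one score using explicit weights.

    Factors missing a weight default to 0 (i.e., are ignored); the weighting
    scheme itself is an illustrative assumption.
    """
    return sum(value * weights.get(name, 0.0) for name, value in factor_values.items())

weights = {"gesture_attributes": 0.5, "navigation_history": 0.2, "time_of_day": 0.1}
candidate = {"gesture_attributes": 0.9, "navigation_history": 0.4, "time_of_day": 0.5}
print(round(score_candidate(candidate, weights), 3))  # 0.58
```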
  • operation 304 may further include an operation 812 whose logic specifies the set of factors includes presentation device capabilities.
  • the logic of operation 812 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C .
  • Presentation device capabilities may include, for example, whether the device is connected to speakers or a network such as the Internet, the size, whether the device supports color, is a touch screen, and so forth.
  • operation 812 may further include operation 813 whose logic specifies the presentation device capabilities includes the size of the presentation device.
  • the logic of operation 813 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C .
  • Presentation device capabilities may include, for example, whether the device is connected to speakers or a network such as the Internet, the size of the device, whether the device supports color, is a touch screen, and so forth.
  • operation 812 may include an operation 814 whose logic specifies the presentation device capabilities includes whether text or audio is being presented.
  • the logic of operation 814 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C .
  • presentation device capabilities may include, for example, whether the device is connected to speakers or a network such as the Internet, the size of the device, whether the device supports color, is a touch screen, and so forth.
  • operation 304 may include an operation 815 whose logic specifies the set of factors includes prior device communication history.
  • the logic of operation 815 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C .
  • Prior device communication history may include aspects such as how often the computing system running the GBSS 110 has been connected to the Internet, whether multiple client devices are connected to it (sometimes, at all times, etc.), and how often the computing system is connected with various remote search capabilities.
  • operation 304 may include an operation 816 whose logic specifies the set of factors includes time of day.
  • the logic of operation 816 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine the time of day.
  • FIG. 8C is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 817 whose logic specifies the set of factors includes prior history associated with the user.
  • the logic of operation 817 may be performed, for example, by prior history determination module 232 provided by the factor determination module 113 of the GBSS 110 described with reference to FIGS. 2A and 2C to determine prior history that may be associated with (e.g., coincident with, related to, appropriate to, etc.) the user, for example, prior purchase, navigation, or search history or demographic information.
  • operation 817 may further include an operation 818 whose logic specifies the prior history associated with the user includes prior search history.
  • the logic of operation 818 may be performed, for example, by the search history determination module 235 provided by the prior history determination module 232 of the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of properties based upon the prior search history associated with the user. Factors such as what content the user has reviewed and looked for may be considered. Other factors may be considered as well.
  • operation 817 may include operation 819 whose logic specifies the prior history associated with the user includes prior navigation history.
  • the logic of operation 819 may be performed, for example, by the navigation history determination module 236 provided by the prior history determination module 232 of the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of criteria based upon the prior navigation history associated with the user. Factors such as what content the user has reviewed, for how long, and where the user has navigated to from that point may be considered. Other factors may be considered as well.
  • operation 817 may include operation 820 whose logic specifies the prior history associated with the user includes prior purchase history.
  • the logic of operation 820 may be performed, for example, by the prior purchase history determination module 234 of the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of factors based upon the prior purchase history associated with the user. Factors such as what products and/or services the user has bought or considered buying (determined, for example, by what the user has viewed) may be considered. Other factors may be considered as well.
  • operation 817 may include operation 821 whose logic specifies the prior history associated with the user includes demographic information associated with the user.
  • the logic of operation 821 may be performed, for example, by the demographic history determination module 233 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of criteria based upon the demographic history associated with the user. Factors such as age, gender, location, citizenship, and religious preferences (if specified) may be considered. Other factors may be considered as well.
  • operation 821 may further include operation 822 whose logic specifies the demographic information includes at least one of age, gender, and/or a location associated with the user and/or contact information associated with the user.
  • the logic of operation 822 may be performed, for example, by the demographic history determination module 233 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of criteria based upon the demographic history associated with the user including age, gender, or a location such as the user's residence information, country of citizenship, native language country, and the like.
  • FIG. 8D is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 824 whose logic specifies the set of factors includes a received selection from a context menu.
  • the logic of operation 824 may be performed, for example, by input handler 214 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • a context menu may be used, for example, to adjust or modify a gesture, to modify indicated content contained within the portion indicated by the gesture, to add information for a source input string such as additional keywords, or the like. Anything that can be indicated by a menu could be used as a factor to influence the source input.
  • a context menu includes, for example, any type of menu that can be presented and relates to some context.
  • a context menu may include pop-up menus, dialog boxes, pull-down menus, interest wheels, or any other shape of menu, rectangular or otherwise.
  • operation 824 may further include an operation 825 whose logic specifies the context menu includes a plurality of actions and/or entities derived from a set of rules used to convert one or more nouns that relate to the indicated portion into corresponding verbs.
  • the logic of operation 825 may be performed, for example, by the items determination module 212 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • the set of rules may include heuristics for developing verbs (actions) from nouns (entities) encompassed by the content by the gestured input, using for example, verbification, frequency calculations, or other techniques.
  • operation 825 may further include an operation 826 whose logic specifies the rules used to convert one or more nouns that relate to the indicated portion into corresponding verbs determine at least one of a set of most frequently occurring words in proximity to the indicated portion, a set of frequently occurring words in the electronic content, or a set of common verbs used with one or more entities encompassed by the indicated portion, and convert the words and/or verbs into actions and/or entities presented on the context menu.
  • the logic of operation 826 may be performed, for example, by the items determination module 212 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • the most frequently occurring "n" words in the presented electronic content may be counted and converted into verbs (actions), the "n" words occurring in proximity to the indicated portion (portion 25 ) of the presented electronic content may be used and/or converted into verbs (actions), or the most common words relative to some designated body of content may be used and/or converted into verbs (actions) and presented on the menu.
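  • A toy sketch of this frequency-based derivation follows; the NOUN_TO_VERB table stands in for whatever verbification heuristics an implementation might actually use.

```python
from collections import Counter
import re

# Hypothetical noun -> action mapping; a real system might use verbification
# heuristics or a lexical database instead of a hand-written table.
NOUN_TO_VERB = {"price": "compare prices", "book": "buy the book", "author": "follow the author"}

def derive_menu_actions(content: str, n: int = 3) -> list:
    """Turn the n most frequent known nouns in the content into menu actions."""
    words = re.findall(r"[a-z]+", content.lower())
    counts = Counter(w for w in words if w in NOUN_TO_VERB)
    return [NOUN_TO_VERB[w] for w, _ in counts.most_common(n)]

text = "The book lists each price; the author updated the price and the book twice."
print(derive_menu_actions(text))
# ['buy the book', 'compare prices', 'follow the author']
```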
  • operation 825 may include operation 827 whose logic specifies the context menu includes an action to find a better <entity>, wherein <entity> is an entity encompassed by the indicated portion of the presented electronic content.
  • the logic of operation 827 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • Rules for determining what is “better” may be context dependent such as, for example, brighter color, better quality photograph, more often purchased, or the like. Different heuristics may be programmed into the logic to thus derive a better entity.
  • operation 825 may include operation 828 whose logic specifies the context menu includes an action to share a better <entity>, wherein <entity> is an entity encompassed by the indicated portion of the presented electronic content.
  • the logic of operation 828 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D . Sharing (e.g., forwarding, emailing, posting, messaging, communicating, or the like) may be also enhanced by context determined by the indicated portion (portion 25 ) or the set of criteria (e.g., prior search or purchase history, type of gesture, or the like).
  • operation 825 may include operation 829 whose logic specifies the context menu includes an action to obtain information about an <entity>, wherein <entity> is an entity encompassed by the indicated portion of the presented electronic content.
  • the logic of operation 829 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • Obtaining information may suggest actions like “find more information,” “get details,” “find source,” “define,” or the like.
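  • Such actions could, for example, be instantiated from generic templates; the templates below are assumed for illustration only.

```python
ACTION_TEMPLATES = [
    "find a better {entity}",
    "share {entity}",
    "find more information about {entity}",
]

def build_context_menu(entity: str) -> list:
    """Instantiate generic action templates for an entity found in the gestured portion."""
    return [template.format(entity=entity) for template in ACTION_TEMPLATES]

print(build_context_menu("camera"))
# ['find a better camera', 'share camera', 'find more information about camera']
```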
  • FIG. 8E is an example flow diagram of example logic illustrating various example embodiments of block 825 of FIG. 8D .
  • the logic of operation 825 for the context menu includes a plurality of actions and/or entities derived from a set of rules used to convert one or more nouns that relate to the indicated portion into corresponding verbs may include an operation 830 whose logic specifies the context menu includes actions that specify some form of buying or shopping, sharing, and/or exploring or obtaining information.
  • the logic of operation 830 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D . For example, actions for "buy <entity>," "obtain more info on <entity>," or the like may be derived by this logic.
  • operation 825 may include an operation 831 whose logic specifies the context menu includes one or more comparative actions.
  • the logic of operation 831 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • comparative actions may include verb phrases such as “find me a better,” “find me a cheaper,” “ship me sooner,” or the like.
  • operation 831 may further include an operation 832 whose logic specifies the comparative actions of the context menu include at least one of an action to obtain an entity sooner, an action to purchase an entity sooner, or an action to find a better deal.
  • the logic of operation 832 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D .
  • Obtaining an entity sooner may include shipping sooner, subscribing faster, finishing quicker, or the like.
  • operation 825 may include an operation 833 whose logic specifies the context menu is presented as at least one of a pop-up menu, an interest wheel, a rectangular shaped user interface element, or a non-rectangular shaped user interface element.
  • the logic of operation 833 may be performed, for example, by the viewer module 216 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D .
  • Pop-up menus may be implemented, for example, using overlay windows, dialog boxes, and the like, and appear visible with a standard user interface typically from the point of a “cursor,” “pointer,” or other reference associated with the gesture.
  • Drop-down context menus may contain, for example, any number of actions and/or entities that are determined to be menu items. They appear visible with a standard user interface typically from the point of a “cursor,” “pointer,” or other reference associated with the gesture.
  • an interest wheel has menu items arranged in a pie shape. Rectangular menus may include pop-ups and pull-downs, although they may also be implemented in a non-rectangular fashion. Non-rectangular menus may include pop-ups, pull-downs, and interest wheels. They may also include other viewer controls.
  • FIG. 9 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 902 whose logic specifies disambiguating possible source input by presenting one or more indicators of possible source input and receiving a selected indicator to one of the presented one or more indicators of possible source input to determine the indication of source input for the search.
  • the logic of operation 902 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D .
  • Presenting the one or more indicators of possible source input allows a user 10 * to select which source input to use for a search, especially in the case where there is some sort of ambiguity.
  • operation 304 may further include an operation 903 whose logic specifies disambiguating possible source input by determining a default source input to be used for the search.
  • the logic of operation 903 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D .
  • the GBSS 110 may determine a default source input for a search (e.g., the most prominent entity in the indicated portion of the presented content) in the case of an ambiguous finding of source input.
  • operation 903 may further include an operation 904 whose logic specifies the default source input may be overridden by the user.
  • the logic of operation 904 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D .
  • the GBSS 110 allows the user 10 * to override a default source input in a variety of ways, including by specifying that no default content is to be presented. Overriding can take place as a configuration parameter of the system, upon the presentation of a set of possible selections of source input, or at other times.
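  • A minimal sketch of default selection with user override, assuming a pluggable prominence score, might look like this:

```python
from typing import Callable, List, Optional

def choose_source_input(
    candidates: List[str],
    prominence: Callable[[str], float],
    user_override: Optional[str] = None,
) -> Optional[str]:
    """Pick a default source input (the most prominent candidate) unless the user overrides.

    `prominence` is a stand-in for whatever scoring the disambiguation logic applies.
    """
    if user_override is not None:
        return user_override
    if not candidates:
        return None
    return max(candidates, key=prominence)

candidates = ["gesture", "search system", "patent"]
print(choose_source_input(candidates, prominence=len))                          # 'search system'
print(choose_source_input(candidates, prominence=len, user_override="patent"))  # 'patent'
```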
  • operation 304 may include an operation 905 whose logic specifies disambiguating possible source input utilizing syntactic and/or semantic rules to aid in determining the source input for the search.
  • the logic of operation 905 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D .
  • NLP-based mechanisms may be employed to determine what a user means by a gesture and hence what source input may be meaningful.
  • operation 304 may include an operation 906 whose logic specifies the search result content comprises content that corresponds to a plurality of source inputs.
  • the logic of operation 906 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D . Presenting multiple source inputs allows a user 10 * to select which source input to conduct the search upon.
  • FIG. 10 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3 .
  • the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 1002 whose logic specifies wherein the indicated source input is associated with a persistent state.
  • the logic of operation 1002 may be performed, for example, by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D by generating a representation of the source input in memory (e.g., memory 101 in FIG. 24 ), including a file, a link, or the like.
  • operation 1002 may further include an operation 1003 whose logic specifies the persistent state is a uniform resource identifier.
  • the logic of operation 1003 may be performed, for example, by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D by generating a representation of the source input as a uniform resource identifier (URI, or uniform resource locator, URL) that represents the source input.
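  • For illustration, the source input might be persisted as a URI along the following lines; the endpoint shown is a placeholder, not a real service.

```python
from urllib.parse import urlencode

def source_input_to_uri(terms: list, endpoint: str = "https://search.example.com/q") -> str:
    """Encode an inferred source input as a URI so it can be stored and replayed later.

    The endpoint is hypothetical; any URI scheme that round-trips the terms works.
    """
    return f"{endpoint}?{urlencode({'source': ' '.join(terms)})}"

print(source_input_to_uri(["gesture", "based", "search"]))
# https://search.example.com/q?source=gesture+based+search
```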
  • operation 304 may include an operation 1004 whose logic specifies the indicated source input is associated with a purchase.
  • the logic of operation 1004 may be performed, for example, by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D to associate (e.g., link to or with, indicate, etc.) the source input with a user's purchase.
  • the purchase may be obtainable from the prior purchase information identifiable by the purchase history determination module 234 of the prior history determination module 232 of the factor determination module 113 of the GBSS 110 .
  • FIG. 11A is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3 .
  • the logic of operation 306 for automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content may include an operation 1102 whose logic specifies wherein the designated body of electronic content is any page or object accessible over a network.
  • the logic of operation 1102 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A .
  • the designated body of electronic content may include, for example, a corpus of documents, a set of images, a movie, a group of sounds, or the like.
  • the indicated source input is used to search this designated body of content to obtain (e.g., derive, get, receive, pull down, or the like) search result contents.
  • the search itself may be performed by any appropriate search engine as described elsewhere including a remote tool connected via the network to the GBSS 110 .
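  • A sketch of such pluggable search initiation appears below; fake_keyword_engine is an invented stand-in for a remote engine reachable over the network, not an actual API.

```python
from typing import Callable, List

SearchEngine = Callable[[str], List[str]]  # query -> result identifiers

def fake_keyword_engine(query: str) -> List[str]:
    """Stand-in for a remote keyword engine (Bing, Google, Yahoo, ...); no network I/O."""
    return [f"result for '{query}' #{i}" for i in range(1, 3)]

def initiate_search(source_input: str, engine: SearchEngine) -> List[str]:
    """Automatically run the inferred source input against a designated engine."""
    return engine(source_input)

print(initiate_search("gesture based search", fake_keyword_engine))
```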
  • operation 1102 may further include an operation 1103 whose logic specifies the network is at least one of the Internet, a proprietary network, a wide area network, or a local area network.
  • the logic of operation 1103 may be performed, for example, by automated search module 114 of the GBSS 110 described with reference to FIG. 2A .
  • operation 306 may include an operation 1104 whose logic specifies the designated body of electronic content comprises at least one of web pages, computer code, electronic documents, and/or electronic versions of paper documents.
  • the logic of operation 1104 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A .
  • the designated body of electronic content may include, for example, web pages, computer code, electronic documents, and/or electronic versions of paper documents, or other types of content as described.
  • operation 306 may include an operation 1105 whose logic specifies the automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content further comprising automatically initiating a search of the designated body of electronic content using an off-the-shelf search engine.
  • the logic of operation 1105 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A .
  • the search may be performed by any appropriate search engine, for example, a remote tool connected via the network to the GBSS 110 such as an off-the-shelf search engine such as a keyword search engine like Bing, Google, or Yahoo, or an advertising system.
  • operation 306 may include an operation 1106 whose logic specifies the automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content further comprising automatically initiating a search of the designated body of electronic content using a keyword search engine.
  • the logic of operation 1106 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A .
  • the search may be performed by a keyword search engine, for example, a remote tool connected via the network to the GBSS 110 such as a keyword search engine like Bing, Google, or Yahoo, or an advertising system.
  • FIG. 11B is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3 .
  • the logic of operation 306 for automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content may include an operation 1107 whose logic specifies wherein the search result content includes an opportunity for commercialization.
  • the logic of operation 1107 may be performed, for example, by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the auxiliary content determination module 122 may be used to enhance, modify, substitute for, translate, or the like, output received from the search engine to determine auxiliary content.
  • the auxiliary content includes an indication of something that can be used for commercialization such as an advertisement, a web site that sells products, a bidding opportunity, a certificate, products, services, or the like.
  • operation 1107 may further include an operation 1108 whose logic specifies that the opportunity for commercialization is an advertisement.
  • the logic of operation 1108 may be performed, for example, by the advertisement determination module 202 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the advertisement may be a direct or indirect indication of an advertisement that is somehow supplemental to the content indicated by the indicated portion of the gesture, as referred to by the source input.
  • operation 1108 may further include an operation 1109 whose logic specifies that the advertisement is provided by at least one of: an entity separate from the entity that provided the presented electronic content; a competitor entity; and/or an entity associated with the presented electronic content.
  • the logic of operation 1109 may be performed, for example, by the advertisement determination module 202 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the entity separate from the entity that provided the presented electronic content may be, for example, a third party or a competitor entity whose content is accessible through third party auxiliary content 43 .
  • the entity associated with the presented electronic content may be, for example, GBSS 110 and the advertisement from the auxiliary content 40 . Advertisements may be supplied directly or indirectly as indicators to advertisements that can be served by server computing systems.
  • operation 1108 may include an operation 1110 whose logic specifies that the advertisement is selected from a plurality of advertisements.
  • the logic of operation 1110 may be performed, for example, by the advertisement determination module 202 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • a plurality of advertisements may be delivered (e.g., forwarded, sent, communicated, etc.) by a third party server, such as a third party advertising system, to the GBSS 110 for selection before being presented by the GBSS 110 .
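  • One assumed selection strategy, keyword overlap with the source input, is sketched below; any relevance measure could be substituted.

```python
from typing import Dict, List

def select_advertisement(ads: List[Dict], source_input: str) -> Dict:
    """Pick the delivered ad whose keywords best overlap the source input terms.

    Scoring by keyword overlap is an illustrative assumption, not a prescribed method.
    """
    terms = set(source_input.lower().split())
    return max(ads, key=lambda ad: len(terms & set(ad.get("keywords", []))))

ads = [
    {"id": "ad-1", "keywords": ["camera", "lens"]},
    {"id": "ad-2", "keywords": ["search", "gesture"]},
]
print(select_advertisement(ads, "gesture based search")["id"])  # ad-2
```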
  • operation 1108 may include an operation 1111 whose logic specifies that the advertisement is interactive entertainment.
  • the logic of operation 1111 may be performed, for example, by the interactive entertainment determination module 201 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the interactive entertainment may include, for example, a computer game, an on-line quiz show, a lottery, a movie to watch, and so forth.
  • operation 1108 may include an operation 1112 whose logic specifies that the advertisement is a role-playing game.
  • the logic of operation 1112 may be performed, for example, by the role playing game determination module 203 provided by the interactive entertainment determination module 201 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the role playing game may be a multi-player online role playing game (MMRPG) or a standalone, single or multi-player role playing game, or some other form of online, manual, or other role playing game.
  • operation 1108 may include an operation 1113 whose logic specifies that the advertisement is at least one of a computer-assisted competition and/or a bidding opportunity.
  • the logic of operation 1113 may be performed, for example, by the bidding determination module 206 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the bidding opportunity, for example, a competition or gambling event, etc., may be computer based, computer-assisted, and/or manual.
  • FIG. 11C is an example flow diagram of example logic illustrating various example embodiments of block 1108 of FIG. 11B .
  • the logic of operation 1108 wherein the opportunity for commercialization is an advertisement includes an operation 1114 whose logic specifies wherein the advertisement includes a purchase and/or an offer.
  • the logic of operation 1114 may be performed, for example, by the purchase and/or offer determination module 207 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the purchase or offer may take any form, for example, a book advertisement, or a web page, and may be for products and/or services.
  • operation 1114 may further include an operation 1115 whose logic specifies that the purchase and/or an offer is for at least one of: information, an item for sale, a service for offer and/or a service for sale, a prior purchase of the user, and/or a current purchase.
  • the logic of operation 1115 may be performed, for example, by the purchase and/or offer determination module 207 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • Any type of information, item, or service (online or offline, machine generated or human generated) can be offered and/or purchased in this manner. If human generated, the advertisement may refer to a computer representation of the human generated service, for example, a contract or a calendar entry, or the like.
  • operation 1114 may further include an operation 1116 whose logic specifies that the purchase and/or an offer is a purchase of an entity that is part of a social network of the user.
  • the logic of operation 1116 may be performed, for example, by the purchase and/or offer determination module 207 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the purchase may be related to (e.g., associated with, directed to, mentioned by, a contact directly or indirectly related to, etc.) someone that belongs to a social network associated with the user, for example through the one or more networks 30 .
  • FIG. 12 is an example flow diagram of example logic illustrating various example embodiments of block 308 of FIG. 3 .
  • the logic of operation 308 for presenting the search result content in conjunction with the corresponding presented electronic content may include an operation 1202 whose logic specifies wherein the search result includes supplemental information to the presented electronic content.
  • the logic of operation 1202 may be performed, for example, by the supplemental content determination module 204 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • the supplemental information may be of any nature, for example, an additional document or portion thereof, map, web page, advertisement, and so forth.
  • operation 308 may include an operation 1203 whose logic specifies that the search result is at least one of a web page, an electronic document, and/or an electronic version of a paper document.
  • the logic of operation 1203 may be performed, for example, by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E .
  • operation 308 may include an operation 1204 whose logic specifies that the search result content is presented as an overlay on top of the presented electronic content.
  • the logic of operation 1204 may be performed, for example, by the overlay presentation module 252 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F .
  • the overlay may be in any form including a pane, window, menu, dialog, frame, etc. and may partially or totally obscure the underlying presented content.
  • operation 1204 may further include an operation 1205 whose logic specifies that the overlay is made visible using animation techniques.
  • the logic of operation 1205 may be performed, for example, by the animation module 254 in conjunction with the overlay presentation module 252 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F .
  • the animation techniques may include leaving trailing foot print information for the user to see the animation, may be of varying speeds, involve different shapes, sounds, or the like.
  • operation 1204 may further include an operation 1206 whose logic specifies that the overlay is made visible by causing a pane to appear as though the pane is caused to slide from one side of the presentation device onto the presented electronic content.
  • the logic of operation 1206 may be performed, for example, by the animation module 254 in conjunction with the overlay presentation module 252 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F .
  • the pane may be a window, frame, popup, dialog box, or any other presentation construct that may be made gradually more visible as it is moved into the visible presentation area. Once there, the pane may obscure, not obscure, or partially obscure the other presented content.
  • operation 308 may include an operation 1207 whose logic specifies that the search result content is presented in an auxiliary window, pane, frame, or other auxiliary display construct.
  • the logic of operation 1207 may be performed, for example, by the auxiliary display generation module 256 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F .
  • the auxiliary display may be presented in an animated fashion, overlaid upon other content, or placed non-contiguously or juxtaposed to other content.
  • operation 308 may include an operation 1208 whose logic specifies that the search result content is presented in an auxiliary window juxtaposed to the presented electronic content.
  • the logic of operation 1208 may be performed, for example, by the auxiliary display generation module 256 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F .
  • the search result content may be presented in a separate window or frame to enable the user to see the original content alongside the auxiliary content (such as an advertisement).
  • FIG. 13A is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3 .
  • the logic of operation 302 for receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system may include an operation 1301 whose logic specifies wherein the input device is at least one of a mouse, a touch sensitive display, a wireless device, a human body part, a microphone, a stylus, and/or a pointer.
  • the logic of operation 1301 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect and resolve gesture input from, for example, devices 20 *.
  • FIG. 13B is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3 .
  • the logic of operation 302 for receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system may include an operation 1302 whose logic specifies wherein the user inputted gesture approximates a circle shape.
  • the logic of operation 1302 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates a circle shape.
  • operation 302 may include an operation 1303 whose logic specifies that the user inputted gesture approximates an oval shape.
  • the logic of operation 1303 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates an oval shape.
  • operation 302 may include an operation 1304 whose logic specifies that the user inputted gesture approximates a closed path.
  • the logic of operation 1304 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates a closed path of points and/or line segments.
  • operation 302 may include an operation 1305 whose logic specifies that the user inputted gesture approximates a polygon.
  • the logic of operation 1305 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates a polygon.
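  • By way of example, a closed-path test (a toy stand-in for the circle, oval, and polygon detection described above) might compare the gap between a stroke's endpoints to its overall size:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def approximates_closed_path(points: List[Point], tolerance: float = 0.15) -> bool:
    """True if the stroke ends near where it began, relative to its overall size."""
    if len(points) < 3:
        return False
    (x0, y0), (x1, y1) = points[0], points[-1]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    diameter = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return math.hypot(x1 - x0, y1 - y0) <= tolerance * diameter

# A rough circle sampled at 12 points, ending where it started.
circle = [(math.cos(t) * 50, math.sin(t) * 50)
          for t in (i * 2 * math.pi / 12 for i in range(13))]
print(approximates_closed_path(circle))  # True
```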
  • operation 302 may include an operation 1306 whose logic specifies that the user inputted gesture is an audio gesture.
  • the logic of operation 1306 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is an audio gesture, such as received via audio device, microphone 20 b.
  • operation 1306 may further include an operation 1307 whose logic specifies that the audio gesture is a spoken word or phrase.
  • the logic of operation 1307 may be performed, for example, by the audio handling module 222 provided by the gesture input detection and resolution module 121 in conjunction with the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received audio gesture, such as received via audio device, microphone 20 b , indicates (e.g., designates or otherwise selects) a word or phrase indicating some portion of the presented content.
  • operation 1306 may include an operation 1308 whose logic specifies that the audio gesture is a direction.
  • the logic of operation 1308 may be performed, for example, by the audio handling module 222 provided by the gesture input detection and resolution module 121 in conjunction with the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect a direction received from an audio input device, such as audio input device 20 b .
  • the direction may be a single letter, number, word, phrase, or any type of instruction or indication of where to move a cursor or locator device.
  • operation 1306 may include an operation 1309 whose logic specifies that the audio gesture is at least one of a mouse, a touch sensitive display, a wireless device, a human body part, a microphone, a stylus, and/or a pointer.
  • the logic of operation 1309 may be performed, for example, by the audio handling module 222 provided by the gesture input detection and resolution module 121 in conjunction with the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect and resolve audio gesture input from, for example, devices 20 *.
  • FIG. 13C is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3 .
  • the logic of operation 302 for receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system may include an operation 1310 whose logic specifies wherein the presentation device is at least one of a browser, a mobile device, a hand-held device, embedded as part of the computing system, a remote display associated with the computing system, a speaker, or a Braille printer.
  • the logic of operation 1310 may be performed, for example, by the specific device handlers 258 of the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F .
  • operation 302 may include an operation 1311 whose logic specifies that the presented electronic content is at least one of code, a web page, an electronic document, an electronic version of a paper document, an image, a video, an audio and/or any combination thereof.
  • the logic of operation 1311 may be performed, for example, by one or more modules of the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B .
  • operation 302 may include an operation 1312 whose logic specifies that the computing system comprises at least one of a computer, notebook, tablet, wireless device, cellular phone, mobile device, hand-held device, and/or wired device.
  • the logic of operation 1312 may be performed, for example, by the specific device handlers 125 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B .
  • FIG. 14 is an example flow diagram of example logic illustrating various example embodiments of blocks 302 to 308 of FIG. 3 .
  • the logic of the operations 302 to 310 may further include logic 1402 that specifies that the entire method is performed by a client.
  • a client may be hardware, software, or firmware, physical or virtual, and may be part or the whole of a computing system.
  • a client may be an application or a device.
  • the logic of the operations 302 to 310 may further include logic 1403 that specifies that the entire method is performed by a server.
  • a server may be hardware, software, or firmware, physical or virtual, and may be part or the whole of a computing system.
  • a server may be a service as well as a system.
  • FIG. 15 is an example block diagram of a computing system for practicing embodiments of a Gesture Based Search System as described herein. Note that a general purpose or a special purpose computing system suitably instructed may be used to implement a GBSS, such as GBSS 110 of FIG. 1D .
  • the GBSS may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein.
  • the computing system 100 may comprise one or more server and/or client computing systems and may span distributed locations.
  • each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks.
  • the various blocks of the GBSS 110 may physically reside on one or more machines, which use standard (e.g., TCP/IP) or proprietary interprocess communication mechanisms to communicate with each other.
  • computer system 100 comprises a computer memory (“memory”) 101 , a display 1502 , one or more Central Processing Units (“CPU”) 1503 , Input/Output devices 1504 (e.g., keyboard, mouse, CRT or LCD display, etc.), other computer-readable media 1505 , and one or more network connections 1506 .
  • the GBSS 110 is shown residing in memory 101. In other embodiments, some portion of the contents and some or all of the components of the GBSS 110 may be stored on and/or transmitted over the other computer-readable media 1505.
  • the components of the GBSS 110 preferably execute on one or more CPUs 1503 and manage providing automatic navigation to auxiliary content, as described herein.
  • code or programs 1530 and potentially other data stores also reside in the memory 101 , and preferably execute on one or more CPUs 1503 .
  • one or more data repositories, such as data repository 1520, also reside in the memory 101.
  • one or more of the components in FIG. 15 may not be present in any specific implementation.
  • some embodiments embedded in other software may not provide means for user input or display.
  • the GBSS 110 includes one or more input modules 111 , one or more source input determination modules 112 , one or more factor determination modules 113 , one or more automated search modules 114 , and one or more presentation modules 115 .
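  • As a non-authoritative illustration of this decomposition, the following Python sketch wires the five modules together as plain callables. All names (GBSS, resolve_gesture, determine_factors, etc.) are hypothetical stand-ins for modules 111-115 and are not taken from the patent itself.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class GBSS:
            resolve_gesture: Callable    # cf. input module 111
            determine_factors: Callable  # cf. factor determination module 113
            determine_source: Callable   # cf. source input determination module 112
            run_search: Callable         # cf. automated search module 114
            present: Callable            # cf. presentation module 115

            def handle(self, gesture, content, user):
                # Resolve the gesture to an indicated portion, gather factors,
                # determine source input, search, and present the result.
                portion = self.resolve_gesture(gesture, content)
                factors = self.determine_factors(user, content, gesture)
                query = self.determine_source(portion, factors)
                return self.present(self.run_search(query), content)

        # Toy usage with trivial stand-in callables:
        gbss = GBSS(lambda g, c: g,
                    lambda u, c, g: {},
                    lambda p, f: p,
                    lambda q: f"results({q})",
                    lambda r, c: r)
        print(gbss.handle("Obama", "page", "user"))  # results(Obama)
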
  • some data is provided external to the GBSS 110 and is available, potentially, over one or more networks 30 .
  • Other and/or different modules may be implemented.
  • the GBSS 110 may interact via a network 30 with application or client code 1555 that can absorb search results, for example, for other purposes, one or more client computing systems or client devices 20 *, and/or one or more third-party content provider systems 1565 , such as third party advertising systems or other purveyors of auxiliary content.
  • the history data repository 1515 may be provided external to the GBSS 110 as well, for example in a knowledge base accessible over one or more networks 30 .
  • components/modules of the GBSS 110 are implemented using standard programming techniques.
  • a range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented (e.g., Java, C++, C#, Smalltalk, etc.), functional (e.g., ML, Lisp, Scheme, etc.), procedural (e.g., C, Pascal, Ada, Modula, etc.), scripting (e.g., Perl, Ruby, Python, JavaScript, VBScript, etc.), declarative (e.g., SQL, Prolog, etc.), etc.
  • the embodiments described above may also use well-known or proprietary synchronous or asynchronous client-server computing techniques.
  • the various components may be implemented using more monolithic programming techniques as well, for example, as an executable running on a single CPU computer system, or alternately decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, running on one or more computer systems each having one or more CPUs.
  • Some embodiments are illustrated as executing concurrently and asynchronously and communicating using message passing techniques. Equivalent synchronous embodiments are also supported by a GBSS implementation.
  • programming interfaces to the data stored as part of the GBSS 110 can be made available by standard means such as through C, C++, C#, Visual Basic.NET and Java APIs; libraries for accessing files, databases, or other data repositories; through markup languages such as XML; or through Web servers, FTP servers, or other types of servers providing access to stored data.
  • the repositories 1515 and 41 may be implemented as one or more database systems, file systems, or any other method known in the art for storing such information, or any combination of the above, including implementation using distributed computing techniques.
  • the example GBSS 110 may be implemented in a distributed environment comprising multiple, even heterogeneous, computer systems and networks. Different configurations and locations of programs and data are contemplated for use with techniques described herein.
  • the server and/or client components may be physical or virtual computing systems and may reside on the same physical system.
  • one or more of the modules may themselves be distributed, pooled or otherwise grouped, such as for load balancing, reliability or security reasons.
  • a variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner, including but not limited to TCP/IP sockets, RPC, RMI, HTTP, and Web Services (XML-RPC, JAX-RPC, SOAP, etc.). Other variations are possible.
  • other functionality could be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve the functions of a GBSS.
  • some or all of the components of the GBSS 110 may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers executing appropriate instructions, and including microcontrollers and/or embedded controllers, field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and the like.
  • system components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium (e.g., a hard disk; memory; network; other computer-readable medium; or other portable media article to be read by an appropriate drive or via an appropriate connection, such as a DVD or flash memory device) to enable the computer-readable medium to execute or otherwise use or provide the contents to perform at least some of the described techniques.
  • Some or all of the components and/or data structures may be stored on tangible, non-transitory storage mediums.
  • system components and data structures may also be stored as data signals (e.g., by being encoded as part of a carrier wave or included as part of an analog or digital propagated signal) on a variety of computer-readable transmission mediums, which are then transmitted, including across wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames).
  • Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of this disclosure may be practiced with other computer system configurations.
  • the methods and systems for performing automatic navigation to auxiliary content discussed herein are applicable to architectures other than a windowed or client-server architecture.
  • the methods and systems discussed herein are applicable to differing protocols, communication media (optical, wireless, cable, etc.) and devices (such as wireless handsets, electronic organizers, personal digital assistants, tablets, portable email machines, game machines, pagers, navigation devices such as GPS receivers, etc.).

Abstract

Methods, systems, and techniques for automatically initiating a search to present auxiliary content in a gesture based input system are provided. Example embodiments provide a Gesture Based Search System (GBSS), which enables a gesture-based user interface to invoke (e.g., execute, generate, initiate, perform, or cause to be executed, generated, initiated, performed, or the like) a search related to a portion of electronic input that has been indicated by a received gesture. In overview, the GBSS allows a portion (e.g., an area, part, or the like) of electronically presented content to be dynamically indicated by a gesture. The GBSS then examines the indicated portion in conjunction with a set of (e.g., one or more) factors to determine input to a search. The search is then automatically initiated with the determined source input. Once search result content is determined, the result content is then presented to the user.

Description

    RELATED APPLICATIONS
  • For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of U.S. patent application Ser. No. 13/251,046, entitled GESTURELET BASED NAVIGATION TO AUXILIARY CONTENT, naming Matthew Dyor, Royce Levien, Richard T. Lord, Robert W. Lord, Mark Malamud as inventors, filed 30 Sep. 2011, which is currently co-pending, or is an application of which a currently co-pending application is entitled to the benefit of the filing date.
  • For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of U.S. patent application Ser. No. 13/269,466, entitled PERSISTENT GESTURELETS, naming Matthew Dyor, Royce Levien, Richard T. Lord, Robert W. Lord, Mark Malamud as inventors, filed 7 Oct. 2011, which is currently co-pending, or is an application of which a currently co-pending application is entitled to the benefit of the filing date.
  • For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of U.S. patent application Ser. No. 13/278,680, entitled GESTURE BASED CONTEXT MENUS, naming Matthew Dyor, Royce Levien, Richard T. Lord, Robert W. Lord, Mark Malamud as inventors, filed 21 Oct. 2011, which is currently co-pending, or is an application of which a currently co-pending application is entitled to the benefit of the filing date.
  • For purposes of the USPTO extra-statutory requirements, the present application constitutes a continuation-in-part of U.S. patent application Ser. No. ______ (Attorney Docket No. 1010-003-004-000000), entitled GESTURE BASED NAVIGATION SYSTEM, naming Matthew Dyor, Royce Levien, Richard T. Lord, Robert W. Lord, Mark Malamud as inventors, filed 28 Oct. 2011, which is currently co-pending, or is an application of which a currently co-pending application is entitled to the benefit of the filing date.
  • TECHNICAL FIELD
  • The present disclosure relates to methods, techniques, and systems for providing a gesture-based search system and, in particular, to methods, techniques, and systems for automatically initiating a search based upon gestured input.
  • CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is related to and claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the “Related Applications”) (e.g., claims earliest available priority dates for other than provisional patent applications or claims benefits under 35 USC §119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Related Application(s)). All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.
  • BACKGROUND
  • As massive amounts of information continue to become progressively more available to users connected via a network, such as the Internet, a company intranet, or a proprietary network, it is becoming increasingly difficult for a user to find particular information that is relevant, such as for a task, information discovery, or for some other purpose. Typically, a user invokes one or more search engines and provides them with keywords that are meant to cause the search engine to return results that are relevant because they contain the same or similar keywords to the ones submitted by the user. Often, the user iterates using this process until he or she believes that the results returned are sufficiently close to what is desired. The better the user understands what he or she is looking for, the more relevant the results often are. Thus, such tools can often be frustrating when employed for information discovery where the user may or may not know much about the topic at hand.
  • Different search engines and search technology have been developed to increase the precision and correctness of search results returned, including arming such tools with the ability to add useful additional search terms (e.g., synonyms), rephrase queries, and take into account document related information such as whether a user-specified keyword appears in a particular position in a document. In addition, search engines that utilize natural language processing capabilities have been developed.
  • In addition, it has become increasingly difficult for a user to navigate the information and remember what information was visited, even if the user knows what he or she is looking for. Although bookmarks available in some client applications (such as a web browser) provide an easy way for a user to return to a known location (e.g., web page), they do not provide a dynamic memory that assists a user in going from one display or document to another, and then to another. Some applications provide "hyperlinks," which are cross-references to other information, typically a document or a portion of a document. These hyperlink cross-references are typically selectable, and when selected by a user (such as by using an input device such as a mouse, pointer, pen device, etc.), result in the other information being displayed to the user. For example, a user running a web browser that communicates via the World Wide Web network may select a hyperlink displayed on a web page to navigate to another page encoded by the hyperlink. Hyperlinks are typically placed into a document by the document author or creator, and, in any case, are embedded into the electronic representation of the document. When the location of the other information changes, the hyperlink is "broken" until it is updated and/or replaced. In some systems, users can also create such links in a document, which are then stored as part of the document representation.
  • Even with these advancements, searching and navigating the morass of information is oftentimes still a frustrating user experience.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A is a screen display of example gesture based input performed by an example Gesture Based Search System (GBSS) or process.
  • FIG. 1B is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process.
  • FIG. 1C is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process.
  • FIG. 1D is a block diagram of an example environment for performing searches using an example Gesture Based Search System (GBSS) or process.
  • FIG. 2A is an example block diagram of components of an example Gesture Based Search System.
  • FIG. 2B is an example block diagram of further components of the Input Module of an example Gesture Based Search System.
  • FIG. 2C is an example block diagram of further components of the Factor Determination Module of an example Gesture Based Search System.
  • FIG. 2D is an example block diagram of further components of the Source Input Determination Module of an example Gesture Based Search System.
  • FIG. 2E is an example block diagram of further components of the Auxiliary Content Determination Module of an example Gesture Based Search System.
  • FIG. 2F is an example block diagram of further components of the Presentation Module of an example Gesture Based Search System.
  • FIG. 3 is an example flow diagram of example logic for providing a gesture based search for auxiliary content.
  • FIG. 4 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 5 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 6 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 7 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 8A is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 8B is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 8C is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 8D is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 8E is an example flow diagram of example logic illustrating various example embodiments of block 825 of FIG. 8C.
  • FIG. 9 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 10 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3.
  • FIG. 11A is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3.
  • FIG. 11B is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3.
  • FIG. 11C is an example flow diagram of example logic illustrating various example embodiments of block 1108 of FIG. 11B.
  • FIG. 12 is an example flow diagram of example logic illustrating various example embodiments of block 308 of FIG. 3.
  • FIG. 13A is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3.
  • FIG. 13B is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3.
  • FIG. 13C is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3.
  • FIG. 14 is an example flow diagram of example logic illustrating various example embodiments of blocks 302-308 of FIG. 3.
  • FIG. 15 is an example block diagram of a computing system for practicing embodiments of a Gesture Based Search System.
  • DETAILED DESCRIPTION
  • Embodiments described herein provide enhanced computer- and network-based methods, techniques, and systems for automatically initiating a search to present auxiliary content in a gesture based input system. Example embodiments provide a Gesture Based Search System (GBSS), which enables a gesture-based user interface to invoke (e.g., execute, generate, initiate, perform, or cause to be executed, generated, initiated, performed, or the like) a search related to a portion of electronic input that has been indicated by a received gesture.
  • In overview, the GBSS allows a portion (e.g., an area, part, or the like) of electronically presented content to be dynamically indicated by a gesture. The gesture may be provided in the form of some type of pointer, for example, a mouse, a touch sensitive display, a wireless device, a human body part, a microphone, a stylus, and/or a pointer that indicates a word, phrase, icon, image, or video, or may be provided in audio form. The GBSS then examines the indicated portion in conjunction with a set of (e.g., one or more) factors to determine input to a search. The search is then automatically initiated with the determined source input. The search may be provided, for example, by a third party search engine, a proprietary search engine, an off-the-shelf search engine or the like, communicatively coupled to the GBSS, and the source input is provided in a corresponding appropriate format. Once search result content is determined, the result content is then presented to the user.
  • The input for the search is based upon content contained in the portion of the presented electronic content indicated by the gestured input, as well as possibly one or more of a set of factors. Content may include, for example, a word, phrase, spoken utterance, image, video, pattern, and/or other audio signal. Also, the portion may be formed from contiguous parts or composed of separate non-contiguous parts, for example, a title with a disconnected sentence. In addition, the indicated portion may represent the entire body of electronic content presented to the user. For the purposes described herein, the electronic content may comprise any type of content that can be presented for gestured input, including, for example, text, a document, music, a video, an image, a sound, or the like.
  • As stated, the GBSS may incorporate information from a set of factors (e.g., criteria, state, influencers, things, features, and the like) in addition to the content contained in the indicated portion. The set of factors that may influence what is input to the search (e.g., source input) may include such things as context surrounding or otherwise relating to the indicated portion (as indicated by the gesture), such as other text, audio, graphics, and/or objects within the presented electronic content; some attribute of the gesture itself, such as size, direction, color, how the gesture is steered (e.g., smudged, nudged, adjusted, and the like); presentation device capabilities, for example, the size of the presentation device, whether text or audio is being presented; prior device communication history, such as what other devices have recently been used by this user or to which other devices the user has been connected; time of day; and/or prior history associated with the user, such as prior search history, navigation history, purchase history, and/or demographic information (e.g., age, gender, location, contact information, or the like). In addition, information from a context menu, such as a selection of a menu item by the user, may be used to assist the GBSS in determining what input to use for the search.
  • Once the source input is determined, the GBSS automatically initiates a search to obtain search result content. The search result content is "auxiliary" (additional, supplemental, other, etc.) content in that it is additional to what is currently presented to the user as the presented electronic content. This auxiliary content is then presented to the user in conjunction with the presented electronic content by, for example, use of an overlay; in a separate presentation element (e.g., window, pane, frame, or other construct) such as a window juxtaposed (e.g., next to, contiguous with, nearly up against) to the presented electronic content; and/or, as an animation, for example, a pane that slides in to partially or totally obscure the presented electronic content. Other methods of presenting the search results are contemplated.
  • The search result content, e.g., the auxiliary content, may be anything, including, for example, a web page, computer code, electronic document, electronic version of a paper document, a purchase or an offer to purchase a product or service, social networking content, and/or the like.
  • FIG. 1A is a screen display of example gesture based input performed by an example Gesture Based Search System (GBSS) or process. In FIG. 1A, a presentation device, such as computer display screen 001, is shown presenting two windows with electronic content, window 002 and window 003. The user (not shown) utilizes an input device, such as mouse 20 a and/or a microphone 20 b, to indicate a gesture (e.g., gesture 005) to the GBSS. The GBSS, as will be described in detail elsewhere herein, determines to which portion of the electronic content displayed in window 002 the gesture 005 corresponds, potentially including what type of gesture. In the example illustrated, gesture 005 was created using the mouse device 20 a and represents a closed path (shown in red), not quite a circle or oval, that indicates that the user is interested in the entity "Obama." The gesture may be a circle, oval, closed path, polygon, or essentially any other shape recognizable by the GBSS. The gesture may indicate content that is contiguous or non-contiguous. Audio may also be used to indicate some area of the presented content, such as by using a spoken word, phrase, and/or direction (e.g., command, order, directional command, or the like). Other embodiments provide additional ways to indicate input by means of a gesture. The GBSS can be fitted to incorporate any technique for providing a gesture that indicates some area or portion (including any or all) of presented content. The GBSS has highlighted the text 007 to which gesture 005 is determined to correspond.
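  • One plausible way (not prescribed by the patent) to resolve a closed-path gesture such as gesture 005 to the words it encloses is to hit-test each word's on-screen bounding box against the gesture polygon. The following Python sketch uses a standard ray-casting point-in-polygon test; the word boxes and coordinates are illustrative assumptions.

        def point_in_polygon(x, y, polygon):
            """Ray-casting test; polygon is a list of (x, y) vertices."""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                        inside = not inside
            return inside

        def words_indicated(gesture_path, word_boxes):
            """word_boxes maps a word to its (left, top, right, bottom) screen box."""
            hits = []
            for word, (l, t, r, b) in word_boxes.items():
                cx, cy = (l + r) / 2, (t + b) / 2  # test the word's center point
                if point_in_polygon(cx, cy, gesture_path):
                    hits.append(word)
            return hits

        # Example: a rough closed path drawn around the word "Obama".
        boxes = {"President": (10, 10, 80, 24), "Obama": (85, 10, 130, 24)}
        path = [(82, 5), (135, 5), (135, 30), (82, 30)]
        print(words_indicated(path, boxes))  # ['Obama']
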
  • In the example illustrated, the GBSS determines from the indicated portion (the text “Obama”) and one or more factors, such as the user's prior navigation history, that the user is interested in more detailed information regarding the indicated portion. In this case, the user has been known to employ “Wikipedia” for obtaining detailed information about entities. Thus, the GBSS initiates a search on the entity Obama along with an indication that results from Wikipedia as a source are preferred. In this case, any search engine can be employed, such as a keyword search engine like Bing, Google, Yahoo, and the like.
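  • A hedged sketch of the heuristic just described: prior navigation history biases the search toward a preferred source such as Wikipedia. The "site:" operator and the visit-counting heuristic are assumptions for illustration; the patent does not specify how such a preference is encoded for any given search engine.

        def preferred_source(navigation_history, candidates=("wikipedia.org",)):
            """Count visits to candidate reference sites; return the most visited, if any."""
            counts = {}
            for url in navigation_history:
                for site in candidates:
                    if site in url:
                        counts[site] = counts.get(site, 0) + 1
            return max(counts, key=counts.get) if counts else None

        def build_query(indicated_text, navigation_history):
            # Bias the query toward the user's preferred source when one is evident.
            source = preferred_source(navigation_history)
            return f"{indicated_text} site:{source}" if source else indicated_text

        history = ["https://en.wikipedia.org/wiki/Election",
                   "https://en.wikipedia.org/wiki/Senate",
                   "https://example.com/news"]
        print(build_query("Obama", history))  # Obama site:wikipedia.org
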
  • FIG. 1B is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process. In this example, the auxiliary content is the resultant web page 006 on the entity “Obama” from Wikipedia. This content is shown as an overlay over one of the windows 003 on the presentation device 001. The user could continue searching using gestures from here to find more detailed information on Obama, for example, by indicating by a gesture an additional entity or action that the user desires information on.
  • For the purposes of this description, an “entity” is any person, place, or thing, or a representative of the same, such as by an icon, image, video, utterance, etc. An “action” is something that can be performed, for example, as represented by a verb, an icon, an utterance, or the like.
  • Suppose, on the other hand, the GBSS determined from FIG. 1A that the user tended to use the computer for purchases. The GBSS may surmise this, as one of the factors for choosing source input, by looking at the user's prior navigation history, purchase history, or the like. In this case, the GBSS sends an indication to the search engine that an opportunity for commercialization, such as an advertisement, is desirable.
  • FIG. 1C is a screen display of an example gesture based auxiliary content produced by an automatic search performed by an example Gesture Based Search System or process. In this example, an advertisement for a book 013 on the entity "Obama" (the gestured indicated portion) is presented alongside the gestured input 005 on window 002. The user could next use the gestural input system to select the advertisement for the book on "Obama" to create a purchase opportunity.
  • In FIG. 1C, the advertisement is shown as an overlay over both windows 002 and 003 on the presentation device 001. In other examples, the auxiliary content may be displayed in a separate pane, window, frame, or other construct. In some examples, the auxiliary content is brought into view in an animated fashion from one side of the screen and partially overlaid on top of the presented electronic content that the user is viewing. For example, the auxiliary content may appear to “move into place” from one side of a presentation device. In other examples, the auxiliary content may be placed in another window, pane, frame, or the like, which may or may not be juxtaposed, overlaid, or just placed in conjunction with to the initial presented content. Other arrangements are of course contemplated.
  • In some embodiments, the GBSS may interact with one or more remote and/or third party systems to present auxiliary content. For example, to achieve the presentation illustrated in FIG. 1C, the GBSS may invoke a third party advertising supplier system to cause it to serve (e.g., deliver, forward, send, communicate, etc.) an appropriate advertisement oriented to other factors related to the user, such as gender, age, location, etc.
  • FIG. 1D is a block diagram of an example environment for performing searches using an example Gesture Based Search System (GBSS) or process. One or more users 10 a, 10 b, etc. communicate to the GBSS 110 through one or more networks, for example, wireless and/or wired network 30, by indicating gestures using one or more input devices, for example a mobile device 20 a, an audio device such as a microphone 20 b, or a pointer device such as mouse 20 c or the stylus on tablet device 20 d (or, for example, any other input device, such as a keyboard of a computer device or a human body part, not shown). For the purposes of this description, the nomenclature "*" indicates a wildcard (substitutable letter(s)). Thus, device 20* may indicate a device 20 a or a device 20 b. The one or more networks 30 may be any type of communications link, including for example, a local area network or a wide area network such as the Internet.
  • Search input (source input) is typically generated (e.g., defined, produced, instantiated, created, etc.) "on-the-fly" as a user indicates, by means of a gesture, what portion of the presented content is interesting and that a search is desired. Many different mechanisms for causing a search to be initiated and result content to be presented can be accommodated, for example, a "single-click" of a mouse button following the gesture, a command via an audio input device such as microphone 20 b, a secondary gesture, etc. Or in some cases, the search is initiated automatically as a direct result of the gesture—without additional input—for example, as soon as the GBSS determines the gesture is complete.
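  • The trigger policy just described might be realized with a small event loop such as the following Python sketch, in which a completed gesture either fires the search immediately or arms it for a confirming event. The event names and the AUTO_TRIGGER flag are hypothetical, not taken from the patent.

        AUTO_TRIGGER = True  # fire the search as soon as the gesture completes

        def on_event(event, state, initiate_search):
            if event["type"] == "gesture_complete":
                state["pending_portion"] = event["portion"]
                if AUTO_TRIGGER:
                    initiate_search(state.pop("pending_portion"))
            elif (event["type"] in ("single_click", "voice_command")
                  and "pending_portion" in state):
                # Explicit confirmation fires the previously armed search.
                initiate_search(state.pop("pending_portion"))

        state = {}
        on_event({"type": "gesture_complete", "portion": "Obama"}, state, print)
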
  • For example, once the user has provided gestured input, the GBSS 110 will determine to what portion the gesture corresponds. In some embodiments, the GBSS 110 may take into account other factors in addition to the indicated portion of the presented content in order to determine what source input to use for the search, as explained above. The GBSS 110 determines the indicated portion 25 to which the gesture-based input corresponds, and then, based upon the indicated portion 25 and possibly a set of factors 50 (and, in the case of a context menu, a set of action/entity rules 51), determines search input. Then, once the search is initiated and the auxiliary content obtained, the GBSS 110 presents the auxiliary content.
  • The set of factors (e.g., criteria) 50 may be dynamically determined, predetermined, local to the GBSS 110, or stored or supplied externally from the GBSS 110 as described elsewhere. This set of factors may include a variety of aspects, including, for example: context of the indicated portion of the presented content, such as other words, symbols, and/or graphics nearby the indicated portion, the location of the indicated portion in the presented content, syntactic and semantic considerations, etc.; attributes of the user, for example, prior search, purchase, and/or navigation history, demographic information, and the like; attributes of the gesture, for example, direction, size, shape, color, steering, and the like; and other criteria, whether currently defined or defined in the future. In this manner, the GBSS 110 allows searching to become “personalized” to the user as much as the system is tuned.
  • As explained with reference to FIGS. 1A-1C, the determined source input is then used in an automatically initiated search to obtain auxiliary content. The auxiliary content may be stored local to the GBSS 110, for example, in auxiliary content data repository 40 associated with a computing system running the GBSS 110, or may be stored or available externally, for example, from another computing system 42, from third party content 43 (e.g., a 3rd party advertising system, external content, a social network, etc.), from auxiliary content stored using cloud storage 44, from another device 45 (such as from a settop box, A/V component, etc.), from a mobile device connected directly or indirectly with the user (e.g., from a device associated with a social network associated with the user, etc.), and/or from other devices or systems not illustrated. Third party content 43 is demonstrated as being communicatively connected to both the GBSS 110 directly and/or through the one or more networks 30. Although not shown, various of the devices and/or systems 42-46 also may be communicatively connected to the GBSS 110 directly or indirectly. The auxiliary content may be any type of content and, for example, may include another document, an image, an audio snippet, an audio visual presentation, an advertisement, an opportunity for commercialization such as a bid, a product offer, a service offer, or a competition, and the like. Once the GBSS 110 obtains the auxiliary content to present, the GBSS 110 causes the auxiliary content to be presented on a presentation device (e.g., presentation device 20 d) associated with the user.
  • The GBSS 110 illustrated in FIG. 1D may be executing (e.g., running, invoked, instantiated, or the like) on a client or on a server device or computing system. For example, a client application (e.g., a web application, web browser, other application, etc.) may be executing on one of the presentation devices, such as tablet 20 d. In some embodiments, some portion or all of the GBSS 110 components may be executing as part of the client application (for example, downloaded as a plug-in, active-x component, run as a script or as part of a monolithic application, etc.). In other embodiments, some portion or all of the GBSS 110 components may be executing as a server (e.g., server application, server computing system, software as a service, etc.) remotely from the client input and/or presentation devices 20 a-d.
  • FIG. 2A is an example block diagram of components of an example Gesture Based Search System. In example GBSSes such as GBSS 110 of FIG. 1D, the GBSS comprises one or more functional components/modules that work together to provide automatically initiated searches based upon gestured input. For example, a Gesture Based Search System 110 may reside in (e.g., execute thereupon, be stored in, operate with, etc.) a computing device 100 programmed with logic to effectuate the purposes of the GBSS 110. As mentioned, a GBSS 110 may be executed client side or server side. For ease of description, the GBSS 110 is described as though it is operating as a server. It is to be understood that equivalent client side modules can be implemented. Moreover, such client side modules need not operate in a client-server environment, as the GBSS 110 may be practiced in a standalone environment or even embedded into another apparatus. Moreover, the GBSS 110 may be implemented in hardware, software, or firmware, or in some combination. In addition, although auxiliary content is typically presented on a client presentation device such as devices 20*, the content may be implemented server-side or some combination of both. Details of the computing device/system 100 are described below with reference to FIG. 15.
  • In an example system, a GBSS 110 comprises an input module 111, a source (search) input determination module 112, a factor determination module 113, an automated search module 114, and a presentation module 115. In some embodiments the GBSS 110 comprises additional and/or different modules as described further below.
  • Input module 111 is configured and responsible for determining the gesture and an indication of an area (e.g., a portion) of the presented electronic content indicated by the gesture. In some example systems, the input module 111 comprises a gesture input detection and resolution module 121 to aid in this process. The gesture input detection and resolution module 121 is responsible for determining, using different techniques (for example, pattern matching, parsing, heuristics, etc.), to what area a gesture corresponds and what word, phrase, image, audio clip, etc. is indicated.
  • Source input determination module 112 is configured and responsible for determining the input to be used as source for a search. As explained, this determination may be based upon the context—the portion indicated by the gesture and potentially a set of factors (e.g., criteria, properties, aspects, or the like) that help to define context. The source input determination module 112 may invoke the factor determination module 113 to determine the one or more factors to use to assist in defining the source input for the search. The factor determination module 113 may comprise a variety of implementations corresponding to different types of factors, for example, modules for determining prior history associated with the user, current context, gesture attributes, system attributes, or the like.
  • In some cases, for example, when the portion of content indicated by the gesture is ambiguous or not clear by the indicated portion itself, the source input determination module 112 may utilize a disambiguation module 123 to help disambiguate the indicated portion of content. For example, if a gesture has indicated the word “Bill,” the disambiguation module 123 may help distinguish whether the user is likely interested in a person whose name is Bill or a legislative proposal. In addition, based upon the indicated portion of content and the set of factors more than one source input may be identified. If this is the case, then the source input determination module 112 may use the disambiguation module 123 and other logic to select a source input for a search.
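  • In the spirit of the "Bill" example, a minimal disambiguation sketch might score candidate senses by their overlap with surrounding context words. The sense inventory and cue lists below are illustrative assumptions, not part of the patent.

        SENSES = {
            "bill": {
                "person": {"mr", "mrs", "said", "met", "born"},
                "legislation": {"senate", "vote", "passed", "congress", "law"},
            }
        }

        def disambiguate(term, context_words, default="person"):
            """Pick the sense whose cue words best overlap the context; fall back to a default."""
            senses = SENSES.get(term.lower())
            if not senses:
                return None
            scores = {sense: len(cues & context_words) for sense, cues in senses.items()}
            best = max(scores, key=scores.get)
            return best if scores[best] > 0 else default

        context = {"the", "senate", "passed", "bill"}
        print(disambiguate("Bill", context))  # legislation
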
  • Once the source input for the search is determined, the GBSS 110 uses the automated search module 114 to obtain a search result. The search result determination module 122 is then used to obtain auxiliary content to present. The GBSS 110 then forwards (e.g., communicates, sends, pushes, etc.) the auxiliary content to the presentation module 115 to cause the presentation module 115 to present the auxiliary content. The auxiliary content may be presented in a variety of manners, including via visual display, audio display, via a Braille printer, etc., and using different techniques, for example, overlays, animation, etc.
  • FIG. 2B is an example block diagram of further components of the Input Module of an example Gesture Based Search System. In some example systems, the input module 111 may be configured to include a variety of other modules and/or logic. For example, the input module 111 may be configured to include a gesture input detection and resolution module 121 as described with reference to FIG. 2A. The gesture input detection and resolution module 121 may be further configured to include a variety of modules and logic for handling a variety of input devices and systems. For example, gesture input detection and resolution module 121 may be configured to include an audio handling module 222 for handling gesture input by way of audio devices and/or a graphics handling module 224 for handling the association of gestures to graphics in content (such as an icon, image, movie, still, sequence of frames, etc.). In addition, in some example systems, the input module 111 may be configured to include a natural language processing module 226. Natural language processing (NLP) module 226 may be used, for example, to detect whether a gesture is meant to indicate a word, a phrase, a sentence, a paragraph, or some other portion of presented electronic content using techniques such as syntactic and/or semantic analysis of the content. In some example systems, the input module 111 may be configured to include a gesture identification and attribute processing module 228 for handling other aspects of gesture determination such as determining the particular type of gesture (e.g., a circle, oval, polygon, closed path, check mark, box, or the like) or whether a particular gesture is a "steering" gesture that is meant to correct, for example, an initial path indicated by a gesture; a "smudge" which may have its own interpretation such as extend the gesture "here;" the color of the gesture, for example, if the input device supports the equivalent of a colored "pen" (e.g., pens that allow a user to select blue, black, red, or green); the size of a gesture (e.g., whether the gesture draws a thick or thin line, whether the gesture is a small or large circle, and the like); the direction of the gesture (up, down, across, etc.); and/or other attributes of a gesture.
  • In some example systems, the input module 111 is configured to include specific device handlers 125 (e.g., drivers) for detecting and controlling input from the various types of input devices, for example devices 20 *. For example, specific device handlers 125 may include a mobile device driver, a browser "device" driver, a remote display "device" driver, a speaker device driver, a Braille printer device driver, and the like. The input module 111 may be configured to work with and/or dynamically add other and/or different device handlers.
  • Other modules and logic may also be configured to be used with the input module 111.
  • FIG. 2C is an example block diagram of further components of the Factor Determination Module of an example Gesture Based Search System. In some example systems, the factor determination module 113 may be configured to include a prior history determination module 232, a system attributes determination module 237, other user attributes determination module 238, a gesture attributes determination module 239, and/or current context determination module 231.
  • In some example systems, the prior history determination module 232 determines (e.g., finds, establishes, selects, realizes, resolves, establishes, etc.) prior histories associated with the user and is configured to include modules/logic to implement such. For example, the prior history determination module 232 may be configured to include a demographic history determination module 233 that is configured to determine demographics (such as age, gender, residence location, citizenship, languages spoken, or the like) associated with the user. The prior history determination module 232 may be configured to include a purchase history determination module 234 that is configured to determine a user's prior purchases. The purchase history may be available electronically, over the network, may be integrated from manual records, or some combination. In some systems, these purchases may be product and/or service purchases. The prior history determination module 232 may be configured to include a search history determination module 235 that is configured to determine a user's prior searches. Such records may be stored locally with the GBSS 110 or may be available over the network 30 or using a third party service, etc. The prior history determination module 232 also may be configured to include a navigation history determination module 236 that is configured to keep track of and/or determine how a user navigates through his or her computing system so that the GBSS 110 can determine aspects such as navigation preferences, commonly visited content (for example, commonly visited websites or bookmarked items), etc.
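  • The four history modules (233-236) might each be modeled as a small factor provider whose results are merged into one factor set, as in the following hypothetical Python sketch; the function and field names are assumptions for illustration.

        def demographic_factors(user):
            return {"age": user.get("age"), "locale": user.get("locale")}

        def purchase_factors(user):
            return {"recent_purchases": user.get("purchases", [])[-5:]}

        def search_factors(user):
            return {"recent_queries": user.get("queries", [])[-5:]}

        def navigation_factors(user):
            # Distinct visited sites, capped for brevity.
            return {"visited_sites": sorted(set(user.get("visited", [])))[:5]}

        def prior_history(user):
            """Merge the outputs of all history providers into one factor dict."""
            factors = {}
            for provider in (demographic_factors, purchase_factors,
                             search_factors, navigation_factors):
                factors.update(provider(user))
            return factors

        print(prior_history({"age": 34, "queries": ["obama biography"]}))
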
  • The factor determination module 113 may be configured to include a system attributes determination module 237 that is configured to determine aspects of the "system" that may influence or guide (e.g., inform) the determination of which menu items are appropriate for the portion of content indicated by the gestured input. These may include aspects of the GBSS 110, aspects of the system that is executing the GBSS 110 (e.g., the computing system 100), aspects of a system associated with the GBSS 110 (e.g., a third party system), network statistics, and/or the like.
  • The factor determination module 113 also may be configured to include other user attributes determination module 238 that is configured to determine other attributes associated with the user not covered by the prior history determination module 232. For example, a user's social connectivity data may be determined by module 238.
  • The factor determination module 113 also may be configured to include a gesture attributes determination module 239. The gesture attributes determination module 239 is configured to provide determinations of attributes of the gesture input, similar or different from those described relative to input module 111 and gesture attribute processing module 228 for determining to what content a gesture corresponds. Thus, for example, the gesture attributes determination module 239 may provide information and statistics regarding size, length, shape, color, and/or direction of a gesture.
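  • Attributes such as size, length, and direction might be derived from the raw stroke as in this sketch, where a gesture is a list of (x, y) points in screen coordinates (y grows downward); the attribute names are illustrative assumptions.

        import math

        def gesture_attributes(points):
            """Derive a bounding box, total path length, and coarse direction."""
            xs, ys = [p[0] for p in points], [p[1] for p in points]
            length = sum(math.dist(points[i], points[i + 1])
                         for i in range(len(points) - 1))
            dx = points[-1][0] - points[0][0]
            dy = points[-1][1] - points[0][1]
            direction = "across" if abs(dx) >= abs(dy) else ("down" if dy > 0 else "up")
            return {
                "bbox": (min(xs), min(ys), max(xs), max(ys)),  # overall size
                "length": round(length, 1),
                "direction": direction,
            }

        print(gesture_attributes([(0, 0), (30, 4), (60, 8)]))
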
  • The factor determination module 113 also may be configured to include a current context determination module 231. The current context determination module 231 is configured to provide determinations of attributes regarding what the user is viewing, the underlying content, context relative to other containing content (if known), and whether the gesture has selected a word or phrase that is located within certain areas of presented content (such as the title, abstract, a review, and so forth). Other modules and logic may also be configured to be used with the factor determination module 113.
  • FIG. 2D is an example block diagram of further components of the Source Input Determination Module of an example Gesture Based Search System. The source input determination module 112 determines what input to use for a search as described elsewhere. It may use a disambiguation module 123 when more than one source input is determined by the GBSS to potentially apply to the content of the indicated portion and any factors considered. The disambiguation module 123 may utilize syntactic and/or semantic aids, user selection, default values, and the like to assist in the determination of source input to the search.
  • In addition, in some example systems, the source input determination module 112 of the GBSS 110 may use a context menu to aid in source input selection. In such a case, the source input determination module 112 may include a context menu handling module 211 to process and handle menu presentation and input. The context menu handling module 211 may be configured to include a variety of other modules and/or logic. For example, the context menu handling module 211 may be configured to include an items determination module 212 for determining what menu items to present on a particular menu, an input handler 214 for providing an event loop to detect and handle user selection of a menu item, a viewer module 216 to determine what kind of "view" (as in a model/view/controller—MVC—model) to present (e.g., a pop-up, pull-down, dialog, interest wheel, and the like), and a presentation module 215 for determining when and what to present to the user and to determine auxiliary content to present that is associated with a selection. In some embodiments, the items determination module 212 may use a rules for actions and/or entities determination module 214 to determine what to present on a particular menu.
  • FIG. 2E is an example block diagram of further components of the Auxiliary Content Determination Module of an example Gesture Based Search System. The auxiliary content determination module 122 is provided by the automated search module 114, which is an interface to a search engine (or the search engine itself). In some example systems, the GBSS 110 may be configured to include an auxiliary content determination module 122 to determine (e.g., find, establish, select, realize, resolve, establish, etc.) auxiliary or supplemental content that matches a search based upon the determined source input to the search.
  • The auxiliary content determination module 122 may be further configured to include a variety of different modules to aid in this determination process. For example, the auxiliary content determination module 122 may be configured to include an advertisement determination module 202 to determine one or more advertisements that can be associated with the obtained search result. For example, as shown in FIG. 1C, these advertisements may be provided by a variety of sources including from local storage, over a network (e.g., wide area network such as the Internet, a local area network, a proprietary network, an Intranet, or the like), from a known source provider, from third party content (available, for example from cloud storage or from the provider's repositories), and the like. In some systems, a third party advertisement provider system is used that is configured to accept queries for advertisements ("ads"), such as keyword queries, and to output appropriate advertising content.
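  • Such a keyword query to a third-party advertisement provider might look like the following sketch; the endpoint URL and the JSON response shape are purely hypothetical, since the patent does not name a provider or a wire format.

        import json
        import urllib.parse
        import urllib.request

        AD_ENDPOINT = "https://ads.example.com/query"  # hypothetical provider

        def fetch_ads(keywords, max_ads=3):
            """Query the (hypothetical) ad provider with keywords; return parsed JSON."""
            params = urllib.parse.urlencode({"q": " ".join(keywords), "n": max_ads})
            with urllib.request.urlopen(f"{AD_ENDPOINT}?{params}") as resp:
                return json.load(resp)  # assumed: a JSON list of ad records

        # fetch_ads(["obama", "book"]) would return ad content to present as an
        # overlay, assuming the provider accepts keyword queries as described above.
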
  • In some example systems the auxiliary content determination module 122 is further configured to provide a supplemental content determination module 204. The supplemental content determination module 204 may be configured to determine other content that somehow relates to (e.g., associated with, supplements, improves upon, corresponds to, has the opposite meaning from, etc.) the search.
  • In some example systems the auxiliary content determination module 122 is further configured to provide an opportunity for commercialization determination module 208 to find a commercialization opportunity appropriate for the area indicated by the gesture. In some such systems, the commercialization opportunities may include events such as purchases and/or offers, and the opportunity for commercialization determination module 208 may be further configured to include an interactive entertainment determination module 201, which may be further configured to include a role playing game determination module 203, a computer assisted competition determination module 205, a bidding determination module 206, and a purchase and/or offer determination module 207 with logic to aid in determining a purchase and/or an offer as auxiliary content. Other modules and logic may also be configured to be used with the auxiliary content determination module 122.
  • FIG. 2F is an example block diagram of further components of the Presentation Module of an example Gesture Based Search System. In some example systems, the presentation module 115 may be configured to include a variety of other modules and/or logic. For example, the presentation module 115 may be configured to include an overlay presentation module 252 for determining how to present auxiliary content determined by the content to present determination module 116 on a presentation device, such as tablet 20 d. Overlay presentation module 252 may utilize knowledge of the presentation devices to decide how to integrate the auxiliary content as an "overlay" (e.g., covering up a portion or all of the underlying presented content). For example, when the GBSS 110 is run as a server application that serves web pages to a client side web browser, certain configurations using "html" commands or other tags may be used.
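  • When serving web pages, the overlay itself might be realized with ordinary HTML/CSS, as in this sketch that wraps auxiliary content in an absolutely positioned element covering part of the page; the markup is an assumption for illustration, not taken from the patent.

        from html import escape

        def overlay_html(auxiliary_text, top_px=40, left_px=40):
            """Produce an absolutely positioned element to lay over the page."""
            return (
                f'<div style="position:absolute; top:{top_px}px; left:{left_px}px; '
                f'background:#fff; border:1px solid #888; padding:8px; z-index:1000;">'
                f"{escape(auxiliary_text)}</div>"
            )

        print(overlay_html("Barack Obama - Wikipedia"))
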
  • Presentation module 115 also may be configured to include an animation module 254. In some example systems, the auxiliary content may be “moved in” from one side or portion of a presentation device in an animated manner. For example, the auxiliary content may be placed in a pane (e.g., a window, frame, pane, etc., as appropriate to the underlying operating system or application running on the presentation device) that is moved in from one side of the display onto the content previously shown (a form of navigation to the auxiliary content). Other animations can be similarly incorporated.
  • Presentation module 115 also may be configured to include an auxiliary display generation module 256 for generating a new graphic or audio construct to be presented in conjunction with the content already displayed on the presentation device. In some systems, the new content is presented in a new window, frame, pane, or other auxiliary display construct.
  • Presentation module 115 also may be configured to include specific device handlers 258, for example device drivers configured to communicate with mobile devices, remote displays, speakers, Braille printers, and/or the like as described elsewhere. Other or different presentation device handlers may be similarly incorporated.
  • Other modules and logic may also be configured to be used with the presentation module 115.
  • Although the techniques of a Gesture Based Search System (GBSS) are generally applicable to any type of gesture-based system, the phrase "gesture" is used generally to imply any type of physical pointing gesture or its audio equivalent. In addition, although the examples described herein often refer to online electronic content such as available over a network such as the Internet, the techniques described herein can also be used by a local area network system or in a system without a network. In addition, the concepts and techniques described are applicable to other input and presentation devices. Essentially, the concepts and techniques described are applicable to any environment that supports some type of gesture-based input.
  • Also, although certain terms are used primarily herein, other terms could be used interchangeably to yield equivalent embodiments and examples. In addition, terms may have alternate spellings which may or may not be explicitly mentioned, and all such variations of terms are intended to be included.
  • Example embodiments described herein provide applications, tools, data structures and other support to implement a Gesture Based Search System (GBSS) to be used for providing gesture based searching. Other embodiments of the described techniques may be used for other purposes. In the following description, numerous specific details are set forth, such as data formats and code sequences, etc., in order to provide a thorough understanding of the described techniques. The embodiments described also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the logic or code flow, different logic, or the like. Thus, the scope of the techniques and/or components/modules described are not limited by the particular order, selection, or decomposition of logic described with reference to any particular routine.
  • FIGS. 3-15 include example flow diagrams of various example logic that may be used to implement embodiments of a Gesture Based Search System (GBSS). The example logic will be described with respect to the example components of example embodiments of a GBSS as described above with respect to FIGS. 1A-2F. However, it is to be understood that the flows and logic may be executed in a number of other environments, systems, and contexts, and/or in modified versions of those described. In addition, various logic blocks (e.g., operations, events, activities, or the like) may be illustrated in a “box-within-a-box” manner. Such illustrations may indicate that the logic in an internal box may comprise an optional example embodiment of the logic illustrated in one or more (containing) external boxes. However, it is to be understood that internal box logic may be viewed as independent logic separate from any associated external boxes and may be performed in other sequences or concurrently.
  • FIG. 3 is an example flow diagram of example logic for providing a gesture based search for auxiliary content. Operational flow 300 includes several operations. In operation 302, the logic performs receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system. This logic may be performed, for example, by the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B by receiving (e.g., obtaining, getting, extracting, and so forth), from an input device capable of providing gesture input (e.g., devices 20 *), an indication of a user inputted gesture that corresponds to an indicated portion (e.g., indicated portion 25) of electronic content presented via a presentation device (e.g., 20 *) associated with the computing system 100. One or more of the modules provided by gesture input detection and resolution module 121, including the audio handling module 222, graphics handling module 224, natural language processing module 226, and/or gesture identification and attribute processing module 228 may be used to assist in operation 302. As described in detail elsewhere, the indicated portion may be formed from contiguous parts or composed of separate non-contiguous parts, for example, a title with a disconnected sentence. In addition, the indicated portion may represent the entire body of electronic content presented to the user or a part. Also as described elsewhere, the gestural input may be of different forms, including, for example, a circle, an oval, a closed path, a polygon, and the like. The gesture may be from a pointing device, for example, a mouse, laser pointer, a body part, and the like, or from a source of auditory input.
  • In operation 304, the logic performs determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search. This logic may be performed, for example, by the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. As described elsewhere, the source input determination module 112 may use factor determination module 113 to determine a set of factors (e.g., the context of the gesture, the user, or of the presented content, prior history associated with the user or the system, attributes of the gestures, and the like) to use, in addition to determining what content has been indicated by the gesture, in order to determine an indication (e.g., a reference to, what, etc.) of source input to use for the search. The content contained within the indicated portion of the presented electronic content may be anything, for example, a word, phrase, utterance, video, image, or the like.
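  • As one hypothetical sketch of such an inference (the scoring scheme and names are illustrative, not the patented logic), candidate terms drawn from the gestured content may be scored by term frequency plus per-factor boosts, with the best-scoring term selected as source input:

    from collections import Counter
    from typing import Dict, List, Optional

    def infer_source_input(indicated_words: List[str],
                           factor_scores: Dict[str, Dict[str, float]]) -> Optional[str]:
        """Score each distinct word in the gestured content by its
        frequency plus per-factor boosts (e.g., from prior search
        history) and return the best-scoring candidate."""
        counts = Counter(w.lower() for w in indicated_words)
        best_word, best_score = None, float("-inf")
        for word, freq in counts.items():
            score = float(freq)
            for per_word in factor_scores.values():
                score += per_word.get(word, 0.0)
            if score > best_score:
                best_word, best_score = word, score
        return best_word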
  • In operation 306, the logic performs automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content. This logic may be performed, for example, by the automated search module 114 of the GBSS 110 as described with reference to FIG. 2A. As described elsewhere, the automatically initiating may include, for example, invoking (e.g., executing, calling, sending, or the like) a search engine (e.g., an off-the-shelf search tool, a third party auxiliary content supply tool such as an advertising server, an application residing elsewhere, and the like) with the determined source input to obtain search result content. The search result content may be anything, including, for example, any type of auxiliary, supplemental, or other content (e.g., a web page, an electronic document, code, speech, an opportunity for commercialization, an advertisement, or the like).
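  • A minimal sketch of such automatic initiation, assuming search engines are modeled as interchangeable callables (the SearchEngine alias and the stand-in engine below are illustrative only):

    from typing import Callable, List

    SearchEngine = Callable[[str], List[str]]  # query -> result content

    def initiate_search(source_input: str,
                        engines: List[SearchEngine]) -> List[str]:
        """Invoke each configured engine with the inferred source input
        and gather whatever result content comes back."""
        results: List[str] = []
        for engine in engines:
            results.extend(engine(source_input))
        return results

    # Usage with a stand-in engine; a deployment might plug in an
    # off-the-shelf keyword engine or an advertising server instead.
    fake_engine = lambda q: ["result page about " + q]
    print(initiate_search("gesture search", [fake_engine]))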
  • In operation 308, the logic performs presenting the search result content in conjunction with the corresponding presented electronic content. This logic may be performed, for example, by the presentation module 115 of the GBSS 110 described with reference to FIGS. 2A and 2F to present (e.g., output, display, render, draw, show, illustrate, etc.) the search result (e.g., an advertisement, web page, supplemental content, document, instructions, image, and the like) in conjunction with the presented electronic content (e.g., displaying the auxiliary content web page as shown in FIG. 1B or the auxiliary content advertisement as shown in FIG. 1C as an overlay on the web page that is presented corresponding to the gestured input).
  • FIG. 4 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 402 whose logic specifies the indicated source input comprises at least one of a word, a phrase, an utterance, an image, a video, a pattern, or an audio signal. The logic of operation 402 may be performed, for example, by any of the modules of input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B. For example, one or more of the modules provided by gesture input detection and resolution module 121, including the audio handling module 222, graphics handling module 224, natural language processing module 226, and/or gesture identification and attribute processing module 228 may be used to assist in operation 402 to determine what content (e.g., word, phrase, image, video, pattern, audio signal, utterance, etc.) is contained within the indicated portion.
  • FIG. 5 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 502 whose logic specifies the content contained within the indicated portion of electronic content is a portion less than the entire presented electronic content. The logic of operation 502 may be performed, for example, by the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B. The content determined to be contained within (e.g., represented by, indicated, etc.) the gestured portion may include, for example, only a portion of the presented content, such as a title and abstract of an electronically presented document.
  • FIG. 6 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 602 whose logic specifies the content contained within the indicated portion of electronic content is the entire presented electronic content. The logic of operation 602 may be performed, for example, by the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B. The content determined to be contained within (e.g., represented by, indicated, etc.) the gestured portion may include, for example, the entire presented content, such as a whole document.
  • FIG. 7 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 702 whose logic specifies the content contained within the indicated portion of electronic content includes an audio portion. The logic of operation 702 may be performed, for example, by an audio handling module 222 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B. For example, gesture input detection and resolution module 121 may be configured to include an audio handling module 222 for handling gesture input by way of audio devices such as microphone 20 b. The audio portion may be, for example, a spoken title of a presented document.
  • In some embodiments, operation 304 may further comprise an operation 703 whose logic specifies the content contained within the indicated portion of electronic content includes at least a word or a phrase. The logic of operation 703 may be performed, for example, by the natural language processing module 226 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B. NLP module 226 may be used, for example, to detect whether a gesture is meant to indicate a word, a phrase, a sentence, a paragraph, or some other portion of presented electronic content using techniques such as syntactic and/or semantic analysis of the content. The word or phrase may be any word or phrase located in or indicated by the electronically presented content.
  • In the same or different embodiments, operation 304 may include an operation 704 whose logic specifies the content contained within the indicated portion of electronic content includes at least a graphical object, image, and/or icon. The logic of operation 704 may be performed, for example, by the graphics handling module 224 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B. For example, the graphics handling module 224 may be configured to handle the association of gestures to graphics located in or indicated by the presented content (such as an icon, image, movie, still, sequence of frames, etc.).
  • In the same or different embodiments, operation 304 may include an operation 705 whose logic specifies the content contained within the indicated portion of electronic content includes an utterance. The logic of operation 705 may be performed, for example, by an audio handling module 222 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 described with reference to FIGS. 2A and 2B. For example, gesture input detection and resolution module 121 may be configured to include an audio handling module 222 for handling gesture input by way of audio devices such as microphone 20 b. The utterance may be, for example, a spoken word of a presented document, or a command, or a sound.
  • In the same or different embodiments, operation 304 may include an operation 706 whose logic specifies the content contained within the indicated portion of electronic content comprises non-contiguous parts or contiguous parts. The logic of operation 706 may be performed, for example, by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B. For example, the contiguous parts may represent a continuous area of the presented content, such as a sentence, a portion of a paragraph, a sequence of images, or the like. Non-contiguous parts may include separate portions of the presented content that together comprise the indicated portion, such as a title and an abstract, a paragraph and the name of an author, a disconnected image and a spoken sentence, or the like.
  • In the same or different embodiments, operation 304 may include an operation 707 whose logic specifies the content contained within the indicated portion of electronic content is determined using syntactic and/or semantic rules. The logic of operation 707 may be performed, for example, by the natural language processing module 226 provided by the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B. NLP module 226 may be used, for example, to detect whether a gesture is meant to indicate a word, a phrase, a sentence, a paragraph, or some other portion of presented electronic content using techniques such as syntactic and/or semantic analysis of the content. The word or phrase may be any word or phrase located in or indicated by the electronically presented content.
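  • As a simple stand-in for such syntactic analysis (illustrative only; a real NLP module would do considerably more), a gestured character range might be snapped outward to the enclosing sentence:

    import re

    def snap_to_sentence(text: str, start: int, end: int) -> str:
        """Expand a gestured character range [start, end) to the
        enclosing sentence; boundaries are approximated by ., !, or ?
        followed by whitespace."""
        boundaries = ([0]
                      + [m.end() for m in re.finditer(r"[.!?]\s+", text)]
                      + [len(text)])
        s = max(b for b in boundaries if b <= start)
        e = min(b for b in boundaries if b >= end)
        return text[s:e].strip()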
  • FIG. 8A is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 802 whose logic specifies the set of factors includes context of other text, audio, graphics, and/or objects within the presented electronic content. The logic of operation 802 may be performed, for example, by the current context determination module 231 provided by the factor determination module 113 of the GBSS 110 described with reference to FIGS. 2A and 2C to determine (e.g., retrieve, designate, resolve, etc.) context related information from the currently presented content, including other text, audio, graphics, and/or objects.
  • In some embodiments, operation 802 may further comprise an operation 803 whose logic specifies the set of factors includes an attribute of the gesture. The logic of operation 803 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture itself (e.g., color, size, direction, shape, and so forth).
  • In some embodiments, operation 803 may further include operation 804 whose logic specifies the attribute of the gesture is the size of the gesture. The logic of operation 804 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as size. Size of the gesture may include, for example, width and/or length, and other measurements appropriate to the input device 20*.
  • In the same or different embodiments operation 803 may include an operation 805 whose logic specifies the attribute of the gesture is a direction of the gesture. The logic of operation 805 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as direction. Direction of the gesture may include, for example, up or down, east or west, and other measurements or commands appropriate to the input device 20*.
  • In the same or different embodiments operation 803 may include an operation 806 whose logic specifies the attribute of the gesture is a color. The logic of operation 806 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as color. Color of the gesture may include, for example, a pen and/or ink color as well as other measurements appropriate to the input device 20*.
  • In the same or different embodiments operation 803 may include an operation 807 whose logic specifies the attribute of the gesture is a measure of steering of the gesture. The logic of operation 807 may be performed, for example by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as steering. Steering of the gesture may occur when, for example, an initial gesture is indicated (e.g., on a mobile device) and the user desires to correct or nudge it in a certain direction.
  • In some embodiments operation 807 may further include an operation 808 whose logic specifies the steering of the gesture is accomplished by smudging the input device. The logic of operation 808 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as smudging. Smudging of the gesture may occur when, for example, an initial gesture is indicated (e.g., on a mobile device) and the user desires to correct or nudge it in a certain direction by, for example, "smudging" the gesture using a finger. This type of action may be particularly useful on a touch screen input device.
  • In the same or different embodiments operation 807 may include an operation 809 whose logic specifies the steering of the gesture is performed by a handheld gaming accessory. The logic of operation 809 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine context related information from the attributes of the gesture such as steering. In this case, the steering is performed by a handheld gaming accessory such as a particular type of input device 20*. For example, the gaming accessory may include a joystick, a handheld controller, or the like.
  • In the same or different embodiments operation 807 may include an operation 810 whose logic specifies the steering of the gesture is a measure of adjustment of the gesture. The logic of operation 810 may be performed, for example, by the gesture attributes determination module 239 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C. Once a gesture has been made, it may be adjusted (e.g., modified, extended, smeared, smudged, redone) by any mechanism, including, for example, adjusting the gesture itself, or, for example, by modifying what the gesture indicates, for example, using a context menu, selecting a portion of the indicated gesture, and so forth.
  • FIG. 8B is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 811 whose logic specifies the set of factors are associated with weights that are taken into consideration in determining the indication of source input. The logic of operation 811 may be performed, for example, by the factor determination module 113 of the GBSS 110 described with reference to FIGS. 2A and 2C. For example, in some embodiments, the attributes of the gesture may be more important, hence weighted more heavily, than other attributes, such as the prior navigation history of the user. Any form of weighting, whether explicit or implicit may be used.
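  • A minimal sketch of such weighting, assuming explicit numeric weights per factor (the factor names and values are illustrative):

    from typing import Dict

    def combine_factors(candidate_scores: Dict[str, float],
                        weights: Dict[str, float]) -> float:
        """Weighted sum of factor scores for one candidate source input;
        heavier weights let a factor (e.g., gesture attributes) dominate
        lighter ones (e.g., navigation history)."""
        return sum(weights.get(f, 1.0) * s for f, s in candidate_scores.items())

    # Gesture attributes weighted more heavily than navigation history.
    weights = {"gesture_attributes": 3.0, "navigation_history": 0.5}
    print(combine_factors({"gesture_attributes": 0.9,
                           "navigation_history": 0.4}, weights))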
  • In some embodiments, operation 304 may further include an operation 812 whose logic specifies the set of factors includes presentation device capabilities. The logic of operation 812 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C. Presentation device capabilities may include, for example, whether the device is connected to speakers or a network such as the Internet, the size of the device, whether the device supports color, is a touch screen, and so forth.
  • In some embodiments, operation 812 may further include operation 813 whose logic specifies the presentation device capabilities includes the size of the presentation device. The logic of operation 813 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C. Presentation device capabilities may include, for example, whether the device is connected to speakers or a network such as the Internet, the size of the device, whether the device supports color, is a touch screen, and so forth.
  • In the same or different embodiments operation 812 may include an operation 814 whose logic specifies the presentation device capabilities includes whether text or audio is being presented. The logic of operation 814 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C. In addition to determining whether text or audio is being presented, presentation device capabilities may include, for example, whether the device is connected to speakers or a network such as the Internet, the size of the device, whether the device supports color, is a touch screen, and so forth.
  • In the same or different embodiments operation 304 may include an operation 815 whose logic specifies the set of factors includes prior device communication history. The logic of operation 815 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C. Prior device communication history may include aspects such as how often the computing system running the GBSS 110 has been connected to the Internet, whether multiple client devices are connected to it some of the time or at all times, and how often the computing system is connected with various remote search capabilities.
  • In the same or different embodiments operation 304 may include an operation 816 whose logic specifies the set of factors includes time of day. The logic of operation 816 may be performed, for example, by the system attributes determination module 237 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine the time of day.
  • FIG. 8C is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 817 whose logic specifies the set of factors includes prior history associated with the user. The logic of operation 817 may be performed, for example, by prior history determination module 232 provided by the factor determination module 113 of the GBSS 110 described with reference to FIGS. 2A and 2C to determine prior history that may be associated with (e.g., coincident with, related to, appropriate to, etc.) the user, for example, prior purchase, navigation, or search history or demographic information.
  • In some embodiments, operation 817 may further include an operation 818 whose logic specifies the prior history associated with the user includes prior search history. The logic of operation 818 may be performed, for example, by the search history determination module 235 provided by the prior history determination module 232 of the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of properties based upon the prior search history associated with the user. Factors such as what content the user has reviewed and looked for may be considered. Other factors may be considered as well.
  • In the same or different embodiments, operation 817 may include operation 819 whose logic specifies the prior history associated with the user includes prior navigation history. The logic of operation 819 may be performed, for example, by the navigation history determination module 236 provided by the prior history determination module 232 of the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of criteria based upon the prior navigation history associated with the user. Factors such as what content the user has reviewed, for how long, and where the user has navigated to from that point may be considered. Other factors may be considered as well.
  • In the same or different embodiments, operation 817 may include operation 820 whose logic specifies the prior history associated with the user includes prior purchase history. The logic of operation 820 may be performed, for example, by the prior purchase history determination module 234 of the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of factors based upon the prior purchase history associated with the user. Factors such as what products and/or services the user has bought or considered buying (determined, for example, by what the user has viewed) may be considered. Other factors may be considered as well.
  • In the same or different embodiments, operation 817 may include operation 821 whose logic specifies the prior history associated with the user includes demographic information associated with the user. The logic of operation 821 may be performed, for example, by the demographic history determination module 233 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of criteria based upon the demographic history associated with the user. Factors such as the user's age, gender, location, citizenship, and religious preferences (if specified) may be considered. Other factors may be considered as well.
  • In some embodiments, operation 821 may further include operation 822 whose logic specifies the demographic information includes at least one of age, gender, and/or a location associated with the user and/or contact information associated with the user. The logic of operation 822 may be performed, for example, by the demographic history determination module 233 provided by the factor determination module 113 of the GBSS 110 as described with reference to FIGS. 2A and 2C to determine a set of criteria based upon the demographic history associated with the user including age, gender, or a location such as the user's residence information, country of citizenship, native language country, and the like.
  • FIG. 8D is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 824 whose logic specifies the set of factors includes a received selection from a context menu. The logic of operation 824 may be performed, for example, by input handler 214 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. As explained elsewhere, a context menu may be used, for example, to adjust or modify a gesture, to modify indicated content contained within the portion indicated by the gesture, to add information for a source input string such as additional keywords, or the like. Anything that can be indicated by a menu could be used as a factor to influence the source input. A context menu includes, for example, any type of menu that can be presented and relates to some context. For example, a context menu may include pop-up menus, dialog boxes, pull-down menus, interest wheels, or any other shape of menu, rectangular or otherwise.
  • In some embodiments, operation 824 may further include an operation 825 whose logic specifies the context menu includes a plurality of actions and/or entities derived from a set of rules used to convert one or more nouns that relate to the indicated portion into corresponding verbs. The logic of operation 825 may be performed, for example, by the items determination module 212 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. The set of rules may include heuristics for developing verbs (actions) from nouns (entities) encompassed by the content by the gestured input, using for example, verbification, frequency calculations, or other techniques.
  • In some embodiments, operation 825 may further include an operation 826 whose logic specifies the rules used to convert one or more nouns that relate to the indicated portion into corresponding verbs determine at least one of a set of most frequently occurring words in proximity to the indicated portion, a set of frequently occurring words in the electronic content, or a set of common verbs used with one or more entities encompassed by the indicated portion, and convert the words and/or verbs into actions and/or entities presented on the context menu. The logic of operation 826 may be performed, for example, by the items determination module 212 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. For example, the "n" most frequently occurring words in the presented electronic content may be counted and converted into verbs (actions), the "n" words occurring in proximity to the indicated portion (portion 25) of the presented electronic content may be used and/or converted into verbs (actions), or the most common words relative to some designated body of content may be used and/or converted into verbs (actions) and presented on the menu.
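  • The frequency heuristic might be sketched as follows (the action templates and the absence of stop-word filtering are simplifications for illustration only):

    import re
    from collections import Counter
    from typing import List

    # Assumed noun-to-verb templates standing in for "verbification" rules.
    ACTION_TEMPLATES = ["buy {0}", "find a better {0}", "share {0}",
                        "get info about {0}"]

    def derive_menu_items(content: str, indicated: str, n: int = 3) -> List[str]:
        """Build context-menu actions from the indicated entity plus the
        n most frequently occurring words in the presented content (a
        real system would also filter stop words and weight proximity
        to the indicated portion)."""
        words = re.findall(r"[a-z]+", content.lower())
        common = [w for w, _ in Counter(words).most_common(n)]
        nouns = [indicated.lower()] + common
        return [t.format(noun) for noun in nouns for t in ACTION_TEMPLATES]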
  • In the same or different embodiments, operation 825 may include operation 827 whose logic specifies the context menu includes an action to find a better <entity>, wherein <entity> is an entity encompassed by the indicated portion of the presented electronic content. The logic of operation 827 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. Rules for determining what is “better” may be context dependent such as, for example, brighter color, better quality photograph, more often purchased, or the like. Different heuristics may be programmed into the logic to thus derive a better entity.
  • In the same or different embodiments, operation 825 may include operation 828 whose logic specifies the context menu includes an action to share a better <entity>, wherein <entity> is an entity encompassed by the indicated portion of the presented electronic content. The logic of operation 828 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. Sharing (e.g., forwarding, emailing, posting, messaging, communicating, or the like) may be also enhanced by context determined by the indicated portion (portion 25) or the set of criteria (e.g., prior search or purchase history, type of gesture, or the like).
  • In the same or different embodiments, operation 825 may include operation 829 whose logic specifies the context menu includes an action to obtain information about an <entity>, wherein <entity> is an entity encompassed by the indicated portion of the presented electronic content. The logic of operation 829 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. Obtaining information may suggest actions like “find more information,” “get details,” “find source,” “define,” or the like.
  • FIG. 8E is an example flow diagram of example logic illustrating various example embodiments of block 825 of FIG. 8D. In some embodiments, the logic of operation 825 for the context menu includes a plurality of actions and/or entities derived from a set of rules used to convert one or more nouns that relate to the indicated portion into corresponding verbs may include an operation 830 whose logic specifies the context menu includes actions that specify some form of buying or shopping, sharing, and/or exploring or obtaining information. The logic of operation 830 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. For example, actions for "buy <entity>," "obtain more info on <entity>," or the like may be derived by this logic.
  • In the same or different embodiments, operation 825 may include an operation 831 whose logic specifies the context menu includes one or more comparative actions. The logic of operation 831 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. For example, comparative actions may include verb phrases such as “find me a better,” “find me a cheaper,” “ship me sooner,” or the like.
  • In some embodiments, operation 831 may further include an operation 832 whose logic specifies the comparative actions of the context menu include at least one of an action to obtain an entity sooner, an action to purchase an entity sooner, or an action to find a better deal. The logic of operation 832 may be performed, for example, by the items determination module 212 of the context menu handling module 211 of the source input determination module 112 of the GBSS 110 described with reference to FIGS. 2A and 2D. For example, obtain an entity sooner may include shipping sooner, subscribing faster, finishing quicker, or the like.
  • In the same or different embodiments, operation 825 may include an operation 833 whose logic specifies the context menu is presented as at least one of a pop-up menu, an interest wheel, a rectangular shaped user interface element, or a non-rectangular shaped user interface element. The logic of operation 833 may be performed, for example, by the viewer module 216 provided by the context menu handling module 211 of the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D. Pop-up menus may be implemented, for example, using overlay windows, dialog boxes, and the like, and appear visible with a standard user interface typically from the point of a "cursor," "pointer," or other reference associated with the gesture. Drop-down context menus may contain, for example, any number of actions and/or entities that are determined to be menu items. They appear visible with a standard user interface typically from the point of a "cursor," "pointer," or other reference associated with the gesture. In one embodiment, an interest wheel has menu items arranged in a pie shape. Rectangular menus may include pop-ups and pull-downs, although they may also be implemented in a non-rectangular fashion. Non-rectangular menus may include pop-ups, pull-downs, and interest wheels. They may also include other viewer controls.
  • FIG. 9 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 902 whose logic specifies disambiguating possible source input by presenting one or more indicators of possible source input and receiving a selected indicator to one of the presented one or more indicators of possible source input to determine the indication of source input for the search. The logic of operation 902 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D. Presenting the one or more indicators of possible source input allows a user 10* to select which source input to use for a search, especially in the case where there is some sort of ambiguity.
  • In some embodiments, operation 304 may further include an operation 903 whose logic specifies disambiguating possible source input by determining a default source input to be used for the search. The logic of operation 903 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D. The GBSS 110 may determine a default source input for a search (e.g., the most prominent entity in the indicated portion of the presented content) in the case of an ambiguous finding of source input.
  • In some embodiments, operation 903 may further include an operation 904 whose logic specifies the default source input may be overridden by the user. The logic of operation 904 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D. The GBSS 110 allows the user 10* to override a default source input presented in a variety of ways, including by specifying that no default content is to be presented. Overriding can take place as a configuration parameter of the system, upon the presentation of a set of possible selections of source input, or at other times.
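  • A hypothetical sketch combining these behaviors, in which an explicit user selection wins and otherwise a default is chosen (here, crudely, the longest candidate stands in for "most prominent"):

    from typing import List, Optional

    def disambiguate(candidates: List[str],
                     user_choice: Optional[str] = None) -> Optional[str]:
        """Honor an explicit user selection when one is given; otherwise
        fall back to a default (the longest candidate here stands in for
        "most prominent"), which the user remains free to override."""
        if user_choice is not None and user_choice in candidates:
            return user_choice
        return max(candidates, key=len) if candidates else None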
  • In the same or different embodiments, operation 304 may include an operation 905 whose logic specifies disambiguating possible source input utilizing syntactic and/or semantic rules to aid in determining the source input for the search. The logic of operation 905 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D. As described elsewhere, NLP-based mechanisms may be employed to determine what a user means by a gesture and hence what source input may be meaningful.
  • In the same or different embodiments, operation 304 may include an operation 906 whose logic specifies the search result content comprises content that corresponds to a plurality of source inputs. The logic of operation 906 may be performed, for example, by the disambiguation module 123 provided by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D. Presenting multiple source inputs allows a user 10* to select which source input to conduct the search upon.
  • FIG. 10 is an example flow diagram of example logic illustrating various example embodiments of block 304 of FIG. 3. In some embodiments, the logic of operation 304 for determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search may include an operation 1002 whose logic specifies wherein the indicated source input is associated with a persistent state. The logic of operation 1002 may be performed, for example, by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D by generating a representation of the source input in memory (e.g., memory 101 in FIG. 2A), including a file, a link, or the like.
  • In some embodiments, operation 1002 may further include an operation 1003 whose logic specifies the persistent state is a uniform resource identifier. The logic of operation 1003 may be performed, for example, by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D by generating a representation of the source input as a uniform resource identifier (URI, or uniform resource locator, URL) that represents the source input.
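  • For illustration only, the source input might be persisted as a URI as follows (the host and path are hypothetical placeholders):

    from urllib.parse import urlencode

    def source_input_to_uri(source_input: str,
                            host: str = "search.example.com") -> str:
        """Persist the inferred source input as a URI so the search can
        be re-run or shared later; host and path are placeholders."""
        return "https://" + host + "/search?" + urlencode({"q": source_input})

    # e.g. https://search.example.com/search?q=gesture+based+search
    print(source_input_to_uri("gesture based search"))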
  • In the same or different embodiments, operation 304 may include an operation 1004 whose logic specifies the indicated source input is associated with a purchase. The logic of operation 1004 may be performed, for example, by the source input determination module 112 of the GBSS 110 as described with reference to FIGS. 2A and 2D to associate (e.g., link to or with, indicate, etc.) the source input with a user's purchase. The purchase may be obtainable from the prior purchase information identifiable by the purchase history determination module 234 of the prior history determination module 232 of the factor determination module 113 of the GBSS 110.
  • FIG. 11A is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3. In some embodiments, the logic of operation 306 for automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content may include an operation 1102 whose logic specifies wherein the designated body of electronic content is any page or object accessible over a network. The logic of operation 1102 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A. The designated body of electronic content may include, for example, a corpus of documents, a set of images, a movie, a group of sounds, or the like. The indicated source input is used to search this designated body of content to obtain (e.g., derive, get, receive, pull down, or the like) search result contents. The search itself may be performed by any appropriate search engine as described elsewhere including a remote tool connected via the network to the GBSS 110.
  • In some embodiments, operation 1102 may further include an operation 1103 whose logic specifies the network is at least one of the Internet, a proprietary network, a wide area network, or a local area network. The logic of operation 1103 may be performed, for example, by automated search module 114 of the GBSS 110 described with reference to FIG. 2A.
  • In the same or different embodiments, operation 306 may include an operation 1104 whose logic specifies the designated body of electronic content comprises at least one of web pages, computer code, electronic documents, and/or electronic versions of paper documents. The logic of operation 1104 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A. The designated body of electronic content may include, for example, web pages, computer code, electronic documents, and/or electronic versions of paper documents, or other types of content as described.
  • In the same or different embodiments, operation 306 may include an operation 1105 whose logic specifies the automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content further comprising automatically initiating a search of the designated body of electronic content using an off-the-shelf search engine. The logic of operation 1105 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A. The search may be performed by any appropriate search engine, for example, a remote tool connected via the network to the GBSS 110, such as an off-the-shelf keyword search engine like Bing, Google, or Yahoo, or an advertising system.
  • In the same or different embodiments, operation 306 may include an operation 1106 whose logic specifies the automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content further comprising automatically initiating a search of the designated body of electronic content using a keyword search engine. The logic of operation 1106 may be performed, for example, by the automated search module 114 of the GBSS 110 described with reference to FIG. 2A. The search may be performed by a keyword search engine, for example, a remote tool connected via the network to the GBSS 110 such as a keyword search engine like Bing, Google, or Yahoo, or an advertising system.
  • FIG. 11B is an example flow diagram of example logic illustrating various example embodiments of block 306 of FIG. 3. In some embodiments, the logic of operation 306 for automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content may include an operation 1107 whose logic specifies wherein the search result content includes an opportunity for commercialization. The logic of operation 1107 may be performed, for example, by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The auxiliary content determination module 122 may be used to enhance, modify, substitute for, translate, or the like, output received from the search engine to determine auxiliary content. In this case, the auxiliary content includes an indication of something that can be used for commercialization, such as an advertisement, a web site that sells products, a bidding opportunity, a certificate, products, services, or the like.
  • In some embodiments, operation 1107 may further include an operation 1108 whose logic specifies that the opportunity for commercialization is an advertisement. The logic of operation 1108 may be performed, for example, by the advertisement determination module 202 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The advertisement may be a direct or indirect indication of an advertisement that is somehow supplemental to the content indicated by the indicated portion of the gesture, as referred to by the source input.
  • In some embodiments, operation 1108 may further include an operation 1109 whose logic specifies that the advertisement is provided by at least one of: an entity separate from the entity that provided the presented electronic content; a competitor entity; and/or an entity associated with the presented electronic content. The logic of operation 1109 may be performed, for example, by the advertisement determination module 202 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The entity separate from the entity that provided the presented electronic content may be, for example, a third party or a competitor entity whose content is accessible through third party auxiliary content 43. The entity associated with the presented electronic content may be, for example, GBSS 110 and the advertisement from the auxiliary content 40. Advertisements may be supplied directly or indirectly as indicators to advertisements that can be served by server computing systems.
  • In the same or different embodiments, operation 1108 may include an operation 1110 whose logic specifies that the advertisement is selected from a plurality of advertisements. The logic of operation 1110 may be performed, for example, by the advertisement determination module 202 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. When a third party server, such as a third party advertising system, is used to supply the auxiliary content, a plurality of advertisements may be delivered (e.g., forwarded, sent, communicated, etc.) to the GBSS 110 for selection before being presented by the GBSS 110.
  • In the same or different embodiments, operation 1108 may include an operation 1111 whose logic specifies that the advertisement is interactive entertainment. The logic of operation 1111 may be performed, for example, by the interactive entertainment determination module 201 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The interactive entertainment may include, for example, a computer game, an on-line quiz show, a lottery, a movie to watch, and so forth.
  • In the same or different embodiments, operation 1108 may include an operation 1112 whose logic specifies that the advertisement is a role-playing game. The logic of operation 1112 may be performed, for example, by the role playing game determination module 203 provided by the interactive entertainment determination module 201 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The role playing game may be a massively multiplayer online role-playing game (MMORPG) or a standalone, single or multi-player role playing game, or some other form of online, manual, or other role playing game.
  • In the same or different embodiments, operation 1108 may include an operation 1113 whose logic specifies that the advertisement is at least one of a computer-assisted competition and/or a bidding opportunity. The logic of operation 1113 may be performed, for example, by the bidding determination module 206 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The bidding opportunity, for example, a competition or gambling event, etc., may be computer based, computer-assisted, and/or manual.
  • FIG. 11C is an example flow diagram of example logic illustrating various example embodiments of block 1108 of FIG. 11B. In some embodiments, the logic of operation 1108 wherein the opportunity for commercialization is an advertisement includes an operation 1114 whose logic specifies wherein the advertisement includes a purchase and/or an offer. The logic of operation 1114 may be performed, for example, by the purchase and/or offer determination module 207 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The purchase or offer may take any form, for example, a book advertisement, or a web page, and may be for products and/or services.
  • In some embodiments, operation 1114 may further include an operation 1115 whose logic specifies that the purchase and/or an offer is for at least one of: information, an item for sale, a service for offer and/or a service for sale, a prior purchase of the user, and/or a current purchase. The logic of operation 1115 may be performed, for example, by the purchase and/or offer determination module 207 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. Any type of information, item, or service (online or offline, machine generated or human generated) can be offered and/or purchased in this manner. If human generated, the advertisement may refer to a computer representation of the human-generated service, for example, a contract or a calendar entry, or the like.
  • In some embodiments, operation 1114 may further include an operation 1116 whose logic specifies that the purchase and/or an offer is a purchase of an entity that is part of a social network of the user. The logic of operation 1116 may be performed, for example, by the purchase and/or offer determination module 207 provided by the opportunity for commercialization determination module 208 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The purchase may be related to (e.g., associated with, directed to, mentioned by, a contact directly or indirectly related to, etc.) someone that belongs to a social network associated with the user, for example through the one or more networks 30.
  • FIG. 12 is an example flow diagram of example logic illustrating various example embodiments of block 308 of FIG. 3. In some embodiments, the logic of operation 308 for presenting the search result content in conjunction with the corresponding presented electronic content may include an operation 1202 whose logic specifies wherein the search result includes supplemental information to the presented electronic content. The logic of operation 1202 may be performed, for example, by the supplemental content determination module 204 provided by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E. The supplemental information may be of any nature, for example, an additional document or portion thereof, map, web page, advertisement, and so forth.
  • In the same or different embodiments, operation 308 may include an operation 1203 whose logic specifies that the search result is at least one of a web page, an electronic document, and/or an electronic version of a paper document. The logic of operation 1203 may be performed, for example, by the auxiliary content determination module 122 of the automated search module 114 of the GBSS 110 described with reference to FIGS. 2A and 2E.
  • In the same or different embodiments, operation 308 may include an operation 1204 whose logic specifies that the search result content is presented as an overlay on top of the presented electronic content. The logic of operation 1204 may be performed, for example, by the overlay presentation module 252 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F. The overlay may be in any form including a pane, window, menu, dialog, frame, etc. and may partially or totally obscure the underlying presented content.
  • In some embodiments, operation 1204 may further include an operation 1205 whose logic specifies that the overlay is made visible using animation techniques. The logic of operation 1205 may be performed, for example, by the animation module 254 in conjunction with the overlay presentation module 252 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F. The animation techniques may include leaving trailing footprint information for the user to see the animation, and may be of varying speeds and involve different shapes, sounds, or the like.
  • In the same or different embodiments, operation 1204 may further include an operation 1206 whose logic specifies that the overlay is made visible by causing a pane to appear as though the pane is caused to slide from one side of the presentation device onto the presented electronic content. The logic of operation 1206 may be performed, for example, by the animation module 254 in conjunction with the overlay presentation module 252 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F. The pane may be a window, frame, popup, dialog box, or any other presentation construct that may be made gradually more visible as it is moved into the visible presentation area. Once there, the pane may obscure, not obscure, or partially obscure the other presented content.
  • In the same or different embodiments, operation 308 may include an operation 1207 whose logic specifies that the search result content is presented in an auxiliary window, pane, frame, or other auxiliary display construct. The logic of operation 1207 may be performed, for example, by the auxiliary display generation module 256 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F. Once generated, the auxiliary display construct may be presented in an animated fashion, overlaid upon other content, or placed non-contiguously or juxtaposed to other content.
  • In the same or different embodiments, operation 308 may include an operation 1208 whose logic specifies that the search result content is presented in an auxiliary window juxtaposed to the presented electronic content. The logic of operation 1208 may be performed, for example, by the auxiliary display generation module 256 provided by the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F. For example, the search result content may be presented in a separate window or frame to enable the user to see the original content alongside the auxiliary content (such as an advertisement).
  • FIG. 13A is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3. In some embodiments, the logic of operation 302 for receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system may include an operation 1301 whose logic specifies wherein the input device is at least one of a mouse, a touch sensitive display, a wireless device, a human body part, a microphone, a stylus, and/or a pointer. The logic of operation 1301 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect and resolve gesture input from, for example, devices 20*.
  • FIG. 13B is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3. In some embodiments, the logic of operation 302 for receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system may include an operation 1302 whose logic specifies wherein the user inputted gesture approximates a circle shape. The logic of operation 1302 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates a circle shape.
  • In the same or different embodiments, operation 302 may include an operation 1303 whose logic specifies that the user inputted gesture approximates an oval shape. The logic of operation 1303 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates an oval shape.
  • In the same or different embodiments, operation 302 may include an operation 1304 whose logic specifies that the user inputted gesture approximates a closed path. The logic of operation 1304 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates a closed path of points and/or line segments.
  • In the same or different embodiments, operation 302 may include an operation 1305 whose logic specifies that the user inputted gesture approximates a polygon. The logic of operation 1305 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is in a form that approximates a polygon.
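The shape tests of operations 1302-1305 could be approximated as in the following Python sketch. The thresholds and the classify_gesture name are illustrative assumptions; the patent does not disclose a particular detection algorithm, and real handlers would use more robust geometry.

```python
# Rough, assumed heuristic for classifying a gesture's point trail as a
# circle, oval, or other closed path/polygon (operations 1302-1305).
import math

def classify_gesture(points: list[tuple[float, float]]) -> str:
    xs, ys = zip(*points)
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    diag = math.hypot(width, height) or 1.0

    # Closed path: the trail ends near where it began, relative to its size.
    if math.dist(points[0], points[-1]) > 0.2 * diag:
        return "open path"

    # Distance from the centroid to each point reveals circularity.
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    radii = [math.dist((cx, cy), p) for p in points]
    mean_r = sum(radii) / len(radii)
    spread = (max(radii) - min(radii)) / (mean_r or 1.0)
    aspect = min(width, height) / (max(width, height) or 1.0)

    if spread < 0.2:
        return "circle"   # near-constant radius from the centroid
    if aspect < 0.8:
        return "oval"     # elongated, smoothly closed (rough heuristic)
    return "polygon or other closed path"

if __name__ == "__main__":
    circle = [(math.cos(2 * math.pi * i / 32), math.sin(2 * math.pi * i / 32))
              for i in range(33)]
    print(classify_gesture(circle))  # -> circle
```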
  • In the same or different embodiments, operation 302 may include an operation 1306 whose logic specifies that the user inputted gesture is an audio gesture. The logic of operation 1306 may be performed, for example, by the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received gesture is an audio gesture, such as one received via an audio input device, e.g., microphone 20 b.
  • In some embodiments, operation 1306 may further include an operation 1307 whose logic specifies that the audio gesture is a spoken word or phrase. The logic of operation 1307 may be performed, for example, by the audio handling module 222 provided by the gesture input detection and resolution module 121 in conjunction with the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect whether a received audio gesture, such as one received via microphone 20 b, indicates (e.g., designates or otherwise selects) a word or phrase indicating some portion of the presented content.
  • In the same or different embodiments, operation 1306 may include an operation 1308 whose logic specifies that the audio gesture is a direction. The logic of operation 1308 may be performed, for example, by the audio handling module 222 provided by the gesture input detection and resolution module 121 in conjunction with the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect a direction received from an audio input device, such as audio input device 20 b. The direction may be a single letter, number, word, phrase, or any type of instruction or indication of where to move a cursor or locator device.
  • In the same or different embodiments, operation 1306 may include an operation 1309 whose logic specifies that the audio gesture is received via at least one of a mouse, a touch sensitive display, a wireless device, a human body part, a microphone, a stylus, and/or a pointer. The logic of operation 1309 may be performed, for example, by the audio handling module 222 provided by the gesture input detection and resolution module 121 in conjunction with the specific device handlers 125 provided by the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B to detect and resolve audio gesture input from, for example, devices 20*.
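As a non-limiting illustration of the audio-gesture handling of operations 1306-1308, the sketch below dispatches a recognized utterance either as a spoken direction that moves a locator or as a spoken word/phrase that designates a portion of the presented content. The speech recognizer producing the utterance string is assumed to exist elsewhere, and the handle_audio_gesture name and return structure are illustrative assumptions.

```python
# Illustrative dispatch of a recognized audio gesture (not the patent's
# audio handling module 222).
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def handle_audio_gesture(utterance: str, content: str,
                         cursor: tuple[int, int]) -> dict:
    word = utterance.strip().lower()
    if word in DIRECTIONS:
        # Spoken direction: an instruction of where to move the locator.
        dx, dy = DIRECTIONS[word]
        return {"kind": "direction", "cursor": (cursor[0] + dx, cursor[1] + dy)}
    if word and word in content.lower():
        # Spoken word/phrase: designates a portion of the presented content.
        start = content.lower().index(word)
        return {"kind": "selection", "span": (start, start + len(word))}
    return {"kind": "unrecognized"}

print(handle_audio_gesture("left", "some presented content", (3, 3)))
print(handle_audio_gesture("presented", "some presented content", (0, 0)))
```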
  • FIG. 13C is an example flow diagram of example logic illustrating various example embodiments of block 302 of FIG. 3. In some embodiments, the logic of operation 302 for receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system may include an operation 1310 whose logic specifies wherein the presentation device is at least one of a browser, a mobile device, a hand-held device, embedded as part of the computing system, a remote display associated with the computing system, a speaker, or a Braille printer. The logic of operation 1310 may be performed, for example, by the specific device handlers 258 of the presentation module 115 of the GBSS 110 as described with reference to FIGS. 2A and 2F.
  • In the same or different embodiments, operation 302 may include an operation 1311 whose logic specifies that the presented electronic content is at least one of code, a web page, an electronic document, an electronic version of a paper document, an image, a video, an audio and/or any combination thereof. The logic of operation 1311 may be performed, for example, by one or more modules of the gesture input detection and resolution module 121 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B.
  • In the same or different embodiments, operation 302 may include an operation 1312 whose logic specifies that the computing system comprises at least one of a computer, notebook, tablet, wireless device, cellular phone, mobile device, hand-held device, and/or wired device. The logic of operation 1312 may be performed, for example, by the specific device handlers 125 of the input module 111 of the GBSS 110 as described with reference to FIGS. 2A and 2B.
  • FIG. 14 is an example flow diagram of example logic illustrating various example embodiments of blocks 302 to 308 of FIG. 3. In particular, the logic of the operations 302 to 308 may further include logic 1402 that specifies that the entire method is performed by a client. As described earlier, a client may be hardware, software, or firmware, physical or virtual, and may be part or the whole of a computing system. A client may be an application or a device.
  • In the same or different embodiments, the logic of the operations 302 to 308 may further include logic 1403 that specifies that the entire method is performed by a server. As described earlier, a server may be hardware, software, or firmware, physical or virtual, and may be part or the whole of a computing system. A server may be a service as well as a system.
  • FIG. 15 is an example block diagram of a computing system for practicing embodiments of a Gesture Based Search System as described herein. Note that a general purpose or a special purpose computing system suitably instructed may be used to implement a GBSS, such as GBSS 110 of FIG. 1D.
  • Further, the GBSS may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein.
  • The computing system 100 may comprise one or more server and/or client computing systems and may span distributed locations. In addition, each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks. Moreover, the various blocks of the GBSS 110 may physically reside on one or more machines, which use standard (e.g., TCP/IP) or proprietary interprocess communication mechanisms to communicate with each other.
  • In the embodiment shown, computer system 100 comprises a computer memory ("memory") 101, a display 1502, one or more Central Processing Units ("CPU") 1503, Input/Output devices 1504 (e.g., keyboard, mouse, CRT or LCD display, etc.), other computer-readable media 1505, and one or more network connections 1506. The GBSS 110 is shown residing in memory 101. In other embodiments, some portion of the contents, and some or all of the components, of the GBSS 110 may be stored on and/or transmitted over the other computer-readable media 1505. The components of the GBSS 110 preferably execute on one or more CPUs 1503 and manage providing automatic navigation to auxiliary content, as described herein. Other code or programs 1530 and potentially other data stores, such as data repository 1520, also reside in the memory 101, and preferably execute on one or more CPUs 1503. Of note, one or more of the components in FIG. 15 may not be present in any specific implementation. For example, some embodiments embedded in other software may not provide means for user input or display.
  • In a typical embodiment, the GBSS 110 includes one or more input modules 111, one or more source input determination modules 112, one or more factor determination modules 113, one or more automated search modules 114, and one or more presentation modules 115. In at least some embodiments, some data is provided external to the GBSS 110 and is available, potentially, over one or more networks 30. Other and/or different modules may be implemented. In addition, the GBSS 110 may interact via a network 30 with application or client code 1555 that can absorb search results for other purposes, with one or more client computing systems or client devices 20*, and/or with one or more third-party content provider systems 1565, such as third-party advertising systems or other purveyors of auxiliary content. Also, of note, the history data repository 1515 may be provided external to the GBSS 110 as well, for example in a knowledge base accessible over one or more networks 30.
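The module composition just described might be wired together as in the following schematic Python sketch. The class and method names beyond the module labels above are assumptions, and each stage is stubbed rather than implementing the inference and search logic of the GBSS 110.

```python
# Schematic, assumed wiring of the GBSS pipeline:
# input module 111 -> source input determination module 112 (with factor
# determination module 113) -> automated search module 114 -> presentation
# module 115.
class GBSS:
    def __init__(self, search_fn):
        self.search_fn = search_fn  # stands in for automated search module 114

    def handle_gesture(self, gesture: dict, presented_content: str) -> dict:
        portion = self.resolve_gesture(gesture, presented_content)    # module 111
        factors = self.determine_factors(gesture)                     # module 113
        source_input = self.determine_source_input(portion, factors)  # module 112
        results = self.search_fn(source_input)                        # module 114
        return self.present(results, presented_content)               # module 115

    def resolve_gesture(self, gesture: dict, content: str) -> str:
        start, end = gesture["span"]  # portion indicated by the gesture
        return content[start:end]

    def determine_factors(self, gesture: dict) -> dict:
        return {"gesture_size": gesture.get("size", 0)}  # one factor of many

    def determine_source_input(self, portion: str, factors: dict) -> str:
        return portion.strip()        # inference logic stubbed out

    def present(self, results: list, content: str) -> dict:
        return {"overlay": results, "under": content}

gbss = GBSS(search_fn=lambda q: [f"result for {q!r}"])
print(gbss.handle_gesture({"span": (5, 12), "size": 40}, "find gesture search"))
```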
  • In an example embodiment, components/modules of the GBSS 110 are implemented using standard programming techniques. A range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to, object-oriented (e.g., Java, C++, C#, Smalltalk, etc.), functional (e.g., ML, Lisp, Scheme, etc.), procedural (e.g., C, Pascal, Ada, Modula, etc.), scripting (e.g., Perl, Ruby, Python, JavaScript, VBScript, etc.), and declarative (e.g., SQL, Prolog, etc.).
  • The embodiments described above may also use well-known or proprietary synchronous or asynchronous client-server computing techniques. However, the various components may be implemented using more monolithic programming techniques as well, for example, as an executable running on a single CPU computer system, or alternately decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, running on one or more computer systems each having one or more CPUs. Some embodiments are illustrated as executing concurrently and asynchronously and communicating using message passing techniques. Equivalent synchronous embodiments are also supported by a GBSS implementation.
  • In addition, programming interfaces to the data stored as part of the GBSS 110 (e.g., in the data repositories 1515 and 41) can be available by standard means such as through C, C++, C#, Visual Basic.NET and Java APIs; libraries for accessing files, databases, or other data repositories; through markup languages such as XML; or through Web servers, FTP servers, or other types of servers providing access to stored data. The repositories 1515 and 41 may be implemented as one or more database systems, file systems, or any other method known in the art for storing such information, or any combination of the above, including implementation using distributed computing techniques.
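One such standard means of programmatic access might look like the following sketch, in which an in-memory SQLite table stands in for a store such as the history data repository 1515. The schema and function names are illustrative assumptions only.

```python
# Illustrative database-backed access to a GBSS data store; a file path in
# place of ":memory:" would make the repository persistent.
import sqlite3

def open_history_repository() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE IF NOT EXISTS search_history (
                        user_id TEXT, source_input TEXT, ts REAL)""")
    return conn

def record_search(conn, user_id: str, source_input: str, ts: float) -> None:
    conn.execute("INSERT INTO search_history VALUES (?, ?, ?)",
                 (user_id, source_input, ts))

def prior_searches(conn, user_id: str) -> list[str]:
    # Prior search history is one of the user-associated factors named above.
    rows = conn.execute(
        "SELECT source_input FROM search_history WHERE user_id = ? ORDER BY ts",
        (user_id,))
    return [r[0] for r in rows]

conn = open_history_repository()
record_search(conn, "u1", "better running shoes", 1.0)
print(prior_searches(conn, "u1"))
```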
  • Also, the example GBSS 110 may be implemented in a distributed environment comprising multiple, even heterogeneous, computer systems and networks. Different configurations and locations of programs and data are contemplated for use with the techniques described herein. In addition, the server and/or client components may be physical or virtual computing systems and may reside on the same physical system. Also, one or more of the modules may themselves be distributed, pooled or otherwise grouped, such as for load balancing, reliability or security reasons. A variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner, including but not limited to TCP/IP sockets, RPC, RMI, HTTP, and Web Services (XML-RPC, JAX-RPC, SOAP, etc.). Other variations are possible. Also, other functionality could be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve the functions of a GBSS.
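Since XML-RPC is among the distribution mechanisms named above, the following minimal sketch shows how one GBSS operation could be exposed as a Web Service using Python's standard library. The endpoint name, port, and stubbed search are assumptions, not part of the disclosure.

```python
# Illustrative XML-RPC exposure of a single GBSS operation.
from xmlrpc.server import SimpleXMLRPCServer

def initiate_search(source_input: str) -> list:
    # Stub standing in for automated search module 114.
    return [f"result for {source_input!r}"]

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("localhost", 8900), allow_none=True)
    server.register_function(initiate_search, "initiate_search")
    print("GBSS search endpoint on http://localhost:8900 (Ctrl+C to stop)")
    server.serve_forever()
```

A client elsewhere on the network could then call, for example, `xmlrpc.client.ServerProxy("http://localhost:8900").initiate_search("query")`.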
  • Furthermore, in some embodiments, some or all of the components of the GBSS 110 may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers (including microcontrollers and/or embedded controllers) executing appropriate instructions, field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and the like. Some or all of the system components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium (e.g., a hard disk; memory; network; other computer-readable medium; or other portable media article to be read by an appropriate drive or via an appropriate connection, such as a DVD or flash memory device) to enable the computer-readable medium to execute or otherwise use or provide the contents to perform at least some of the described techniques. Some or all of the components and/or data structures may be stored on tangible, non-transitory storage mediums. Some or all of the system components and data structures may also be stored as data signals (e.g., by being encoded as part of a carrier wave or included as part of an analog or digital propagated signal) on a variety of computer-readable transmission mediums, which are then transmitted, including across wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, embodiments of this disclosure may be practiced with other computer system configurations.
  • All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet, are incorporated herein by reference, in their entireties.
  • From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the claims. For example, the methods and systems for performing automatic navigation to auxiliary content discussed herein are applicable to architectures other than a windowed or client-server architecture. Also, the methods and systems discussed herein are applicable to differing protocols, communication media (optical, wireless, cable, etc.) and devices (such as wireless handsets, electronic organizers, personal digital assistants, tablets, portable email machines, game machines, pagers, navigation devices such as GPS receivers, etc.).

Claims (65)

1. A method in a computing system for automatically initiating a search, comprising:
receiving, from an input device capable of providing gesture input, an indication of a user inputted gesture that corresponds to an indicated portion of electronic content presented via a presentation device associated with the computing system;
determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search;
automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content; and
presenting the search result content in conjunction with the corresponding presented electronic content.
2. The method of claim 1 wherein the indicated source input comprises at least one of a word, a phrase, an utterance, an image, a video, a pattern, or an audio signal.
3.-4. (canceled)
5. The method of claim 1 wherein the content contained within the indicated portion of electronic content includes an audio portion.
6. The method of claim 1 wherein the content contained within the indicated portion of electronic content includes at least a word or a phrase.
7. The method of claim 1 wherein the content contained within the indicated portion of electronic content includes at least a graphical object, image, and/or icon.
8. The method of claim 1 wherein the content contained within the indicated portion of electronic content includes an utterance.
9. The method of claim 1 wherein the content contained within the indicated portion of electronic content comprises non-contiguous parts or contiguous parts.
10. The method of claim 1 wherein the content contained within the indicated portion of electronic content is determined using syntactic and/or semantic rules.
11. The method of claim 1 wherein the set of factors are associated with weights that are taken into consideration in determining the indication of source input.
12. The method of claim 1 wherein the set of factors includes context of other text, audio, graphics, and/or objects within the presented electronic content.
13. The method of claim 1 wherein the set of factors includes an attribute of the gesture.
14. The method of claim 13 wherein the attribute of the gesture is at least one of a size of the gesture, a direction of the gesture, a color, and/or a measure of steering of the gesture.
15.-20. (canceled)
21. The method of claim 1 wherein the set of factors includes presentation device capabilities.
22.-23. (canceled)
24. The method of claim 1 wherein the set of factors includes at least one of prior device communication history, time of day, and/or prior history associated with the user.
25.-26. (canceled)
27. The method of claim 24 wherein the prior history associated with the user includes at least one of prior search history, prior navigation history, prior purchase history, and/or demographic information associated with the user.
28.-31. (canceled)
32. The method of claim 1 wherein the set of factors includes a received selection from a context menu.
33. The method of claim 32 wherein the context menu includes a plurality of actions and/or entities derived from a set of rules used to convert one or more nouns that relate to the indicated portion into corresponding verbs.
34. (canceled)
35. The method of claim 32 wherein the context menu includes actions that specify some form of buying or shopping, sharing, and/or exploring or obtaining information.
36. The method of claim 32 wherein the context menu includes an action to find, to share, and/or to obtain information about a better <entity>, wherein <entity> is an entity encompassed by the indicated portion of the presented electronic content.
37.-38. (canceled)
39. The method of claim 32 wherein the context menu includes one or more comparative actions.
40. The method of claim 39 wherein the comparative actions of the context menu include at least one of an action to obtain an entity sooner, an action to purchase an entity sooner, or an action to find a better deal.
41. The method of claim 34 wherein the context menu is presented as at least one of a pop-up menu, an interest wheel, a rectangular shaped user interface element, or a non-rectangular shaped user interface element.
42. The method of claim 1 wherein determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search further comprises:
disambiguating possible source input by presenting one or more indicators of possible source input and receiving a selected indicator to one of the presented one or more indicators of possible source input to determine the indication of source input for the search.
43.-44. (canceled)
45. The method of claim 1 wherein determining by inference, based upon content contained within the indicated portion of the presented electronic content and a set of factors, an indication of source input for the search further comprises:
disambiguating possible source input utilizing syntactic and/or semantic rules to aid in determining the source input for the search.
46. The method of claim 1, wherein the search result content comprises content that corresponds to a plurality of source inputs.
47. The method of claim 1 wherein the indicated source input is associated with a persistent state and/or a purchase.
48. The method of claim 47 wherein the persistent state is a uniform resource identifier.
49. (canceled)
50. The method of claim 1 wherein the designated body of electronic content is any page or object accessible over a network.
51.-52. (canceled)
53. The method of claim 1, the automatically initiating a search of a designated body of electronic content using the indicated source input to obtain search result content further comprising automatically initiating a search of the designated body of electronic content using an off-the-shelf search engine and/or a keyword search engine.
54. (canceled)
55. The method of claim 1 wherein the search result content includes an opportunity for commercialization.
56. The method of claim 55 wherein the opportunity for commercialization is an advertisement.
57. The method of claim 56 wherein the advertisement is provided by at least one of: an entity separate from the entity that provided the presented electronic content; a competitor entity; and/or an entity associated with the presented electronic content.
58. (canceled)
59. The method of claim 55 wherein the advertisement is at least one of interactive entertainment, a role-playing game, a computer-assisted competition and/or a bidding opportunity, and/or a purchase and/or an offer.
60.-62. (canceled)
63. The method of claim 62 wherein the purchase and/or an offer is for at least one of: information, an item for sale, a service for offer and/or a service for sale, a prior purchase of the user, and/or a current purchase.
64. The method of claim 62 wherein the purchase and/or an offer is a purchase of an entity that is part of a social network of the user.
65. The method of claim 1 wherein the search result includes supplemental information to the presented electronic content.
66. The method of claim 1 wherein the search result is at least one of a web page, an electronic document, and/or an electronic version of a paper document.
67. The method of claim 1 wherein the search result content is presented as an overlay on top of the presented electronic content.
68. (canceled)
69. The method of claim 67 wherein the overlay is made visible by causing a pane to appear as though the pane is caused to slide from one side of the presentation device onto the presented electronic content.
70. The method of claim 1 wherein the search result content is presented in an auxiliary window, pane, frame, or other auxiliary display construct.
71. (canceled)
72. The method of claim 1 wherein the input device is at least one of a mouse, a touch sensitive display, a wireless device, a human body part, a microphone, a stylus, and/or a pointer.
73. The method of claim 1 wherein the user inputted gesture approximates at least one of a circle shape, an oval shape, a closed path, and/or a polygon.
74.-76. (canceled)
77. The method of claim 1 wherein the user inputted gesture is an audio gesture.
78.-80. (canceled)
81. The method of claim 1 wherein the presentation device is at least one of a browser, a mobile device, a hand-held device, embedded as part of the computing system, a remote display associated with the computing system, a speaker, or a Braille printer.
82. The method of claim 1 wherein the presented electronic content is at least one of code, a web page, an electronic document, an electronic version of a paper document, an image, a video, an audio and/or any combination thereof.
83. The method of claim 1 wherein the computing system comprises at least one of a computer, notebook, tablet, wireless device, cellular phone, mobile device, hand-held device, and/or wired device.
84. The method of claim 1 performed by a client or by a server.
85.-226. (canceled)
US13/284,673 2011-09-30 2011-10-28 Gesture based search system Abandoned US20130085848A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US13/284,673 US20130085848A1 (en) 2011-09-30 2011-10-28 Gesture based search system
US13/284,688 US20130085855A1 (en) 2011-09-30 2011-10-28 Gesture based navigation system
US13/330,371 US20130086499A1 (en) 2011-09-30 2011-12-19 Presenting auxiliary content in a gesture-based system
US13/361,126 US20130085849A1 (en) 2011-09-30 2012-01-30 Presenting opportunities for commercialization in a gesture-based user interface
US13/595,827 US20130117130A1 (en) 2011-09-30 2012-08-27 Offering of occasions for commercial opportunities in a gesture-based user interface
US13/598,475 US20130117105A1 (en) 2011-09-30 2012-08-29 Analyzing and distributing browsing futures in a gesture based user interface
US13/601,910 US20130117111A1 (en) 2011-09-30 2012-08-31 Commercialization opportunities for informational searching in a gesture-based user interface

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US13/251,046 US20130085843A1 (en) 2011-09-30 2011-09-30 Gesture based navigation to auxiliary content
US13/269,466 US20130085847A1 (en) 2011-09-30 2011-10-07 Persistent gesturelets
US13/278,680 US20130086056A1 (en) 2011-09-30 2011-10-21 Gesture based context menus
US13/284,673 US20130085848A1 (en) 2011-09-30 2011-10-28 Gesture based search system

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US13/251,046 Continuation-In-Part US20130085843A1 (en) 2011-09-30 2011-09-30 Gesture based navigation to auxiliary content
US13/284,688 Continuation-In-Part US20130085855A1 (en) 2011-09-30 2011-10-28 Gesture based navigation system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/278,680 Continuation-In-Part US20130086056A1 (en) 2011-09-30 2011-10-21 Gesture based context menus

Publications (1)

Publication Number Publication Date
US20130085848A1 true US20130085848A1 (en) 2013-04-04

Family

ID=47993474

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/284,673 Abandoned US20130085848A1 (en) 2011-09-30 2011-10-28 Gesture based search system

Country Status (1)

Country Link
US (1) US20130085848A1 (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5564005A (en) * 1993-10-15 1996-10-08 Xerox Corporation Interactive system for producing, storing and retrieving information correlated with a recording of an event
US6115482A (en) * 1996-02-13 2000-09-05 Ascent Technology, Inc. Voice-output reading system with gesture-based navigation
US6396473B1 (en) * 1999-04-22 2002-05-28 Webtv Networks, Inc. Overlay graphics memory management method and apparatus
US20020027570A1 (en) * 2000-09-01 2002-03-07 Tetsuyuki Muto Scheme for posting advertisements on comprehensive information viewing device
US20040054701A1 (en) * 2002-03-01 2004-03-18 Garst Peter F. Modeless gesture driven editor for handwritten mathematical expressions
US20070027749A1 (en) * 2005-07-27 2007-02-01 Hewlett-Packard Development Company, L.P. Advertisement detection
US20070073722A1 (en) * 2005-09-14 2007-03-29 Jorey Ramer Calculation and presentation of mobile content expected value
US20100045705A1 (en) * 2006-03-30 2010-02-25 Roel Vertegaal Interaction techniques for flexible displays
US20090012841A1 (en) * 2007-01-05 2009-01-08 Yahoo! Inc. Event communication platform for mobile device users
US20090228817A1 (en) * 2008-03-10 2009-09-10 Randy Adams Systems and methods for displaying a search result
US20090262069A1 (en) * 2008-04-22 2009-10-22 Opentv, Inc. Gesture signatures
US20090271256A1 (en) * 2008-04-25 2009-10-29 John Toebes Advertisement campaign system using socially collaborative filtering
US20090319181A1 (en) * 2008-06-20 2009-12-24 Microsoft Corporation Data services based on gesture and location information of device
US20110213655A1 (en) * 2009-01-24 2011-09-01 Kontera Technologies, Inc. Hybrid contextual advertising and related content analysis and display techniques
US20120044179A1 (en) * 2010-08-17 2012-02-23 Google, Inc. Touch-based gesture detection for a touch-sensitive device
US20120197857A1 (en) * 2011-01-31 2012-08-02 Microsoft Corporation Gesture-based search

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Trademark Electronic Search System (TESS), BING, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), GOOGLE, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), JAVA, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), JAVASCRIPT, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), ML, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), PERL, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), RUBY, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), WIKIPEDIA, 8 March 2013, United States Patent and Trademark Office *
Trademark Electronic Search System (TESS), YAHOO!, 8 March 2013, United States Patent and Trademark Office *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9449107B2 (en) 2009-12-18 2016-09-20 Captimo, Inc. Method and system for gesture based searching
US20110176788A1 (en) * 2009-12-18 2011-07-21 Bliss John Stuart Method and System for Associating an Object to a Moment in Time in a Digital Video
US8724963B2 (en) 2009-12-18 2014-05-13 Captimo, Inc. Method and system for gesture based searching
US20110158605A1 (en) * 2009-12-18 2011-06-30 Bliss John Stuart Method and system for associating an object to a moment in time in a digital video
US20130167085A1 (en) * 2011-06-06 2013-06-27 Nfluence Media, Inc. Consumer self-profiling gui, analysis and rapid information presentation tools
US9898756B2 (en) 2011-06-06 2018-02-20 autoGraph, Inc. Method and apparatus for displaying ads directed to personas having associated characteristics
US9619567B2 (en) * 2011-06-06 2017-04-11 Nfluence Media, Inc. Consumer self-profiling GUI, analysis and rapid information presentation tools
US10482501B2 (en) 2011-06-06 2019-11-19 autoGraph, Inc. Method and apparatus for displaying ads directed to personas having associated characteristics
US10019730B2 (en) 2012-08-15 2018-07-10 autoGraph, Inc. Reverse brand sorting tools for interest-graph driven personalization
US8868598B2 (en) * 2012-08-15 2014-10-21 Microsoft Corporation Smart user-centric information aggregation
US10540515B2 (en) 2012-11-09 2020-01-21 autoGraph, Inc. Consumer and brand owner data management tools and consumer privacy tools
US20140172892A1 (en) * 2012-12-18 2014-06-19 Microsoft Corporation Queryless search based on context
US9483518B2 (en) * 2012-12-18 2016-11-01 Microsoft Technology Licensing, Llc Queryless search based on context
US9977835B2 (en) 2012-12-18 2018-05-22 Microsoft Technology Licensing, Llc Queryless search based on context
US10649619B2 (en) * 2013-02-21 2020-05-12 Oath Inc. System and method of using context in selecting a response to user device interaction
US20140237425A1 (en) * 2013-02-21 2014-08-21 Yahoo! Inc. System and method of using context in selecting a response to user device interaction
US9348979B2 (en) 2013-05-16 2016-05-24 autoGraph, Inc. Privacy sensitive persona management tools
US9875490B2 (en) 2013-05-16 2018-01-23 autoGraph, Inc. Privacy sensitive persona management tools
US10346883B2 (en) 2013-05-16 2019-07-09 autoGraph, Inc. Privacy sensitive persona management tools
US9672287B2 (en) 2013-12-26 2017-06-06 Thomson Licensing Method and apparatus for gesture-based searching
US10838538B2 (en) 2013-12-26 2020-11-17 Interdigital Madison Patent Holdings, Sas Method and apparatus for gesture-based searching
US10470021B2 (en) 2014-03-28 2019-11-05 autoGraph, Inc. Beacon based privacy centric network communication, sharing, relevancy tools and other tools
US10628848B2 (en) * 2014-05-15 2020-04-21 Oath Inc. Entity sponsorship within a modular search object framework
US20150332322A1 (en) * 2014-05-15 2015-11-19 Yahoo! Inc. Entity sponsorship within a modular search object framework
US10810357B1 (en) * 2014-10-15 2020-10-20 Slickjump, Inc. System and method for selection of meaningful page elements with imprecise coordinate selection for relevant information identification and browsing
US20160154777A1 (en) * 2014-12-01 2016-06-02 Samsung Electronics Co., Ltd. Device and method for outputting response
CN105183306A (en) * 2015-06-12 2015-12-23 广东小天才科技有限公司 Screenshot method and screenshot device for displayed content in mobile terminal
US11074280B2 (en) * 2017-05-18 2021-07-27 Aiqudo, Inc Cluster based search and recommendation method to rapidly on-board commands in personal assistants

Similar Documents

Publication Publication Date Title
US20130085848A1 (en) Gesture based search system
US20130085855A1 (en) Gesture based navigation system
US20130086056A1 (en) Gesture based context menus
US20130086499A1 (en) Presenting auxiliary content in a gesture-based system
US11776018B2 (en) Universal ad creative
US20130085847A1 (en) Persistent gesturelets
US20130117105A1 (en) Analyzing and distributing browsing futures in a gesture based user interface
US20130117130A1 (en) Offering of occasions for commercial opportunities in a gesture-based user interface
US9760541B2 (en) Systems and methods for delivery techniques of contextualized services on mobile devices
US9977835B2 (en) Queryless search based on context
US20130117111A1 (en) Commercialization opportunities for informational searching in a gesture-based user interface
US20130085843A1 (en) Gesture based navigation to auxiliary content
US10152730B2 (en) Systems and methods for advertising using sponsored verbs and contexts
US20130085849A1 (en) Presenting opportunities for commercialization in a gesture-based user interface
US9607055B2 (en) System and method for dynamically retrieving data specific to a region of a layer
TWI573042B (en) Gesture-based tagging to view related content
US9830388B2 (en) Modular search object framework
US11016964B1 (en) Intent determinations for content search
US9460167B2 (en) Transition from first search results environment to second search results environment
WO2014153086A2 (en) Serving advertisements for search preview based on user intents
US20150317319A1 (en) Enhanced search results associated with a modular search object framework
JP2023515158A (en) Interface and mode selection for digital action execution
US10628848B2 (en) Entity sponsorship within a modular search object framework
WO2014110048A1 (en) Browser interface for accessing supplemental content associated with content pages
Ahmed From Clicks to Conversations: The Transition to Chat Interfaces

Legal Events

Date Code Title Description
AS Assignment

Owner name: ELWHA LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DYOR, MATTHEW G.;LEVIEN, ROYCE A.;LORD, RICHARD T.;AND OTHERS;SIGNING DATES FROM 20120113 TO 20120207;REEL/FRAME:027827/0859

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION