US20100235786A1 - Enhanced 3d interfacing for remote devices - Google Patents
- Publication number
- US20100235786A1 (application number US12/721,582)
- Authority
- US
- United States
- Prior art keywords
- gesture
- user
- display screen
- user interface
- arc
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Definitions
- This invention relates generally to user interfaces for computerized systems and specifically to user interfaces with three-dimensional characteristics.
- Common tactile interface devices include the computer keyboard, mouse and joystick.
- Touch screens detect the presence and location of a touch by a finger or other object within the display area.
- Infrared remote controls are widely used, and “wearable” hardware devices have been developed, as well, for purposes of remote control.
- the gestures are recognized based on the shape of the body part and its position and orientation over an interval.
- the gesture is classified for determining an input into a related electronic device.
- U.S. Pat. No. 7,348,963 whose disclosure is incorporated herein by reference, describes an interactive video display system, in which a display screen displays a visual image, and a camera captures three-dimensional information regarding an object in an interactive area located in front of the display screen.
- a computer system directs the display screen to change the visual image in response to the object.
- An embodiment of the invention provides a method for operating a computerized system, which is carried out by presenting user interface elements on a display screen of the computerized system and detecting a first gesture made in a three-dimensional space by a distal portion of an upper extremity of a user while a segment of the distal portion thereof rests on a surface.
- In response to the first gesture, an area of the display screen selected by the user is identified, and a corresponding user interface element is displayed.
- After displaying the corresponding user interface element, a second gesture made by the distal portion while the segment continues to rest on the surface is detected so as to select one of the user interface elements that appears in the selected area.
- the method further includes mapping an operation to the corresponding user interface element, wherein the second gesture causes the operation to be performed.
- the method further includes mapping a three-dimensional location of the distal portion of the upper extremity to two parameters of a two-dimensional parametric surface, which is a section of a sphere that corresponds to possible locations of the distal portion of the upper extremity while the segment of the distal portion rests on a surface, and mapping the two parameters to corresponding parameters in a planar two-dimensional coordinate system of the display screen where the user interface elements are located.
- the first gesture describes a first arc-like movement forming at least a portion of a horizontal circular arc mapped to the section of the sphere as a pointing command
- the second gesture includes a second arc-like movement mapped to the section of the sphere as a selection command.
- the pointing command includes pointing to a letter on the display screen and the selection command includes inputting the letter to the system.
- a threshold is defined; a subset of letters is displayed when a magnitude of the arc-like movement is less than the threshold, and the letter is input when the magnitude of the arc-like movement is greater than the threshold.
- the subset of letters is shifted on the display screen using a language model to determine a probability of a preferred letter, so that inputting the letter can be performed in a single continuous motion with high probability.
- the segment of the distal portion may include an elbow, a wrist, or a forearm.
- An embodiment of the invention provides a method for operating a computerized system, which is carried out by presenting user interface elements on a display screen of the computerized system and detecting a first gesture made in a three-dimensional space by a part of a body of a user.
- An area of the display screen selected by the user is identified responsively to the first gesture, and a magnification level of one or more of the user interface elements appearing in the selected area on the display screen is increased.
- a second gesture made by the part of the body of the user is detected so as to select one of the user interface elements that appear in the selected area.
- a third gesture made by the part of the body is detected, and the magnification level is decreased in response to the third gesture.
- the first and the third gesture include circular motions of a hand of the user in opposite, respective directions.
- detecting the second gesture includes actuating a shortcut on the display in response to the second gesture.
- selecting the magnified alphanumeric symbol includes adding the selected magnified alphanumeric symbol to a word spelled on the display screen, wherein the method includes detecting a third gesture made by the part of the body, opposite to the second gesture, and removing one or more symbols from the word in response to the third gesture.
- the first gesture includes a three-dimensional movement by the part of the body of the user.
- presenting user interface elements includes displaying a plurality of symbols arranged in at least one arc.
- displaying the plurality of symbols includes presenting a set of symbols in a first arc, and increasing the magnification level includes presenting a magnified subset of the set of symbols in a second arc adjacent to the first arc.
- detecting the first gesture includes detecting an arcuate movement of a hand of the user, and associating the arcuate movement with the plurality of symbols in the at least one arc.
- presenting user interface elements includes presenting a sequence of textual characters, and increasing the magnification level includes displaying further characters for addition to the sequence using a language model to select the further characters.
- detecting the first gesture includes scrolling forward or backward along the sequence responsively to first movements of a hand of the user in first and second directions along the sequence
- detecting the second gesture includes selecting the further characters for addition to the sequence in response to second movements of the hand in at least a third direction perpendicular to the first and second directions.
- An embodiment of the invention provides a computer software product for operating a computer system, including a sensing device, which is configured to detect at least a part of a body of a user, a display screen, which is configured to present user interface elements, and a processor, which is coupled to the sensing device so as to detect a first gesture made in a three-dimensional space by the part of the body.
- the processor is additionally configured to identify an area of the display screen selected by the user in response to the first gesture, and to increase a magnification level of one or more of the user interface elements appearing in the selected area on the display screen, and after increasing the magnification level, to detect a second gesture made by the part of the body so as to select one of the user interface elements that appears in the selected area.
- An embodiment of the invention provides a computer software product for operating a computerized system, including a computer storage medium in which computer program instructions are stored, which instructions, when executed by a computer, cause the computer to present user interface elements on a display screen of the computerized system, to detect a first gesture made in a three-dimensional space by a part of a body of a user, to identify an area of the display screen selected by the user in response to the first gesture, and to increase a magnification level of one or more of the user interface elements appearing in the selected area on the display screen. After increasing the magnification level, the instructions cause the computer to detect a second gesture made by the part of the body of the user so as to select one of the user interface elements that appears in the selected area.
- a method for operating a computerized system including the steps of presenting user interface elements on a display screen of the computerized system and detecting a gesture made in a three-dimensional space by a part of a body of a user. While the user performs the gesture, one or more of the user interface elements on the display screen are continuously modified responsively to a direction of the gesture.
- continuously modifying the one or more of the user interface elements includes increasing or decreasing a magnification level of at least one of the user interface elements, typically by zooming in on a user interface element toward which the gesture is directed.
- presenting the user interface elements includes presenting a sequence of textual characters, which is continuously modified by adding characters to the sequence while scrolling over the sequence responsively to the gesture.
- Adding the characters typically includes presenting choices of further characters to add to the sequence, using a language model to determine the choices, and selecting at least one of the choices responsively to the gesture.
- Presenting the choices may include determining, based on the language model, a respective likelihood of correctness of each of the choices, and displaying the choices so that an effort required by the user to select a given choice is a decreasing function of the likelihood.
- continuously modifying the one or more of the user interface elements may include scrolling forward or backward along the sequence responsively to first movements of a hand of the user in first and second directions along the sequence, and selecting further characters for addition to the sequence responsively to second movements of the hand in at least a third direction perpendicular to the first and second directions.
- FIG. 1 is a schematic, pictorial illustration of a system for remote gesture-mediated information input, in accordance with an embodiment of the present invention
- FIG. 2 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention
- FIG. 3 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention
- FIG. 4 is a schematic, pictorial illustration of a system for remote information input, in accordance with an embodiment of the present invention.
- FIG. 5 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention
- FIG. 6 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention.
- FIG. 7 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention.
- FIG. 8 is a flow chart of a method for remotely interfacing with a computer system, in accordance with an embodiment of the present invention.
- Embodiments of the present invention that are described hereinbelow provide improved methods, products and systems for providing remote input to an electronic device. When a user interacts remotely with a device that requires accurate input and navigation decisions, these embodiments provide an intuitive and streamlined interface.
- Remote input may be provided for interaction with a remote device such as a gaming console, an interactive television, a computerized cellular phone, or a computer.
- the term “remote device” herein refers to any remotely governable device containing a processing unit.
- a sensing device may be used to detect a virtual control, such as a virtual keyboard.
- the sensing device is typically a three-dimensional camera that detects information that includes the position of a body (or at least parts of the body) of the user or other tangible entities wielded or operated by the user for interacting with a computer application running on the remote device, all of which are sometimes referred to herein for convenience as “control entities”.
- the sensing device detects the presence and changes of position of a control entity, i.e. its speed and direction.
- the remote device interprets movements detected by the sensing device as described hereinbelow.
- the virtual control is positioned in front of the user, usually between the user and a display.
- the remote device presents user interface elements on the display.
- the sensing device detects the movements of the control entity in a three-dimensional space, such as a user's hand manipulating the virtual control, and translates them into commands for the remote device. For example, movement of the control entity using a circular gesture may be interpreted by the remote device as a command to adjust a magnification (or zoom) level of a remote information input interface comprising the user interface elements on the display.
- magnification in this context is not limited to simple visual magnification: New details or relevant potentially selected options may be exposed.
- In some gestures, e.g., the circular gesture, the distance between the control entity and the sensing device varies as the gesture is performed. Movements in which the distance between the control entity and the sensing device remains substantially constant are classified as either “1-dimensional” or “2-dimensional movements”. Performing a clockwise circular gesture with the control entity may command the remote device to increase the magnification level of the display, whereas a counter-clockwise circular movement may cause minification.
- clockwise and counterclockwise are used arbitrarily herein to distinguish two meaningful gestures. These terms have no necessary physical meanings with respect to the actual configuration of the embodiments.
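- As an illustration of how such a circular gesture might be classified, the following Python sketch estimates the rotation sense of a sensed hand trajectory with the shoelace formula and nudges a zoom level accordingly; the function names, step size, and the 50%-500% bounds (borrowed from the example given later) are assumptions, not the patent's implementation.

```python
import math

def signed_area(points):
    """Shoelace formula over (x, y) hand positions; positive means counter-clockwise
    in a y-up frame (the text notes the labels themselves are arbitrary)."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2.0

def apply_circular_zoom(points, zoom, step=10.0, lo=50.0, hi=500.0):
    """Increase the zoom for one rotation sense, decrease it for the other."""
    zoom += step if signed_area(points) < 0 else -step
    return min(hi, max(lo, zoom))

# Example: a clockwise circle (in a y-up frame) raises the zoom from 100% to 110%.
circle = [(math.cos(-2 * math.pi * k / 16), math.sin(-2 * math.pi * k / 16))
          for k in range(16)]
print(apply_circular_zoom(circle, 100.0))   # -> 110.0
```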
- the remote device causes the interface to zoom in and out, with a potentially infinite zoom range.
- the actual distance between the user and the remote device remains largely constant.
- the result of this arrangement is that the interface is essentially three-dimensional, and the elements of the interface may be continuously regrouped in three-dimensional space so as to move toward or away from the user as required, depending on the direction of the user's gesture.
- the display may zoom in on a user interface element toward which a gesture is directed. This zoom may continue until the user's hand reaches the virtual location of the user element in the three-dimensional space, whereupon the element is selected (with or without an additional selection gesture).
- the virtual control may be a virtual keyboard for remote information input, such as remote text input.
- the display presents a remote information input interface representing the virtual keyboard.
- the symbol positions, e.g., the relative positions of the letters in the virtual keyboard, may be held constant.
- the relative size of the symbols may be varied on the display in response to movements by the control entity.
- the distance of the control entity from the sensing unit and the relative motion of the control entity with respect to the sensing unit may be interpreted as a manipulation of the virtual control, and hence as commands to the remote device as described hereinbelow.
- the virtual control may be interpreted by the remote device as being constructed or oriented so as to accommodate right- or left-handed operation, as the case may be.
- the positions of the symbols in the virtual keyboard may be held constant, while the relative size of the symbols may be varied to reflect expected relevance or likelihood of each symbol being a suggested next symbol for remote input, e.g., based upon context awareness.
- Movement of the control entity forward towards the sensing unit may cause a selected symbol, e.g., a letter to be input. Movement of the control entity away from the sensing unit may cause the previously input symbol to be erased. Selection is typically performed in two stages. A first selection stage may be used to determine a specific group of symbols. A second selection stage is used to input an individual symbol from the specific group of symbols. In some embodiments, the first selection stage is made by an arc-like movement of the control entity, e.g., clockwise or counter-clockwise, followed by a slight movement towards the sensing unit.
- a threshold which may be fixed or adaptive, is defined to enable the remote device to detect completion of each selection stage.
- the remote device interprets a movement with a magnitude less than the threshold as completion of the first selection stage.
- the remote device interprets the movement as completion of the second stage.
- the values of the threshold may depend upon the resolving capabilities of the sensing device, and are typically set so as to detect significant motion, while ignoring “jitter” by the control entity. In some cases, there is no pause between the stages, e.g., when a single continuous movement is performed.
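- As a rough illustration of the threshold logic above, the Python sketch below applies a fixed threshold and a jitter floor to the hand's forward displacement to decide which selection stage has completed; the class name, units, and constants are illustrative assumptions only.

```python
class SelectionStageDetector:
    """Toy two-stage selection: a small forward motion selects a symbol group,
    a larger forward motion inputs a symbol from that group."""

    def __init__(self, threshold=0.05, jitter=0.01):
        self.threshold = threshold   # metres of motion toward the sensing unit (assumed)
        self.jitter = jitter         # motions below this are ignored as sensor noise
        self.stage = 0               # 0 = idle, 1 = group selected, 2 = symbol input

    def update(self, forward_displacement):
        """forward_displacement > 0 means the hand moved toward the sensing unit."""
        if forward_displacement < self.jitter:
            return self.stage                  # jitter or a backward motion: no change
        if forward_displacement < self.threshold:
            self.stage = max(self.stage, 1)    # first stage: a symbol group is selected
        else:
            self.stage = 2                     # second stage: the symbol is input
        return self.stage

detector = SelectionStageDetector()
print(detector.update(0.03))   # -> 1 (group selected)
print(detector.update(0.08))   # -> 2 (symbol input)
```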
- the second selection stage may constitute an additional arc-like movement by the control entity towards the sensing unit and downwards, as described in more detail hereinbelow.
- Selection stages may be combinations of continuous and discrete, curved and linear movements in many directions with respect to the sensing unit.
- the user can execute the combinations while resting his elbow (or other relatively proximal segment of his upper extremity, e.g., a wrist or forearm) on a surface such as an armrest of a chair or a table, while using a distal portion as the control entity.
- the selection may be performed by the user's hand, fingers or forearm, or combinations thereof without tiring the arm.
- dropping the control entity downwards could be interpreted as a command to begin a new line, a new paragraph, or to input a highlighted symbol. Hovering the control entity longer than an activation threshold may stimulate an autocomplete feature to offer suggestions to complete a word or a sentence, thus causing several shortcuts to be shown on the display.
- the term “shortcut” herein refers to an option on a remote information input interface that appears upon user interaction, which offers a choice that is available in the current state, or context, of the computer application. That is to say, a user interface element, e.g. a shortcut, may be mapped to an operation to be performed upon selection of the shortcut.
- the operation may cause the word shown in the shortcut to be input to the remote device.
- shortcuts include a context menu that appears upon a mouse click operation such as a “right-click”, and an iconic link whose activation triggers some function in a remote device.
- the activation threshold may be customized to vary according to specific symbols and application scenarios.
- the display may be altered to reflect potential likelihood or relevance of a next letter, group of letters, or symbol based upon an analysis of previously input symbols, e.g., by highlighting likely symbols on the display.
- a subset of letters may be shown on the display to simplify navigation, and may themselves be linked to shortcuts.
- the relative location of the subset of letters may be shifted on the display so as to enable the user to select a preferred letter in a single selection motion. Typically, the relative location of the subset of letters is altered without any change to the order of the letters.
- the distance and motion relative to the display unit, rather than to the sensing unit, are used for interpreting the commands.
- FIG. 1 is a schematic, pictorial illustration of a system 10 for gesture-mediated remote information input, in accordance with an embodiment of the present invention.
- System 10 incorporates a sensing device 12 , typically a three-dimensional camera, which detects information that includes the body (or at least parts of the body) of a user 14 or other control entities for controlling a computer application as described hereinabove.
- the control entity is described using an example of a hand 16 for providing remote input to system 10 .
- control entities could include portions of objects being manipulated by user 14 , e.g., hockey sticks, golf clubs, bows, and tennis rackets.
- A three-dimensional camera of the type described above is suitable for use in system 10 as sensing device 12 .
- Other known three-dimensional cameras may also be employed as sensing device 12 .
- Although this embodiment relates to one particular system for providing remote information input, the principles of providing remote information input that are implemented in system 10 may similarly be applied, mutatis mutandis, in other types of remote information input or gesture control systems, using other techniques for providing remote information input or remote control via specialized gesture.
- Sensing device 12 is connected to remote device 18 via a sensing interface 22 , which may comprise a Bluetooth® adapter, an Infrared Data Association (IrDA) device, a cable connection, a universal serial bus (USB) interface, or any communication interface for outputting sensor data that allows remote device 18 to import remote sensing data.
- Remote device 18 typically comprises a general-purpose computer processor, which is programmed in software to carry out the functions described hereinbelow. The software may be downloaded to the processor in electronic form, over a network, for example, or it may alternatively be provided on tangible storage media, such as optical, magnetic, or electronic memory media.
- Although remote device 18 is shown in FIG. 1 as a separate unit from sensing device 12 , some or all of the processing functions of remote device 18 may be performed by suitable dedicated circuitry within the housing of sensing device 12 or otherwise associated with sensing device 12 .
- Display screen 20 presents user interface elements comprising a pointer 24 and a remote information input interface 26 , which comprises symbols 28 , 30 , 32 , 34 , 36 .
- a display interface 38 connects display screen 20 to remote device 18 , and may comprise a Bluetooth® adapter, an IrDA device, a cable connection, or any communication interface for outputting image data that allows remote device 18 to export visual display data, e.g., in the form of a compressed image.
- the symbol selection layout provides a simplified example for the purposes of illustration. In the present example, symbols 28 , 30 , 32 represent numerals, whereas symbols 34 and 36 represent actions. Each symbol may be remotely selected or actuated to control the computer application.
- Remote information input interface 26 may also comprise a zoom level indicator 40 to provide a visual indicator of the zoom level of remote information input interface 26 .
- Zoom level indicator 40 may be shown as a slider, similar to sliders utilized in web browsers and other applications. The zoom level is typically allowed to range within certain limits, e.g., from 50% to 500%. In the example of FIG. 1 , zoom level indicator 40 shows an initial value of 100%.
- a scale indicator 42 shows symbols 28 , 30 , 32 having an initial height of one unit on remote information input interface 26 .
- FIG. 2 is a view of portions of system 10 ( FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention.
- FIG. 1 and FIG. 2 may be viewed as a sequence of actions.
- user 14 is about to perform a specialized gesture, and in FIG. 2 completes the gesture.
- User 14 who is typically viewing display screen 20 , performs the specialized gesture, e.g., a clockwise circular gesture from the perspective of user 14 , using hand 16 as described on a reference coordinate system 44 by directed broken lines 46 .
- other types of hand movements may be used to invoke image zoom and other functions, such as moving the hand toward and away from the display screen.
- In reference coordinate system 44 , the x, y, and z axes are to be interpreted as horizontal, vertical, and depth coordinates, respectively, with respect to sensing device 12 .
- the distance between hand 16 and sensing device 12 may vary as the specialized gesture is performed.
- the inclination of the plane of the circle may be significant, according to specifications programmed in remote device 18 .
- a circle described vertically in the yz plane may be interpreted by remote device 18 differently from a circle in the xy plane or a horizontally executed circular gesture, e.g., as a pointing command.
- Substantially circular gestures described in various oblique planes may be given even more specialized interpretations.
- the clockwise circular gesture is recognized by sensing device 12 , and remote device 18 interprets the gesture as a zoom command.
- the clockwise circular gesture thus commands remote device 18 to smoothly increase the zoom (or magnification) level of remote information input interface 26 on display screen 20 using pointer 24 as a reference point for the zooming.
- remote device 18 identifies an area of display screen 20 around pointer 24 as having been selected by user 14 for the zoom command.
- hand 16 may move from an initial position 48 to a final position 50 .
- Scale indicator 42 shows symbols 28 , 30 , 32 having a final height of 1.5 units on remote information input interface 26 .
- zoom level indicator 40 shows that, in comparison with FIG. 1 , the clockwise circular gesture has increased the zoom level from the initial value of 100% immediately prior to the gesture to a final value of 150%.
- a corresponding increase in the size of symbols 28 , 30 , 32 is shown on display screen 20 as a result of the zoom command. That is to say, one or more of the user interface elements appearing in the selected area on display screen 20 are magnified.
- FIG. 3 is a view of portions of system 10 ( FIG. 1 ) operating under control of user 14 in accordance with an embodiment of the present invention.
- user 14 performs a leftward, substantially horizontal gesture using hand 16 as indicated by an arrow 52 on reference coordinate system 44 .
- the leftward gesture commands remote device 18 to move pointer 24 to the left on remote information input interface 26 .
- the leftward movement of pointer 24 proceeds from a first position, indicated by a cursor 54 (shown in broken outline), to a second position, indicated by a cursor 56 (shown in solid outline), as a result of the command.
- hand 16 may move from an initial position 58 to a final position 60 .
- a rightward gesture may be interpreted as a command to move pointer 24 to the right from the perspective of user 14 , while gestures performed upward and downward may similarly be interpreted by remote device 18 as commands to move pointer 24 upward and downward, respectively.
- Suitable calibration of sensing device 12 and remote device 18 assures a desired sensitivity, i.e., a correspondence between a spatiotemporal displacement of the control entity and the effect on elements shown on remote information input interface 26 . It is recommended to compensate for the viewing distance and viewing angle of user 14 using known methods. The compensation techniques described in U.S. Patent Application Publication No. 2009/0009593, entitled “Three-dimensional Projection Display” may be applied for this purpose.
- FIG. 4 is a schematic, pictorial illustration of system 10 ( FIG. 1 ) for remote information input, in accordance with an embodiment of the present invention.
- a first symbol arc 62 is shown within remote information input interface 26 on display screen 20 , in an embodiment that implements a T9® text input layout.
- T9 text input represents “text on 9 keys,” a method for streamlining input of text on numeric keypads, typically for mobile devices, available at the T9 web site (t9.com).
- First symbol arc 62 comprises an arcuate, nearly semi-circular display of groups of alphanumeric symbol buttons 64 to simulate relaxed movement of hand 16 while user 14 sits comfortably on a chair 66 .
- First symbol arc 62 may comprise additional symbol buttons 68 to support input of special symbols, e.g., space, backspace, or carriage return.
- Sensing device 12 detects the movement and remote device 18 interprets the movement by highlighting each of additional symbol buttons 68 and alphanumeric symbol buttons 64 sequentially as hand 16 moves through semi-circular arc 70 from a first position 72 to a second position 74 . Provision of an arcuate display enables hand 16 to move while an elbow 118 of the same upper extremity as hand 16 rests on chair 66 .
- a portion of a sphere 122 is shaded within a spherical coordinate system 124 using an axis based upon elbow 118 to indicate an approximate range of motion of hand 16 when elbow 118 rests on a surface.
- a three-dimensional space is mapped to spherical coordinate system 124 , and is also mapped to a two-dimensional coordinate system. The latter can be conveniently appreciated as a plane in reference coordinate system 44 .
- an emphasized symbol button 76 is shown on display screen 20 to indicate that performance of a second gesture, described hereinbelow as a selection gesture, will result in selection of the symbols displayed therein. That is to say, remote device 18 identifies emphasized symbol button 76 as the area of display screen 20 that is currently selected by user 14 .
- Corresponding selection gestures performed by hand 16 at other points along semi-circular arc 70 would select corresponding symbols of first symbol arc 62 .
- the first gesture and the second gesture may be recognized by remote device 18 according to time-varying coordinates on the two-dimensional coordinate system and the spherical coordinate system, respectively.
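- One way such a mapping could be realized, sketched below in Python under assumed axis conventions and angular ranges, is to convert the hand position into two angles about the resting elbow and then scale those angles onto the planar screen coordinates; none of the names or constants are taken from the patent.

```python
import math

def hand_to_sphere_params(hand_xyz, elbow_xyz):
    """Return (azimuth, elevation) of the hand about the resting elbow."""
    dx = hand_xyz[0] - elbow_xyz[0]
    dy = hand_xyz[1] - elbow_xyz[1]
    dz = hand_xyz[2] - elbow_xyz[2]
    azimuth = math.atan2(dx, dz)                     # left/right sweep
    elevation = math.atan2(dy, math.hypot(dx, dz))   # up/down sweep
    return azimuth, elevation

def sphere_to_screen(azimuth, elevation, screen_w=1920, screen_h=1080,
                     az_range=math.radians(120), el_range=math.radians(60)):
    """Map the two angular parameters onto a planar screen coordinate system."""
    u = 0.5 + azimuth / az_range      # 0..1 across the screen width
    v = 0.5 - elevation / el_range    # 0..1 down the screen height
    x = min(max(u, 0.0), 1.0) * (screen_w - 1)
    y = min(max(v, 0.0), 1.0) * (screen_h - 1)
    return x, y

# Example: hand 30 cm in front of and slightly left of the resting elbow.
az, el = hand_to_sphere_params((-0.10, 0.05, 0.30), (0.0, 0.0, 0.0))
print(sphere_to_screen(az, el))
```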
- FIG. 5 is a view of portions of system 10 ( FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention.
- the selection gesture is typically performed in two stages, as described hereinabove.
- User 14 may perform a first stage of the selection gesture by moving hand 16 downward in a vertical arc 78 generally directed toward display screen 20 .
- Remote device 18 uses the threshold, described hereinabove, to determine completion of each selection stage.
- the selection gesture may pivot about the elbow or shoulder, whichever is applicable. Of course, when pivoting about the shoulder the advantages of resting a portion of the arm on a surface are lost.
- hand 16 may move from an initial position 80 to an intermediate position 130 while performing the first stage, and then to a final position 82 while performing a second stage.
- Sensing device 12 detects the movement, and remote device 18 interprets the selection gesture as a command to display a second symbol arc 84 directly below first symbol arc 62 , comprising individually delineated symbol buttons 86 , which are grouped together in highlighted symbol button 76 .
- user 14 next moves hand 16 in another arc-like movement, which is detected by sensing device 12 .
- Remote device 18 interprets the movement by highlighting each of individually delineated symbol buttons 86 as described hereinabove.
- FIG. 4 and FIG. 5 may be viewed as a sequence of actions, whereby in FIG. 4 user 14 selects an area of display screen 20 , e.g., one of additional symbol buttons 68 , and in FIG. 5 selects one of the user interface elements, e.g., highlighted symbol button 76 , in order to display second symbol arc 84 and to input one of individually delineated symbol buttons 86 .
- arcuate displays like first symbol arc 62 for remotely inputting information may provide particularly enhanced ergonomic value.
- the motions involved in their use for remote information input are not fatiguing, e.g., in comparison with a standard “QWERTY” keyboard layout.
- Virtual keyboard layouts such as the QWERTY keyboard layout may not as conveniently permit remote information input with a resting or fixed elbow position.
- the selection gesture is made by moving hand 16 downwards. That is, it involves a forward displacement of the hand in the z-axis with respect to sensing device 12 .
- remote device 18 interprets the movement as a selection gesture and ignores the motion component in the xy plane. In both cases, remote device 18 may provide enhanced ergonomic value when recognizing these selection gestures, as they allow user 14 to use a natural selection motion, as indicated by the location of hand 16 on the xy axis.
- remote device 18 may cause a corresponding user interface element, e.g., shortcuts 96 , 98 to be shown on remote information input interface 26 , offering suggestions for completing a word.
- In FIG. 5 , letters “S” and “A” have been previously input, and the autocomplete feature of remote device 18 provides shortcuts 96 and 98 for selection. Previously input information may be emphasized on shortcuts 96 and 98 .
- FIG. 6 is a view of portions of system 10 ( FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention.
- a domain-specific language model 132 may be used to determine the probability of a symbol or next letter being preferred by user 14 .
- language model 132 is shown as a computer program module operated by device 18 .
- user 14 has previously input the letters “INVENTIO”.
- User 14 next moves hand 16 over emphasized symbol button 76 .
- Device 18 uses the domain-specific language model and determines that the probability of the letter “N” is significantly higher than that of any other letter shown in emphasized symbol button 76 , as shown in a shortcut 126 .
- device 18 uses the domain-specific language model to shift an adjusted second symbol arc 128 to place the letter with a highest probability beneath emphasized symbol button 76 . Both stages of the selection gesture may be performed by user 14 in a continuous motion to select a preferred letter, thus minimizing required movement by hand 16 .
- language model herein refers to any suitable statistical model for assigning a probability to a sequence of letters or words by means of a probability distribution.
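- For illustration, a letter-bigram model is one simple statistical model of this kind. The Python sketch below, with an invented toy corpus and add-one smoothing, shows how such a model could rank the letter “N” above its neighbors after the prefix “INVENTIO”.

```python
from collections import Counter

class LetterBigramModel:
    """Minimal letter-bigram model; the corpus and smoothing are illustrative."""

    def __init__(self, corpus_words):
        self.counts = Counter()
        self.totals = Counter()
        for word in corpus_words:
            w = "^" + word.upper()            # "^" marks the start of a word
            for a, b in zip(w, w[1:]):
                self.counts[(a, b)] += 1
                self.totals[a] += 1

    def prob(self, prefix, letter):
        """P(next letter | last letter of prefix), with add-one smoothing."""
        prev = prefix.upper()[-1] if prefix else "^"
        return (self.counts[(prev, letter)] + 1) / (self.totals[prev] + 26)

model = LetterBigramModel(["invention", "inventive", "input", "interface"])
# After "INVENTIO", 'N' scores higher than the other letters in its group.
for cand in "MNO":
    print(cand, round(model.prob("INVENTIO", cand), 3))
```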
- FIG. 7 is a view of portions of system 10 ( FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention.
- User 14 may choose to perform a deselection gesture after inputting information remotely, or to deselect second symbol arc 84 .
- the deselection gesture may comprise raising hand 16 as indicated by an upwardly directed vertical arc 100 which is a reversal of vertical arc 78 ( FIG. 5 ).
- Sensing device 12 detects the deselection gesture, and remote device 18 interprets the movement as a command to cancel the selection of second symbol arc 84 ( FIG. 5 ) which has responsively been removed from remote information input interface 26 in FIG. 7 .
- any movement by hand 16 after the deselection gesture has been performed is interpreted by remote device 18 as a command to resume highlighting alphanumeric symbol buttons 64 on first symbol arc 62 for selection.
- hand 16 moves from an initial position 102 to a final position 104 .
- remote device 18 automatically removes second symbol arc 84 without requiring the deselection gesture to be performed.
- remote information input requires less movement by user 14 than in the previous embodiment.
- Device 18 typically requires user 14 to return hand 16 to final position 104 before recognizing a new selection.
- Embodiments of the present invention that utilize the T9 text input layout as symbol arcs on remote information input interface 26 may provide an advantage whereby input is provided remotely without the need to move the control entity in three dimensions.
- moving hand 16 in an arcuate motion along semi-circular arc 70 ( FIG. 4 ) is interpreted by remote device 18 as movement within two dimensions, e.g., leftward, rightward, upward and downward.
- By limiting semi-circular arc 70 to motions substantially parallel to the xz plane, 3-dimensional interpretation issues are avoided.
- a complex movement in three dimensions, e.g., to perform the point-and-click gesture is not required, thus simplifying interpretation of the gesture and thereby facilitating remote information input.
- FIG. 8 is a flow chart of a method for remotely interfacing with a computer system, in accordance with an embodiment of the present invention.
- Assume, by way of example, that user 14 ( FIG. 1 ) wishes to search for content presented by the remote device.
- User 14 would thus need to perform efficient, streamlined search commands to interact remotely with a computer application running on the remote device.
- the process steps are described below in a particular linear sequence for clarity of presentation. However, it will be evident that some of them can be performed in parallel, asynchronously, or in different orders. The process can be performed, for example, by system 10 .
- User interface elements comprising a remote information input interface to a computer application are presented to a user on a display screen in a display presentation step 106 .
- the computer application may be a media search and presentation system. It is assumed that the computer application has been loaded, and that a three-dimensional sensing device is in operation.
- the sensing device can be any three-dimensional sensor or camera, provided that it generates data for interpretation by the remote device.
- the user performs a first gesture in a three-dimensional space using a control entity, e.g., a part of the user's body.
- a sensing device such as sensing device 12 ( FIG. 1 ) detects the gesture made by the control entity, e.g., hand 16 , in a gesture detecting step 108 .
- the computer iteratively analyzes three-dimensional data provided by the sensing device, for example by constructing a three-dimensional map as described in commonly assigned co-pending U.S. application Ser. No. 12/683,452, which is herein incorporated by reference.
- an area of the display screen is identified by the computer in a selected area identification step 110 .
- the first gesture is recognized by the computer as a command to increase the magnification level of user interface elements within the selected area on the display screen in a magnification level adjusting step 112 .
- Any gesture recognition algorithm may be employed to carry out magnification level adjusting step 112 , so long as the system can relate the user gesture to a recognized command and a location of interest on the remote information input interface.
- a second gesture is recognized by the computer as a command to select one of the user interface elements within the selected area in a selection gesture detecting step 114 .
- the second gesture can be for any purpose, for example to perform another zoom command, to input a symbol, or to alter the remote information input interface in accordance with the gesture identified.
- the clockwise circular gesture command described with respect to FIG. 2 might correspond to an instruction to increase the zoom level of the remote information input interface on the display screen, while a counter-clockwise circular gesture, in which the direction of the motion is reversed, could result in an instruction to decrease the zoom level. Many such combinations will occur to a developer of computer applications or other signal processing systems.
- An updated display screen results, and is shown in subsequent iterations of the method. In practice, the process iterates so long as the remote device is active and no error occurs.
- the method then terminates at a final step 116 .
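- The overall flow of FIG. 8 can be summarized as a loop over recognized gestures. The Python sketch below is a toy rendition using an assumed gesture stream and invented gesture kinds; it stands in for, rather than reproduces, the actual processing.

```python
def run_interface(gesture_stream):
    """Iterate over recognized gestures and update a toy interface state."""
    state = {"selected_area": None, "zoom": 100.0, "text": ""}
    for kind, payload in gesture_stream:                # gesture detecting step 108
        if kind == "point":                             # area identification step 110
            state["selected_area"] = payload            # e.g., a symbol group
        elif kind == "zoom":                            # magnification adjusting step 112
            state["zoom"] = max(50.0, min(500.0, state["zoom"] + payload))
        elif kind == "select" and state["selected_area"]:   # selection step 114
            state["text"] += payload                    # e.g., input the chosen symbol
        # a real system would refresh the display here on every iteration
    return state

# Example run with a canned gesture sequence standing in for sensor output:
gestures = [("point", "MNO"), ("zoom", 50.0), ("select", "N")]
print(run_interface(gestures))   # {'selected_area': 'MNO', 'zoom': 150.0, 'text': 'N'}
```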
- the circular gestures may require at least one complete circle to be performed by the control entity before the zoom level is changed.
- multiple control entities are used to perform the specialized gesture.
- the zoom command may be input using a second hand (not shown) to complement hand 16 ( FIG. 1 ).
- Once remote device 18 recognizes the second hand by analyzing input from sensing device 12 , movement of the second hand farther away from hand 16 may be interpreted as the zoom command to increase the zoom level, and vice versa.
- Using multiple control entities may provide an advantage wherein pointer 24 is not moved prior to the change in zoom level. Thus, the changes in magnification may be performed around pointer 24 .
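- A hedged sketch of such a two-handed zoom follows: the change in distance between the two hands drives the zoom level while the pointer position is left untouched. The gain and the zoom bounds are illustrative assumptions.

```python
import math

def two_hand_zoom(hand_a, hand_b, prev_distance, zoom, gain=200.0,
                  lo=50.0, hi=500.0):
    """hand_a, hand_b: (x, y, z) positions; returns (new_zoom, new_distance)."""
    distance = math.dist(hand_a, hand_b)
    zoom += gain * (distance - prev_distance)   # hands apart -> zoom in, together -> out
    return min(hi, max(lo, zoom)), distance

zoom, d = 100.0, 0.30
zoom, d = two_hand_zoom((-0.25, 0.0, 1.0), (0.25, 0.0, 1.0), d, zoom)
print(zoom)   # hands moved from 0.30 m to 0.50 m apart -> zoom rises to 140.0
```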
- language model 132 is used to assign a probability to each letter on remote information input interface 26 .
- Device 18 may order the letters accordingly, displaying the letters in a continuous ungrouped series of individual letters, rather than in groups, e.g., the group of three letters displayed in emphasized symbol button 76 .
- Device 18 may invite access to letters on the display having relatively high probabilities, e.g., by presenting them in closer proximity to the center of remote information input interface 26 .
- letters having relatively high and low probabilities may be grouped together and placed into secondary symbol arcs (not shown).
- the spatial distribution of letters in a symbol arc may reflect their respective probabilities.
- letters having relatively high and low probabilities of selection may be spaced apart and crowded together, respectively, in varying degrees.
- text input created by the user is shown as a linear stream of characters running across the screen, from left to right, for example.
- the user may perform a special gesture, recognized by the remote device, to mark a neutral reference position. Movement of the user's hand in a direction along the sequence, such as to the right of the reference position, will then cause the display to advance to the right along the text stream, whereas movement to the left will scroll backward through the text stream.
- the scroll speed presented by the remote device on the display may initially be slow when the user gestures sideways to the right or left and may gradually accelerate the longer the user's hand is in the advance or reverse position.
- the above-mentioned language model may be used to display alternative choices of additional characters and even words to append to the stream. These choices may be displayed above and/or below the existing line of characters, with the likeliest choices typically vertically closest to the line and possibly magnified.
- the user selects the desired choice by upward and downward motions of the hand, perpendicular to the direction of the text sequence.
- the user can add text quickly and efficiently using simple right/left and up/down motions.
- the user's right/left and up/down hand motions may be made in a generally planar space or, if the elbow is resting on a surface as in some of the embodiments described above, may be over a generally spherical surface.
- the right/left and up/down motions are not limited to a two-dimensional plane, but may be mapped to a two-dimensional coordinate system by the remote device. Within this latter coordinate system, one dimension of hand movement controls the speed of scrolling forward and back (wherein backward movement may delete characters previously appended to the stream), while the other dimension controls the selection of new characters.
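- The Python sketch below illustrates one plausible version of this mapping, under invented constants: horizontal displacement from the neutral reference sets a scroll speed that accelerates with hold time, and vertical displacement picks a candidate row above or below the text line.

```python
def scroll_speed(dx_m, hold_time_s, base=20.0, accel=10.0, dead_zone=0.03):
    """Characters per second; the sign gives direction (negative scrolls backward)."""
    if abs(dx_m) < dead_zone:
        return 0.0
    direction = 1.0 if dx_m > 0 else -1.0
    return direction * (base + accel * hold_time_s) * (abs(dx_m) - dead_zone)

def pick_continuation(dy_m, candidates_above, candidates_below, slot_height=0.05):
    """Choose a candidate row by vertical offset of the hand from the text line."""
    slot = int(dy_m / slot_height)
    if slot > 0 and slot <= len(candidates_above):
        return candidates_above[slot - 1]
    if slot < 0 and -slot <= len(candidates_below):
        return candidates_below[-slot - 1]
    return None   # hand near the line: keep scrolling, no selection yet

print(scroll_speed(0.13, 2.0))                            # forward scroll, accelerating
print(pick_continuation(0.07, ["tion", "tive"], ["t"]))   # -> "tion", the closest row above
```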
- This sort of embodiment may be used to present and add text input in a sort of continuous “flight mode”: As the user scrolls to the right (forward) to add text to the stream being created, various potential continuations of the existing text are presented to the right of the existing text, above and/or below the text line.
- the potential continuations may be ordered or otherwise presented in such a way that the effort necessary to select a given continuation is a decreasing function of the likelihood that the given continuation is the correct one, based on the language model (higher likelihood yields lower effort). For example, likelier continuations may be presented with larger size and/or in closer proximity to the current cursor position.
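- As a sketch of this presentation rule, the snippet below (with invented sizing formulae) sorts continuations by probability and makes likelier ones larger and vertically closer to the current line, so that the effort to select a continuation decreases with its likelihood.

```python
def layout_continuations(candidates_with_prob, base_size=18, max_extra=18,
                         row_height=40):
    """candidates_with_prob: list of (text, probability). Returns draw parameters."""
    ordered = sorted(candidates_with_prob, key=lambda cp: cp[1], reverse=True)
    layout = []
    for rank, (text, p) in enumerate(ordered):
        font_size = base_size + max_extra * p    # likelier -> larger
        offset_px = (rank + 1) * row_height      # likelier -> closer to the text line
        layout.append((text, round(font_size), offset_px))
    return layout

print(layout_continuations([("nation", 0.1), ("n", 0.6), ("nal", 0.3)]))
# [('n', 29, 40), ('nal', 23, 80), ('nation', 20, 120)]
```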
Abstract
Description
- This Application claims the benefit of U.S. Provisional Application No. 61/159,808 filed Mar. 13, 2009, which is herein incorporated by reference.
- 1. Field of the Invention
- This invention relates generally to user interfaces for computerized systems and specifically to user interfaces with three-dimensional characteristics.
- 2. Description of the Related Art
- Many different types of user interface devices and methods are currently available. Common tactile interface devices include the computer keyboard, mouse and joystick. Touch screens detect the presence and location of a touch by a finger or other object within the display area. Infrared remote controls are widely used, and “wearable” hardware devices have been developed, as well, for purposes of remote control.
- Computer interfaces based on three-dimensional sensing of parts of the user's body have also been proposed. For example, PCT International Publication WO 03/071410, whose disclosure is incorporated herein by reference, describes a gesture recognition system using depth-perceptive sensors. A three-dimensional sensor provides position information, which is used to identify gestures created by a body part of interest.
- The gestures are recognized based on the shape of the body part and its position and orientation over an interval. The gesture is classified for determining an input into a related electronic device.
- As another example, U.S. Pat. No. 7,348,963, whose disclosure is incorporated herein by reference, describes an interactive video display system, in which a display screen displays a visual image, and a camera captures three-dimensional information regarding an object in an interactive area located in front of the display screen. A computer system directs the display screen to change the visual image in response to the object.
- An embodiment of the invention provides a method for operating a computerized system, which is carried out by presenting user interface elements on a display screen of the computerized system and detecting a first gesture made in a three-dimensional space by a distal portion of an upper extremity of a user while a segment of the distal portion thereof rests on a surface. In response to the first gesture, an area of the display screen selected by the user is identified, and a corresponding user interface element is displayed. After displaying the corresponding user interface element, a second gesture made by the distal portion while the segment continues to rest on the surface is detected so as to select one of the user interface elements that appears in the selected area.
- In another embodiment, the method further includes mapping an operation to the corresponding user interface element, wherein the second gesture causes the operation to be performed.
- In yet another embodiment, the method further includes mapping a three-dimensional location of the distal portion of the upper extremity to two parameters of a two-dimensional parametric surface, which is a section of a sphere that corresponds to possible locations of the distal portion of the upper extremity while the segment of the distal portion rests on a surface, and mapping the two parameters to corresponding parameters in a planar two-dimensional coordinate system of the display screen where the user interface elements are located.
- According to an aspect of the method, the first gesture describes a first arc-like movement forming at least a portion of a horizontal circular arc mapped to the section of the sphere as a pointing command, and the second gesture includes a second arc-like movement mapped to the section of the sphere as a selection command.
- According to an additional aspect of the method, the pointing command includes pointing to a letter on the display screen and the selection command includes inputting the letter to the system.
- According to one aspect of the method, a threshold is defined; a subset of letters is displayed when a magnitude of the arc-like movement is less than the threshold, and the letter is input when the magnitude of the arc-like movement is greater than the threshold.
- According to an aspect of the method, the subset of letters is shifted on the display screen using a language model to determine a probability of a preferred letter, so that inputting the letter can be performed in a single continuous motion with high probability.
- According to one aspect of the method, the segment of the distal portion may include an elbow, a wrist, or a forearm.
- An embodiment of the invention provides a method for operating a computerized system, which is carried out by presenting user interface elements on a display screen of the computerized system and detecting a first gesture made in a three-dimensional space by a part of a body of a user. An area of the display screen selected by the user is identified responsively to the first gesture, and a magnification level of one or more of the user interface elements appearing in the selected area on the display screen is increased. After increasing the magnification level, a second gesture made by the part of the body of the user is detected so as to select one of the user interface elements that appear in the selected area.
- According to an aspect of the method, a third gesture made by the part of the body is detected, and the magnification level is decreased in response to the third gesture.
- According to an additional aspect of the method, the first and the third gesture include circular motions of a hand of the user in opposite, respective directions.
- According to one aspect of the method, detecting the second gesture includes actuating a shortcut on the display in response to the second gesture.
- According to yet another aspect of the method, selecting the magnified alphanumeric symbol includes adding the selected magnified alphanumeric symbol to a word spelled on the display screen, wherein the method includes detecting a third gesture made by the part of the body, opposite to the second gesture, and removing one or more symbols from the word in response to the third gesture.
- According to one aspect of the method, the first gesture includes a three-dimensional movement by the part of the body of the user.
- According to still another aspect of the method, presenting user interface elements includes displaying a plurality of symbols arranged in at least one arc.
- According to a further aspect of the method, displaying the plurality of symbols includes presenting a set of symbols in a first arc, and increasing the magnification level includes presenting a magnified subset of the set of symbols in a second arc adjacent to the first arc.
- According to an aspect of the method, detecting the first gesture includes detecting an arcuate movement of a hand of the user, and associating the arcuate movement with the plurality of symbols in the at least one arc.
- According to an additional aspect of the method, presenting user interface elements includes presenting a sequence of textual characters, and increasing the magnification level includes displaying further characters for addition to the sequence using a language model to select the further characters.
- According to another aspect of the method, detecting the first gesture includes scrolling forward or backward along the sequence responsively to first movements of a hand of the user in first and second directions along the sequence, and detecting the second gesture includes selecting the further characters for addition to the sequence in response to second movements of the hand in at least a third direction perpendicular to the first and second directions.
- An embodiment of the invention provides a computer software product for operating a computer system, including a sensing device, which is configured to detect at least a part of a body of a user, a display screen, which is configured to present user interface elements, and a processor, which is coupled to the sensing device so as to detect a first gesture made in a three-dimensional space by the part of the body. The processor is additionally configured to identify an area of the display screen selected by the user in response to the first gesture, and to increase a magnification level of one or more of the user interface elements appearing in the selected area on the display screen, and after increasing the magnification level, to detect a second gesture made by the part of the body so as to select one of the user interface elements that appears in the selected area.
- An embodiment of the invention provides a computer software product for operating a computerized system, including a computer storage medium in which computer program instructions are stored, which instructions, when executed by a computer, cause the computer to present user interface elements on a display screen of the computerized system, to detect a first gesture made in a three-dimensional space by a part of a body of a user, to identify an area of the display screen selected by the user in response to the first gesture, and to increase a magnification level of one or more of the user interface elements appearing in the selected area on the display screen. After increasing the magnification level, the instructions cause the computer to detect a second gesture made by the part of the body of the user so as to select one of the user interface elements that appears in the selected area.
- There is also provided, in accordance with an embodiment of the present invention, a method for operating a computerized system, including the steps of presenting user interface elements on a display screen of the computerized system and detecting a gesture made in a three-dimensional space by a part of a body of a user. While the user performs the gesture, one or more of the user interface elements on the display screen are continuously modified responsively to a direction of the gesture.
- In some embodiments, continuously modifying the one or more of the user interface elements includes increasing or decreasing a magnification level of at least one of the user interface elements, typically by zooming in on a user interface element toward which the gesture is directed.
- In other embodiments, presenting the user interface elements includes presenting a sequence of textual characters, which is continuously modified by adding characters to the sequence while scrolling over the sequence responsively to the gesture. Adding the characters typically includes presenting choices of further characters to add to the sequence, using a language model to determine the choices, and selecting at least one of the choices responsively to the gesture. Presenting the choices may include determining, based on the language model, a respective likelihood of correctness of each of the choices, and displaying the choices so that an effort required by the user to select a given choice is a decreasing function of the likelihood. Additionally or alternatively, continuously modifying the one or more of the user interface elements may include scrolling forward or backward along the sequence responsively to first movements of a hand of the user in first and second directions along the sequence, and selecting further characters for addition to the sequence responsively to second movements of the hand in at least a third direction perpendicular to the first and second directions.
- For a better understanding of the present invention, reference is made to the detailed description of the invention, by way of example, which is to be read in conjunction with the following drawings, wherein like elements are given like reference numerals, and wherein:
-
FIG. 1 is a schematic, pictorial illustration of a system for remote gesture-mediated information input, in accordance with an embodiment of the present invention; -
FIG. 2 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention; -
FIG. 3 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention; -
FIG. 4 is a schematic, pictorial illustration of a system for remote information input, in accordance with an embodiment of the present invention; -
FIG. 5 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention; -
FIG. 6 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention; -
FIG. 7 is a view of portions of a system operating under remote control of a user in accordance with an embodiment of the present invention; and -
FIG. 8 is a flow chart of a method for remotely interfacing with a computer system, in accordance with an embodiment of the present invention. - In the following description, numerous specific details are set forth in order to provide a thorough understanding of the various principles of the present invention. It will be apparent to one skilled in the art, however, that not all these details are necessarily always needed for practicing the present invention. In this instance, well-known circuits, control logic, and the details of computer program instructions for conventional algorithms and processes have not been shown in detail in order not to obscure the general concepts unnecessarily.
- Embodiments of the present invention that are described hereinbelow provide improved methods, products and systems for providing remote input to an electronic device. When a user interacts remotely with a device that requires accurate input and navigation decisions, these embodiments provide an intuitive and streamlined interface.
- Remote input may be provided for interaction with a remote device such as a gaming console, an interactive television, a computerized cellular phone, or a computer. In the context of the present application and claims, the term “remote device” herein refers to any remotely governable device containing a processing unit. A sensing device may be used to detect a virtual control, such as a virtual keyboard. The sensing device is typically a three-dimensional camera that detects information that includes the position of a body (or at least parts of the body) of the user or other tangible entities wielded or operated by the user for interacting with a computer application running on the remote device, all of which are sometimes referred to herein for convenience as “control entities”. The sensing device detects the presence and changes of position of a control entity, i.e. its speed and direction. The remote device interprets movements detected by the sensing device as described hereinbelow.
- The virtual control is positioned in front of the user, usually between the user and a display. The remote device presents user interface elements on the display. The sensing device detects the movements of the control entity in a three-dimensional space, such as a user's hand manipulating the virtual control, and translates them into commands for the remote device. For example, movement of the control entity using a circular gesture may be interpreted by the remote device as a command to adjust a magnification (or zoom) level of a remote information input interface comprising the user interface elements on the display. “Magnification” in this context is not limited to simple visual magnification: New details or relevant potentially selected options may be exposed. In movements referred to herein as “3-dimensional” movements, a distance between the control entity and the sensing device varies as a gesture, e.g., the circular gesture, is performed. Movements in which the distance between the control entity and the sensing device remains substantially constant are classified as either “1-dimensional” or “2-dimensional movements”. Performing a clockwise circular gesture with the control entity may command the remote device to increase the magnification level of the display, whereas a counter-clockwise circular movement may cause minification. The terms “clockwise” and “counterclockwise” are used arbitrarily herein to distinguish two meaningful gestures. These terms have no necessary physical meanings with respect to the actual configuration of the embodiments.
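- By way of illustration only, the following sketch (in Python, with hypothetical function and parameter names that are not part of the disclosed embodiments) shows one way a remote device could classify a sampled hand trajectory as a circular zoom gesture and derive the zoom direction from its sense of rotation. The jitter threshold and the mapping of rotation sense to zoom direction are assumptions, not requirements of the embodiments described above.

```python
# Illustrative sketch (not from the patent): classifying a hand trajectory as a
# circular zoom gesture and choosing the zoom direction from its sense of rotation.
from typing import List, Tuple, Optional

Point3D = Tuple[float, float, float]  # (x, y, z), z = distance from the sensing device

def classify_circular_zoom(track: List[Point3D],
                           depth_jitter: float = 0.05) -> Optional[str]:
    """Return 'zoom_in', 'zoom_out', or None for a sampled hand trajectory."""
    if len(track) < 8:
        return None  # too few samples to call it a gesture

    zs = [p[2] for p in track]
    is_3d = (max(zs) - min(zs)) > depth_jitter  # distance to sensor varies => "3-dimensional"

    # Signed area of the xy projection (shoelace formula): its sign gives the rotation sense.
    area = 0.0
    for (x0, y0, _), (x1, y1, _) in zip(track, track[1:] + track[:1]):
        area += x0 * y1 - x1 * y0
    area /= 2.0

    if not is_3d or abs(area) < 1e-3:
        return None  # planar or degenerate motion: not treated as a zoom command here

    # Convention only: one rotation sense increases magnification, the other decreases it.
    return "zoom_in" if area < 0 else "zoom_out"
```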
- Thus, by interpretation of the user's gestures, the remote device causes the interface to zoom in and out, with a potentially infinite zoom range. The actual distance between the user and the remote device, however, remains largely constant. Conceptually, the result of this arrangement is that the interface is essentially three-dimensional, and the elements of the interface may be continuously regrouped in three-dimensional space so as to move toward or away from the user as required, depending on the direction of the user's gesture. Thus, for example, the display may zoom in on a user interface element toward which a gesture is directed. This zoom may continue until the user's hand reaches the virtual location of the user interface element in the three-dimensional space, whereupon the element is selected (with or without an additional selection gesture).
- In other embodiments of the invention, the virtual control may be a virtual keyboard for remote information input, such as remote text input. The display presents a remote information input interface representing the virtual keyboard. The symbol positions, e.g., the relative positions of the letters in the virtual keyboard, may be held constant. The relative size of the symbols may be varied on the display in response to movements by the control entity. The distance of the control entity from the sensing unit and the relative motion of the control entity with respect to the sensing unit may be interpreted as a manipulation of the virtual control, and hence as commands to the remote device as described hereinbelow. The remote device may construe the virtual control as constructed or oriented so as to accommodate right- or left-handed operation, as the case may be. For example, the positions of the symbols in the virtual keyboard may be held constant, while the relative size of the symbols is varied to reflect the expected relevance or likelihood of each symbol being a suggested next symbol for remote input, e.g., based upon context awareness.
- Movement of the control entity forward towards the sensing unit may cause a selected symbol, e.g., a letter to be input. Movement of the control entity away from the sensing unit may cause the previously input symbol to be erased. Selection is typically performed in two stages. A first selection stage may be used to determine a specific group of symbols. A second selection stage is used to input an individual symbol from the specific group of symbols. In some embodiments, the first selection stage is made by an arc-like movement of the control entity, e.g., clockwise or counter-clockwise, followed by a slight movement towards the sensing unit. A threshold, which may be fixed or adaptive, is defined to enable the remote device to detect completion of each selection stage. The remote device interprets a movement with a magnitude less than the threshold as completion of the first selection stage. When the magnitude of the movement is greater than the threshold, the remote device interprets the movement as completion of the second stage. The values of the threshold may depend upon the resolving capabilities of the sensing device, and are typically set so as to detect significant motion, while ignoring “jitter” by the control entity. In some cases, there is no pause between the stages, e.g., when a single continuous movement is performed. The second selection stage may constitute an additional arc-like movement by the control entity towards the sensing unit and downwards, as described in more detail hereinbelow.
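- As a rough sketch of the two-stage threshold logic described above (the threshold values are placeholders and the function name is hypothetical):

```python
# Illustrative sketch (placeholder thresholds): resolving which selection stage a forward
# movement of the control entity completes.
def selection_stage(forward_displacement: float,
                    jitter: float = 0.01,
                    stage_threshold: float = 0.08) -> int:
    """Return 0 (ignored), 1 (first stage: symbol group chosen) or 2 (second stage: symbol input)."""
    if forward_displacement <= jitter:
        return 0                       # within sensor jitter: not a deliberate movement
    if forward_displacement < stage_threshold:
        return 1                       # movement smaller than the threshold completes stage one
    return 2                           # movement larger than the threshold completes stage two
```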
- Selection stages may be combinations of continuous and discrete, curved and linear movements in many directions with respect to the sensing unit. In any case the user can execute the combinations while resting his elbow (or other relatively proximal segment of his upper extremity, e.g., a wrist or forearm) on a surface such as an armrest of a chair or a table, while using a distal portion as the control entity. Thus, the selection may be performed by the user's hand, fingers or forearm, or combinations thereof without tiring the arm.
- To illustrate additional gestures by way of example and not of limitation, dropping the control entity downwards could be interpreted as a command to begin a new line, a new paragraph, or to input a highlighted symbol. Hovering the control entity longer than an activation threshold may stimulate an autocomplete feature to offer suggestions to complete a word or a sentence, thus causing several shortcuts to be shown on the display. In the context of the present application and claims, the term “shortcut” herein refers to an option on a remote information input interface that appears upon user interaction, which offers a choice that is available in the current state, or context, of the computer application. That is to say, a user interface element, e.g., a shortcut, may be mapped to an operation to be performed upon selection of the shortcut. For example, the operation may cause the word shown in the shortcut to be input to the remote device. Common examples of such shortcuts include a context menu that appears upon a mouse click operation such as a “right-click”, and an iconic link whose activation triggers some function in a remote device. The activation threshold may be customized to vary according to specific symbols and application scenarios. The display may be altered to reflect potential likelihood or relevance of a next letter, group of letters, or symbol based upon an analysis of previously input symbols, e.g., by highlighting likely symbols on the display. A subset of letters may be shown on the display to simplify navigation, and the letters themselves may be linked to shortcuts. The relative location of the subset of letters may be shifted on the display so as to enable the user to select a preferred letter in a single selection motion. Typically, the relative location of the subset of letters is altered without any change to the order of the letters.
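- The dwell-based activation described above might be approximated as follows; the class name, hover radius, and timing values are illustrative assumptions, and the suggestion source is left abstract:

```python
# Illustrative sketch (hypothetical API): detecting a hover that exceeds the activation
# threshold and asking an autocomplete source for shortcut suggestions.
import time
from typing import Callable, List, Optional

class HoverDetector:
    def __init__(self, radius: float, activation_threshold_s: float,
                 suggest: Callable[[str], List[str]]):
        self.radius = radius
        self.activation_threshold_s = activation_threshold_s
        self.suggest = suggest          # e.g., a language-model-backed completion function
        self._anchor = None             # (x, y) where the hover started
        self._since = None              # time when the pointer entered the hover radius

    def update(self, x: float, y: float, prefix: str) -> Optional[List[str]]:
        """Feed the current pointer position; return shortcut texts once the dwell is long enough."""
        now = time.monotonic()
        if self._anchor is None or (x - self._anchor[0]) ** 2 + (y - self._anchor[1]) ** 2 > self.radius ** 2:
            self._anchor, self._since = (x, y), now   # pointer moved away: restart the dwell timer
            return None
        if now - self._since >= self.activation_threshold_s:
            self._since = now                          # avoid re-triggering on every frame
            return self.suggest(prefix)                # completions of the current prefix
        return None
```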
- In yet other embodiments, the distance and motion relative to the display unit, rather than to the sensing unit, are used for interpreting the commands.
- Turning now to the drawings, reference is initially made to
FIG. 1 , which is a schematic, pictorial illustration of a system 10 for gesture-mediated remote information input, in accordance with an embodiment of the present invention. System 10 incorporates a sensing device 12, typically a three-dimensional camera, which detects information that includes the body (or at least parts of the body) of a user 14 or other control entities for controlling a computer application as described hereinabove. For the purposes of illustration, the control entity is described using an example of a hand 16 for providing remote input to system 10. In gaming applications, such control entities could include portions of objects being manipulated by user 14, such as hockey sticks, golf clubs, bows, and tennis rackets. The arrangement described in commonly assigned application Ser. No. 12/352,622, filed Jan. 13, 2009, which is hereby incorporated by reference, is suitable for use in system 10 as sensing device 12. Other known three-dimensional cameras may also be employed as sensing device 12. Although this embodiment relates to one particular system for providing remote information input, the principles of providing remote information input that are implemented in system 10 may similarly be applied, mutatis mutandis, in other types of remote information input or gesture control systems, using other techniques for providing remote information input or remote control via specialized gestures. - Information detected by sensing
device 12 is processed by a remote device 18, which drives a display screen 20 accordingly. Sensing device 12 is connected to remote device 18 via a sensing interface 22, which may comprise a Bluetooth® adapter, an Infrared Data Association (IrDA) device, a cable connection, a universal serial bus (USB) interface, or any communication interface for outputting sensor data that allows remote device 18 to import remote sensing data. Remote device 18 typically comprises a general-purpose computer processor, which is programmed in software to carry out the functions described hereinbelow. The software may be downloaded to the processor in electronic form, over a network, for example, or it may alternatively be provided on tangible storage media, such as optical, magnetic, or electronic memory media. Alternatively or additionally, some or all of the image functions may be implemented in dedicated hardware, such as a custom or semi-custom integrated circuit or a programmable digital signal processor (DSP). Although remote device 18 is shown in FIG. 1 as a separate unit from sensing device 12, some or all of the processing functions of remote device 18 may be performed by suitable dedicated circuitry within the housing of sensing device 12 or otherwise associated with sensing device 12. -
Display screen 20 presents user interface elements comprising a pointer 24 and a remote information input interface 26, which comprises symbols. A display interface 38 connects display screen 20 to remote device 18, and may comprise a Bluetooth® adapter, an IrDA device, a cable connection, or any communication interface for outputting image data that allows remote device 18 to export visual display data, e.g., in the form of a compressed image. The symbol selection layout provides a simplified example for the purposes of illustration. In the present example, a set of symbols is presented on remote information input interface 26. Remote information input interface 26 may also comprise a zoom level indicator 40 to provide a visual indicator of the zoom level of remote information input interface 26. Zoom level indicator 40 may be shown as a slider, similar to sliders utilized in web browsers and other applications. The zoom level is typically allowed to range within certain limits, e.g., from 50% to 500%. In the example of FIG. 1 , zoom level indicator 40 shows an initial value of 100%. A scale indicator 42 shows the symbols of remote information input interface 26. - Reference is now additionally made to
FIG. 2 , which is a view of portions of system 10 (FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention. FIG. 1 and FIG. 2 may be viewed as a sequence of actions. In FIG. 1 , user 14 is about to perform a specialized gesture, and in FIG. 2 completes the gesture. User 14, who is typically viewing display screen 20, performs the specialized gesture, e.g., a clockwise circular gesture from the perspective of user 14, using hand 16, as indicated on a reference coordinate system 44 by directed broken lines 46. Alternatively, other types of hand movements may be used to invoke image zoom and other functions, such as moving the hand toward and away from the display screen. In reference coordinate system 44 the x, y, and z axes are to be interpreted as horizontal, vertical, and depth coordinates, respectively, with respect to sensing device 12. Thus, the distance between hand 16 and sensing device 12 varies as the specialized gesture is performed. - The inclination of the plane of the circle may be significant, according to specifications programmed in
remote device 18. Thus, a circle described vertically in the yz plane may be interpreted by remote device 18 differently from a circle in the xy plane or a horizontally executed circular gesture, e.g., as a pointing command. Substantially circular gestures described in various oblique planes may be given even more specialized interpretations. The clockwise circular gesture is recognized by sensing device 12, and remote device 18 interprets the gesture as a zoom command. The clockwise circular gesture thus commands remote device 18 to smoothly increase the zoom (or magnification) level of remote information input interface 26 on display screen 20, using pointer 24 as a reference point for the zooming. That is to say, remote device 18 identifies an area of display screen 20 around pointer 24 as having been selected by user 14 for the zoom command. By performing the gesture, hand 16 may move from an initial position 48 to a final position 50. Scale indicator 42 shows the symbols of remote information input interface 26. - In a similar fashion, counter-clockwise circular gestures may be interpreted by
remote device 18 as a command to decrease the zoom level. In the example of FIG. 2 , zoom level indicator 40 shows that, in comparison with FIG. 1 , the clockwise circular gesture has increased the zoom level from the initial value of 100% immediately prior to the gesture to a final value of 150%. A corresponding increase in the size of the symbols is shown on display screen 20 as a result of the zoom command. That is to say, one or more of the user interface elements appearing in the selected area on display screen 20 are magnified.
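- A minimal sketch of such a zoom update, assuming the 50%-500% range mentioned above and using the pointer as the fixed reference point (the names and the viewport model are hypothetical):

```python
# Illustrative sketch (assumed viewport model): stepping the magnification about the pointer
# so that the content under the pointer stays put, clamped to the 50%-500% range noted above.
from typing import Tuple

def apply_zoom(zoom: float, delta: float,
               pointer_xy: Tuple[float, float],
               viewport_origin: Tuple[float, float],
               lo: float = 0.5, hi: float = 5.0) -> Tuple[float, Tuple[float, float]]:
    """Return (new_zoom, new_viewport_origin) for one smooth zoom step centered on the pointer."""
    new_zoom = max(lo, min(hi, zoom * (1.0 + delta)))          # clamp to the allowed zoom range
    scale = new_zoom / zoom
    px, py = pointer_xy
    ox, oy = viewport_origin
    # A content point c is drawn at screen position origin + c * zoom; keep the point under
    # the pointer stationary while the zoom changes.
    return new_zoom, (px - (px - ox) * scale, py - (py - oy) * scale)
```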
- Reference is now made to FIG. 3 , which is a view of portions of system 10 (FIG. 1 ) operating under control of user 14 in accordance with an embodiment of the present invention. In the example of FIG. 3 , user 14 performs a leftward, substantially horizontal gesture using hand 16, as indicated by an arrow 52 on reference coordinate system 44. The leftward gesture commands remote device 18 to move pointer 24 to the left on remote information input interface 26. In the example of FIG. 3 , the leftward movement of pointer 24 (FIG. 2 ) proceeds from a first position, indicated by a cursor 54 (shown in broken outline), to a second position, indicated by a cursor 56 (shown in solid outline), as a result of the command. By performing the gesture, hand 16 may move from an initial position 58 to a final position 60. - A rightward gesture may be interpreted as a command to move
pointer 24 to the right from the perspective of user 14, while gestures performed upward and downward may similarly be interpreted by remote device 18 as commands to move pointer 24 upward and downward, respectively. - Suitable calibration of
sensing device 12 and remote device 18 assures a desired sensitivity, i.e., a correspondence between a spatiotemporal displacement of the control entity and the effect on elements shown on remote information input interface 26. It is recommended to compensate for the viewing distance and viewing angle of user 14 using known methods. The compensation techniques described in U.S. Patent Application Publication No. 2009/0009593, entitled “Three-dimensional Projection Display,” may be applied for this purpose.
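- One plausible, purely illustrative form of such a sensitivity mapping is sketched below; the gain model and the viewing-distance and viewing-angle compensation are assumptions and do not reproduce the techniques of the cited publication:

```python
# Purely illustrative (assumed gain model): converting a hand displacement in metres into a
# pointer displacement in pixels, with rough compensation for viewing distance and angle.
import math
from typing import Tuple

def pointer_delta(hand_dx_m: float, hand_dy_m: float,
                  viewing_distance_m: float, viewing_angle_deg: float,
                  base_gain_px_per_m: float = 1500.0) -> Tuple[float, float]:
    """Return the (dx, dy) pointer movement for a given hand movement of the control entity."""
    gain = base_gain_px_per_m * (viewing_distance_m / 2.0)       # assumed: coarser control farther away
    gain /= max(0.2, math.cos(math.radians(viewing_angle_deg)))  # assumed: offset oblique viewing
    return gain * hand_dx_m, gain * hand_dy_m
```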
- Reference is now made to FIG. 4 , which is a schematic, pictorial illustration of system 10 (FIG. 1 ) for remote information input, in accordance with an embodiment of the present invention. A first symbol arc 62 is shown within remote information input interface 26 on display screen 20, in an embodiment that implements a T9® text input layout. T9 text input represents “text on 9 keys,” a method for streamlining input of text on numeric keypads, typically for mobile devices, available at the T9 web site (t9.com). Many suitable variations will occur to those skilled in the art for streamlining information input by providing an improved symbol layout on remote information input interface 26. First symbol arc 62 comprises an arcuate, nearly semi-circular display of groups of alphanumeric symbol buttons 64 to simulate relaxed movement of hand 16 while user 14 sits comfortably on a chair 66. First symbol arc 62 may comprise additional symbol buttons 68 to support input of special symbols, e.g., space, backspace, or carriage return. - As
user 14 moves a control entity, such as hand 16, usually but not necessarily while seated, and typically in a horizontal arc-like movement as indicated by an approximately semi-circular arc 70 adjacent to reference coordinate system 44, sensing device 12 detects the movement and remote device 18 interprets the movement by highlighting each of additional symbol buttons 68 and alphanumeric symbol buttons 64 sequentially as hand 16 moves through semi-circular arc 70 from a first position 72 to a second position 74. Provision of an arcuate display enables hand 16 to move while an elbow 118 of the same upper extremity as hand 16 rests on chair 66. A portion of a sphere 122 is shaded within a spherical coordinate system 124 using an axis based upon elbow 118 to indicate an approximate range of motion of hand 16 when elbow 118 rests on a surface. In the arrangement of FIG. 4 , a three-dimensional space is mapped to spherical coordinate system 124, and is also mapped to a two-dimensional coordinate system. The latter can be conveniently appreciated as a plane in reference coordinate system 44. - It is recommended that the movement of
semi-circular arc 70 be parallel to the xy plane in reference coordinate system 44. However, the movement may also be made so that the angle between a plane of the motion of semi-circular arc 70 and the xy plane is above 0 degrees, typically up to 45 degrees. In the example of FIG. 4 , an emphasized symbol button 76 is shown on display screen 20 to indicate that performance of a second gesture, described hereinbelow as a selection gesture, will result in selection of the symbols displayed therein. That is to say, remote device 18 identifies emphasized symbol button 76 as the area of display screen 20 that is currently selected by user 14. Corresponding selection gestures performed by hand 16 at other points along semi-circular arc 70 would select corresponding symbols of first symbol arc 62. The first gesture and the second gesture may be recognized by remote device 18 according to time-varying coordinates on the two-dimensional coordinate system and the spherical coordinate system, respectively.
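- A simple sketch of how the azimuth of the hand about the resting elbow might be mapped to a button of the arc (the arc extent and coordinate conventions below are assumptions):

```python
# Illustrative sketch (assumed geometry): picking which button of the arcuate layout to
# highlight from the azimuth of the hand about the resting elbow.
import math
from typing import Tuple

def highlighted_button(hand_xyz: Tuple[float, float, float],
                       elbow_xyz: Tuple[float, float, float],
                       n_buttons: int,
                       arc_start_deg: float = -80.0,
                       arc_end_deg: float = 80.0) -> int:
    """Return the index (0..n_buttons-1) of the button the hand currently points toward."""
    dx = hand_xyz[0] - elbow_xyz[0]
    dz = hand_xyz[2] - elbow_xyz[2]
    azimuth = math.degrees(math.atan2(dx, dz))                 # sweep angle in the horizontal plane
    azimuth = max(arc_start_deg, min(arc_end_deg, azimuth))    # clamp to the extent of the arc
    fraction = (azimuth - arc_start_deg) / (arc_end_deg - arc_start_deg)
    return min(n_buttons - 1, int(fraction * n_buttons))
```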
- Reference is now additionally made to FIG. 5 , which is a view of portions of system 10 (FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention. The selection gesture is typically performed in two stages, as described hereinabove. User 14 may perform a first stage of the selection gesture by moving hand 16 downward in a vertical arc 78 generally directed toward display screen 20. Remote device 18 uses the threshold, described hereinabove, to determine completion of each selection stage. The selection gesture may pivot about the elbow or shoulder, whichever is applicable. Of course, when pivoting about the shoulder the advantages of resting a portion of the arm on a surface are lost. By performing the selection gesture, hand 16 may move from an initial position 80 to an intermediate position 130 while performing the first stage, and then to a final position 82 while performing a second stage. Sensing device 12 detects the movement, and remote device 18 interprets the selection gesture as a command to display a second symbol arc 84 directly below first symbol arc 62, comprising individually delineated symbol buttons 86, which are grouped together in highlighted symbol button 76. In the present example, user 14 next moves hand 16 in another arc-like movement, which is detected by sensing device 12. Remote device 18 interprets the movement by highlighting each of individually delineated symbol buttons 86 as described hereinabove. User 14 may then perform the second stage of the selection gesture by moving hand 16 further downward to remotely input one of individually delineated symbol buttons 86. The second stage of the selection gesture is indicated by a further downwardly directed vertical arc 120. Hand 16 may move from intermediate position 130 to final position 82 while performing the second stage. FIG. 4 and FIG. 5 may be viewed as a sequence of actions, whereby in FIG. 4 user 14 selects an area of display screen 20, e.g., one of additional symbol buttons 68, and in FIG. 5 selects one of the user interface elements, e.g., highlighted symbol button 76, in order to display second symbol arc 84 and to input one of individually delineated symbol buttons 86. - Use of arcuate displays like
first symbol arc 62 for remotely inputting information may provide particularly enhanced ergonomic value. The motions involved in their use for remote information input are not fatiguing, e.g., in comparison with a standard “QWERTY” keyboard layout. Virtual keyboard layouts such as the QWERTY keyboard layout may not as conveniently permit remote information input with a resting or fixed elbow position. - As noted above, the selection gesture is made by moving
hand 16 downwards. That is, it involves a forward displacement of the hand in the z-axis with respect to sensing device 12. There are two variants of the motion. In one case user 14 may move hand 16 towards the center (origin) of spherical coordinate system 124 (FIG. 4 ), such that the motion involves both a component in the xy plane and a component in the yz plane. In the other case, user 14 may move hand 16 directly downwards, such that only a component in the yz plane exists. In both cases, remote device 18 is able to distinguish the selection gesture from other linear or curved movements in the xy plane alone, by calculating the displacement of the hand along the z-axis. When the magnitude of the motion component in the z-axis is greater than another predefined threshold, remote device 18 interprets the movement as a selection gesture and ignores the motion component in the xy plane. In both cases, remote device 18 may provide enhanced ergonomic value when recognizing these selection gestures, as they allow user 14 to use a natural selection motion, as indicated by the location of hand 16 in the xy plane.
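- Expressed as a minimal sketch, the z-axis test described above might look like the following; the threshold value and sign convention (positive z taken as motion toward the display) are assumptions:

```python
# Illustrative sketch (assumed threshold and sign convention): a gesture is treated as a
# selection when its displacement along the z-axis exceeds the predefined threshold,
# regardless of any accompanying motion in the xy plane.
from typing import Tuple

def is_selection_gesture(start_xyz: Tuple[float, float, float],
                         end_xyz: Tuple[float, float, float],
                         z_threshold: float = 0.06) -> bool:
    dz = end_xyz[2] - start_xyz[2]     # positive z assumed to point from the user toward the display
    return dz > z_threshold            # xy components are ignored once the z displacement is large enough
```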
- If user 14 causes pointer 24 (FIG. 1 ) to hover over second symbol arc 84, remote device 18 may cause corresponding user interface elements, e.g., shortcuts, to appear on remote information input interface 26, offering suggestions for completing a word. In the example of FIG. 5 , letters “S” and “A” have been previously input, and the autocomplete feature of remote device 18 provides shortcuts offering candidate completions of the word. - Reference is now made to
FIG. 6 , which is a view of portions of system 10 (FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention. A domain-specific language model 132 may be used to determine the probability of a symbol or next letter being preferred by user 14. In the example of FIG. 6 , language model 132 is shown as a computer program module operated by device 18, and user 14 has previously input the letters “INVENTIO”. User 14 next moves hand 16 over emphasized symbol button 76. Device 18 uses the domain-specific language model and determines that the probability of the letter “N” is significantly higher than that of the other letters shown in emphasized symbol button 76, as shown in a shortcut 126. In some embodiments device 18 uses the domain-specific language model to shift an adjusted second symbol arc 128 to place the letter with the highest probability beneath emphasized symbol button 76. Both stages of the selection gesture may be performed by user 14 in a continuous motion to select a preferred letter, thus minimizing required movement by hand 16. In the context of the present application and claims, the term “language model” herein refers to any suitable statistical model for assigning a probability to a sequence of letters or words by means of a probability distribution.
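- As an illustration of the kind of statistical model contemplated here, the sketch below uses simple next-letter frequency counts as a stand-in for language model 132 and re-centers a button group so that its likeliest letter sits beneath the emphasized button. The class, corpus, and button group are hypothetical examples, not the patent's implementation:

```python
# Illustrative sketch (hypothetical model): letter-frequency statistics as a minimal stand-in
# for a domain-specific language model, used to centre the likeliest letter of a button group.
from collections import Counter
from typing import Dict, List

class NextLetterModel:
    def __init__(self, corpus: List[str]):
        self.counts: Dict[str, Counter] = {}
        for word in corpus:
            w = word.upper()
            for prefix_len in range(len(w)):
                self.counts.setdefault(w[:prefix_len], Counter())[w[prefix_len]] += 1

    def probability(self, prefix: str, letter: str) -> float:
        c = self.counts.get(prefix.upper(), Counter())
        total = sum(c.values())
        return c[letter.upper()] / total if total else 0.0

def centered_arc(group: List[str], prefix: str, model: NextLetterModel) -> List[str]:
    """Order a button group so its likeliest letter sits in the middle of the secondary arc."""
    ranked = sorted(group, key=lambda ch: model.probability(prefix, ch), reverse=True)
    best, rest = ranked[0], ranked[1:]
    return rest[: len(rest) // 2] + [best] + rest[len(rest) // 2:]

# Example: after the prefix "INVENTIO", a model trained on suitable text ranks "N" highest
# within the group ["M", "N", "O"], so "N" is placed in the centre of the secondary arc.
model = NextLetterModel(["invention", "inventor", "inventive"])
print(centered_arc(["M", "N", "O"], "INVENTIO", model))   # prints ['M', 'N', 'O'] with 'N' centred
```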
- Reference is now made to FIG. 7 , which is a view of portions of system 10 (FIG. 1 ) operating under remote control of user 14 in accordance with an embodiment of the present invention. User 14 may choose to perform a deselection gesture after inputting information remotely, or to deselect second symbol arc 84. The deselection gesture may comprise raising hand 16 as indicated by an upwardly directed vertical arc 100, which is a reversal of vertical arc 78 (FIG. 5 ). Sensing device 12 detects the deselection gesture, and remote device 18 interprets the movement as a command to cancel the selection of second symbol arc 84 (FIG. 5 ), which has responsively been removed from remote information input interface 26 in FIG. 7 . Thereafter, any movement by hand 16 after the deselection gesture has been performed is interpreted by remote device 18 as a command to resume highlighting alphanumeric symbol buttons 64 on first symbol arc 62 for selection. In performing the deselection gesture, hand 16 moves from an initial position 102 to a final position 104. - In alternative embodiments, after
user 14 performs the above-described selection gesture, remote device 18 automatically removes second symbol arc 84 without requiring the deselection gesture to be performed. Thus, remote information input requires less movement by user 14 than in the previous embodiment. Device 18 typically requires user 14 to return hand 16 to final position 104 before recognizing a new selection. - Embodiments of the present invention that utilize the T9 text input layout as symbol arcs on remote
information input interface 26 may provide an advantage whereby input is provided remotely without the need to move the control entity in three dimensions. As described hereinabove, moving hand 16 in an arcuate motion along semi-circular arc 70 (FIG. 4 ) is interpreted by remote device 18 as movement within two dimensions, e.g., leftward, rightward, upward and downward. By limiting semi-circular arc 70 to motions substantially parallel to the xz plane, 3-dimensional interpretation issues are avoided. A complex movement in three dimensions, e.g., to perform the point-and-click gesture, is not required, thus simplifying interpretation of the gesture and thereby facilitating remote information input. - Reference is now made to
FIG. 8 , which is a flow chart of a method for remotely interfacing with a computer system, in accordance with an embodiment of the present invention. By way of example, user 14 (FIG. 1 ) may need to search a large volume of media without using a physical keyboard or other interface connected to a remote device. User 14 would thus need to perform efficient, streamlined search commands to interact remotely with a computer application running on the remote device. The process steps are described below in a particular linear sequence for clarity of presentation. However, it will be evident that some of them can be performed in parallel, asynchronously, or in different orders. The process can be performed, for example, by system 10. - User interface elements comprising a remote information input interface to a computer application are presented to a user on a display screen in a
display presentation step 106. The computer application may be a media search and presentation system. It is assumed that the computer application has been loaded, and that a three-dimensional sensing device is in operation. The sensing device can be any three-dimensional sensor or camera, provided that it generates data for interpretation by the remote device. - The user performs a first gesture in a three-dimensional space using a control entity, e.g., a part of the user's body. A sensing device, such as sensing device 12 (
FIG. 1 ), detects the gesture made by the control entity, e.g.,hand 16, in agesture detecting step 108. The computer iteratively analyzes three-dimensional data provided by the sensing device, for example by constructing a three-dimensional map as described in commonly assigned co-pending U.S. application Ser. No. 12/683,452, which is herein incorporated by reference. In response to the detected gesture, an area of the display screen is identified by the computer in a selectedarea identification step 110. - The first gesture is recognized by the computer as a command to increase the magnification level of user interface elements within the selected area on the display screen in a magnification
level adjusting step 112. Any gesture recognition algorithm may be employed to carry out magnificationlevel adjusting step 112, so long as the system can relate the user gesture to a recognized command and a location of interest on the remote information input interface. - A second gesture is recognized by the computer as a command to select one of the user interface elements within the selected area in a selection
gesture detecting step 114. The second gesture can be for any purpose, for example to perform another zoom command, to input a symbol, or to alter the remote information input interface in accordance with the gesture identified. For example, the clockwise circular gesture command described with respect toFIG. 2 might correspond to an instruction to increase the zoom level of the remote information input interface on the display screen, while a counter-clockwise circular gesture, in which the direction of the motion is reversed, could result in an instruction to decrease the zoom level. Many such combinations will occur to a developer of computer applications or other signal processing systems. An updated display screen results, and is shown in subsequent iterations of the method. In practice the process iterates so long as the remote device is active or some error occurs. - The method then terminates at a
final step 116. - In some embodiments, the circular gestures comprise requiring at least one complete circle to be performed by the control entity before the zoom level is changed. In alternative embodiments, multiple control entities are used to perform the specialized gesture. For example, the zoom command may be input using a second hand (not shown) to complement hand 16 (
FIG. 1 ). Once remote device 18 recognizes the second hand by analyzing input from sensing device 12, movement of the second hand farther away from hand 16 may be interpreted as the zoom command to increase the zoom level, and vice versa. Using multiple control entities may provide an advantage wherein pointer 24 is not moved prior to the change in zoom level. Thus, the changes in magnification may be performed around pointer 24. - In variants of the embodiments of
FIG. 6 andFIG. 7 ,language model 132 is used to assign a probability to each letter on remoteinformation input interface 26.Device 18 may order the letters accordingly, displaying the letters in a continuous ungrouped series of individual letters, rather than in groups, e.g., the group of three letters displayed in emphasizedsymbol button 76.Device 18 may invite access to letters on the display having relatively high probabilities, e.g., by presenting them in closer proximity to the center of remoteinformation input interface 26. Alternatively, letters having relatively high and low probabilities may be grouped together and placed into secondary symbol arcs (not shown). - In yet another variant, the spatial distribution of letters in a symbol arc may reflect their respective probabilities. Thus, letters having relatively high and low probabilities of selection may be spaced apart and crowded together, respectively, in varying degrees.
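- A minimal sketch of such a probability-weighted layout, in which each letter's angular slot widens with its probability while the order of the letters is preserved (the arc extent and minimum share are assumed values):

```python
# Illustrative sketch (assumed parameters): spacing the letters of an ungrouped arc so that
# likelier letters get wider angular slots, without changing the order of the letters.
from typing import Dict, List, Tuple

def arc_layout(probabilities: Dict[str, float],
               arc_start_deg: float = -80.0,
               arc_end_deg: float = 80.0,
               min_share: float = 0.01) -> List[Tuple[str, float, float]]:
    """Return (letter, slot_start_deg, slot_end_deg) tuples; slot width grows with probability."""
    shares = {ch: max(p, min_share) for ch, p in probabilities.items()}  # keep rare letters reachable
    total = sum(shares.values())
    span = arc_end_deg - arc_start_deg
    layout, cursor = [], arc_start_deg
    for ch, share in shares.items():           # original letter order is preserved
        width = span * share / total
        layout.append((ch, cursor, cursor + width))
        cursor += width
    return layout
```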
- Other commercial methods for remote information input may be used in conjunction with the specialized gestures and command interpretation by remote devices using three-dimensional sensing described hereinabove. For example, concepts described by the MessagEase™ text input system, available for sale at the MessagEase web site (exideas.com), may be enhanced accordingly.
- In an alternative embodiment (not shown specifically in the figures), text input created by the user is shown as a linear stream of characters running across the screen, from left to right, for example. The user may perform a special gesture, recognized by the remote device, to mark a neutral reference position. Movement of the user's hand in a direction along the sequence, such as to the right of the reference position, will then cause the display to advance to the right along the text stream, whereas movement to the left will scroll backward through the text stream. The scroll speed presented by the remote device on the display may initially be slow when the user gestures sideways to the right or left and may gradually accelerate the longer the user's hand is in the advance or reverse position. As the text stream advances, the above-mentioned language model may be used to display alternative choices of additional characters and even words to append to the stream. These choices may be displayed above and/or below the existing line of characters, with the likeliest choices typically vertically closest to the line and possibly magnified. The user selects the desired choice by upward and downward motions of the hand, perpendicular to the direction of the text sequence. Thus, following the initial special gesture, the user can add text quickly and efficiently using simple right/left and up/down motions.
- The user's right/left and up/down hand motions may be made in a generally planar space or, if the elbow is resting on a surface as in some of the embodiments described above, may be over a generally spherical surface. In either case, the right/left and up/down motions are not limited to a two-dimensional plane, but may be mapped to a two-dimensional coordinate system by the remote device. Within this latter coordinate system, one dimension of hand movement controls the speed of scrolling forward and back (wherein backward movement may delete characters previously appended to the stream), while the other dimension controls the selection of new characters.
- This sort of embodiment may be used to present and add text input in a sort of continuous “flight mode”: As the user scrolls to the right (forward) to add text to the stream being created, various potential continuations of the existing text are presented to the right of the existing text, above and/or below the text line. The potential continuations may be ordered or otherwise presented in such a way that the effort necessary to select a given continuation is a decreasing function of the likelihood that the given continuation is the correct one, based on the language model (higher likelihood yields lower effort). For example, likelier continuations may be presented with larger size and/or in closer proximity to the current cursor position.
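- A compact sketch of this "flight mode" behavior, with assumed speeds, thresholds, and slot numbering (likelier continuations get vertical slots closer to the text line, so less hand travel is needed to select them):

```python
# Illustrative sketch (assumed parameters): "flight mode" text entry. The horizontal hand
# offset sets the scroll direction, the speed ramps up the longer the offset is held, and
# candidate continuations are ranked so likelier ones require less vertical effort to reach.
from typing import Dict, List, Tuple

def scroll_velocity(offset_x: float, held_s: float,
                    dead_zone: float = 0.03, base: float = 2.0, ramp: float = 1.5,
                    max_speed: float = 30.0) -> float:
    """Characters per second; the sign follows the hand offset from the neutral reference position."""
    if abs(offset_x) < dead_zone:
        return 0.0
    speed = min(max_speed, base + ramp * held_s)   # gradual acceleration while the offset is held
    return speed if offset_x > 0 else -speed       # negative speed scrolls (and may delete) backward

def continuation_slots(candidates: Dict[str, float]) -> List[Tuple[str, int]]:
    """Assign vertical slots so that slot distance (selection effort) falls with likelihood."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    slots: List[int] = []
    for k in range(len(ranked)):                   # 0, +1, -1, +2, -2, ... above/below the text line
        slots.append(0 if k == 0 else (k + 1) // 2 * (1 if k % 2 else -1))
    return [(text, slot) for (text, _), slot in zip(ranked, slots)]
```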
- It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof that are not in the prior art, which would occur to persons skilled in the art upon reading the foregoing description.
Claims (50)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/721,582 US20100235786A1 (en) | 2009-03-13 | 2010-03-11 | Enhanced 3d interfacing for remote devices |
US14/311,444 US20140304647A1 (en) | 2009-03-13 | 2014-06-23 | Enhanced 3d interfacing for remote devices |
US15/806,350 US10719214B2 (en) | 2009-03-13 | 2017-11-08 | Enhanced 3D interfacing for remote devices |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15980809P | 2009-03-13 | 2009-03-13 | |
US12/721,582 US20100235786A1 (en) | 2009-03-13 | 2010-03-11 | Enhanced 3d interfacing for remote devices |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/311,444 Continuation US20140304647A1 (en) | 2009-03-13 | 2014-06-23 | Enhanced 3d interfacing for remote devices |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100235786A1 true US20100235786A1 (en) | 2010-09-16 |
Family
ID=42728878
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/721,582 Abandoned US20100235786A1 (en) | 2009-03-13 | 2010-03-11 | Enhanced 3d interfacing for remote devices |
US14/311,444 Abandoned US20140304647A1 (en) | 2009-03-13 | 2014-06-23 | Enhanced 3d interfacing for remote devices |
US15/806,350 Active 2031-02-09 US10719214B2 (en) | 2009-03-13 | 2017-11-08 | Enhanced 3D interfacing for remote devices |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/311,444 Abandoned US20140304647A1 (en) | 2009-03-13 | 2014-06-23 | Enhanced 3d interfacing for remote devices |
US15/806,350 Active 2031-02-09 US10719214B2 (en) | 2009-03-13 | 2017-11-08 | Enhanced 3D interfacing for remote devices |
Country Status (2)
Country | Link |
---|---|
US (3) | US20100235786A1 (en) |
WO (1) | WO2010103482A2 (en) |
Cited By (116)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090031240A1 (en) * | 2007-07-27 | 2009-01-29 | Gesturetek, Inc. | Item selection using enhanced control |
US20090217211A1 (en) * | 2008-02-27 | 2009-08-27 | Gesturetek, Inc. | Enhanced input using recognized gestures |
US20100034457A1 (en) * | 2006-05-11 | 2010-02-11 | Tamir Berliner | Modeling of humanoid forms from depth maps |
US20100141684A1 (en) * | 2008-12-05 | 2010-06-10 | Kabushiki Kaisha Toshiba | Mobile communication device and method for scaling data up/down on touch screen |
US20100277411A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | User tracking feedback |
US20110032191A1 (en) * | 2009-08-04 | 2011-02-10 | Cooke Benjamin T | Video system and remote control with touch interface for supplemental content display |
US20110052006A1 (en) * | 2009-08-13 | 2011-03-03 | Primesense Ltd. | Extraction of skeletons from 3d maps |
US20110080475A1 (en) * | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Methods And Systems For Determining And Tracking Extremities Of A Target |
US20110081044A1 (en) * | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Systems And Methods For Removing A Background Of An Image |
US20110080336A1 (en) * | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Human Tracking System |
US20110211754A1 (en) * | 2010-03-01 | 2011-09-01 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
US20110302536A1 (en) * | 2010-06-07 | 2011-12-08 | Empire Technology Development Llc | User movement interpretation in computer generated reality |
US20110304649A1 (en) * | 2010-06-10 | 2011-12-15 | Microsoft Corporation | Character selection |
US20120005569A1 (en) * | 2010-07-05 | 2012-01-05 | Roh Hyeongseok | Mobile terminal and method for controlling the same |
US20120019460A1 (en) * | 2010-07-20 | 2012-01-26 | Hitachi Consumer Electronics Co., Ltd. | Input method and input apparatus |
US20120036479A1 (en) * | 2010-08-04 | 2012-02-09 | Shunichi Kasahara | Information processing apparatus, information processing method and program |
US20120042246A1 (en) * | 2010-06-10 | 2012-02-16 | Microsoft Corporation | Content gestures |
US20120198026A1 (en) * | 2011-01-27 | 2012-08-02 | Egain Communications Corporation | Personal web display and interaction experience system |
WO2012143829A2 (en) | 2011-04-20 | 2012-10-26 | Koninklijke Philips Electronics N.V. | Gesture based control of element or item |
US20120280916A1 (en) * | 2011-05-02 | 2012-11-08 | Verizon Patent And Licensing, Inc. | Methods and Systems for Facilitating Data Entry by Way of a Touch Screen |
WO2013000099A1 (en) * | 2011-06-29 | 2013-01-03 | Intel Corporation | Techniques for gesture recognition |
US8422034B2 (en) | 2010-04-21 | 2013-04-16 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US20130117027A1 (en) * | 2011-11-07 | 2013-05-09 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling electronic apparatus using recognition and motion recognition |
US8467072B2 (en) | 2011-02-14 | 2013-06-18 | Faro Technologies, Inc. | Target apparatus and method of making a measurement with the target apparatus |
US8467071B2 (en) | 2010-04-21 | 2013-06-18 | Faro Technologies, Inc. | Automatic measurement of dimensional data with a laser tracker |
US20130176219A1 (en) * | 2012-01-09 | 2013-07-11 | Samsung Electronics Co., Ltd. | Display apparatus and controlling method thereof |
US20130181897A1 (en) * | 2010-09-22 | 2013-07-18 | Shimane Prefectural Government | Operation input apparatus, operation input method, and program |
US8537371B2 (en) | 2010-04-21 | 2013-09-17 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US8582867B2 (en) | 2010-09-16 | 2013-11-12 | Primesense Ltd | Learning-based pose estimation from depth maps |
US8594425B2 (en) | 2010-05-31 | 2013-11-26 | Primesense Ltd. | Analysis of three-dimensional scenes |
US8615108B1 (en) | 2013-01-30 | 2013-12-24 | Imimtek, Inc. | Systems and methods for initializing motion tracking of human hands |
US8655021B2 (en) | 2012-06-25 | 2014-02-18 | Imimtek, Inc. | Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints |
US8724119B2 (en) | 2010-04-21 | 2014-05-13 | Faro Technologies, Inc. | Method for using a handheld appliance to select, lock onto, and track a retroreflector with a laser tracker |
US20140211047A1 (en) * | 2013-01-29 | 2014-07-31 | Samsung Electronics Co., Ltd. | Digital photographing apparatus and control method thereof |
US20140232650A1 (en) * | 2013-02-15 | 2014-08-21 | Microsoft Corporation | User Center-Of-Mass And Mass Distribution Extraction Using Depth Images |
US8830312B2 (en) | 2012-06-25 | 2014-09-09 | Aquifi, Inc. | Systems and methods for tracking human hands using parts based template matching within bounded regions |
US20140282278A1 (en) * | 2013-03-14 | 2014-09-18 | Glen J. Anderson | Depth-based user interface gesture control |
US20140282223A1 (en) * | 2013-03-13 | 2014-09-18 | Microsoft Corporation | Natural user interface scrolling and targeting |
EP2677397A3 (en) * | 2012-06-21 | 2014-10-08 | Fujitsu Limited | Character input method and information processing apparatus |
US8872762B2 (en) | 2010-12-08 | 2014-10-28 | Primesense Ltd. | Three dimensional user interface cursor control |
US8881051B2 (en) | 2011-07-05 | 2014-11-04 | Primesense Ltd | Zoom-based gesture user interface |
US8891827B2 (en) | 2009-10-07 | 2014-11-18 | Microsoft Corporation | Systems and methods for tracking a model |
US20140368434A1 (en) * | 2013-06-13 | 2014-12-18 | Microsoft Corporation | Generation of text by way of a touchless interface |
EP2816446A1 (en) * | 2013-06-20 | 2014-12-24 | LSI Corporation | User interface comprising radial layout soft keypad |
US8933876B2 (en) | 2010-12-13 | 2015-01-13 | Apple Inc. | Three dimensional user interface session control |
US8959013B2 (en) | 2010-09-27 | 2015-02-17 | Apple Inc. | Virtual keyboard for a non-tactile three dimensional user interface |
US9002099B2 (en) | 2011-09-11 | 2015-04-07 | Apple Inc. | Learning-based estimation of hand and finger pose |
JP2015510648A (en) * | 2012-02-24 | 2015-04-09 | アマゾン・テクノロジーズ、インコーポレイテッド | Navigation technique for multidimensional input |
US20150103004A1 (en) * | 2013-10-16 | 2015-04-16 | Leap Motion, Inc. | Velocity field interaction for free space gesture interface and control |
US9019267B2 (en) | 2012-10-30 | 2015-04-28 | Apple Inc. | Depth mapping with enhanced resolution |
US20150121314A1 (en) * | 2013-10-24 | 2015-04-30 | Jens Bombolowsky | Two-finger gestures |
US9030498B2 (en) | 2011-08-15 | 2015-05-12 | Apple Inc. | Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface |
US9035876B2 (en) | 2008-01-14 | 2015-05-19 | Apple Inc. | Three-dimensional user interface session control |
US9041914B2 (en) | 2013-03-15 | 2015-05-26 | Faro Technologies, Inc. | Three-dimensional coordinate scanner and method of operation |
US9047507B2 (en) | 2012-05-02 | 2015-06-02 | Apple Inc. | Upper-body skeleton extraction from depth maps |
US20150201124A1 (en) * | 2014-01-15 | 2015-07-16 | Samsung Electronics Co., Ltd. | Camera system and method for remotely controlling compositions of self-portrait pictures using hand gestures |
US9092665B2 (en) | 2013-01-30 | 2015-07-28 | Aquifi, Inc | Systems and methods for initializing motion tracking of human hands |
US9122311B2 (en) | 2011-08-24 | 2015-09-01 | Apple Inc. | Visual feedback for tactile and non-tactile user interfaces |
US9158375B2 (en) | 2010-07-20 | 2015-10-13 | Apple Inc. | Interactive reality augmentation for natural interaction |
US9164173B2 (en) | 2011-04-15 | 2015-10-20 | Faro Technologies, Inc. | Laser tracker that uses a fiber-optic coupler and an achromatic launch to align and collimate two wavelengths of light |
US20150301591A1 (en) * | 2012-10-31 | 2015-10-22 | Audi Ag | Method for inputting a control command for a component of a motor vehicle |
US20150332471A1 (en) * | 2014-05-14 | 2015-11-19 | Electronics And Telecommunications Research Institute | User hand detecting device for detecting user's hand region and method thereof |
US9201501B2 (en) | 2010-07-20 | 2015-12-01 | Apple Inc. | Adaptive projector |
US9207309B2 (en) | 2011-04-15 | 2015-12-08 | Faro Technologies, Inc. | Six degree-of-freedom laser tracker that cooperates with a remote line scanner |
US9218063B2 (en) | 2011-08-24 | 2015-12-22 | Apple Inc. | Sessionless pointing user interface |
US20150370472A1 (en) * | 2014-06-19 | 2015-12-24 | Xerox Corporation | 3-d motion control for document discovery and retrieval |
US9229534B2 (en) | 2012-02-28 | 2016-01-05 | Apple Inc. | Asymmetric mapping for tactile and non-tactile user interfaces |
US20160017656A1 (en) * | 2013-03-15 | 2016-01-21 | Springs Window Fashions, Llc | Window covering motorized lift and control operating system |
US9285874B2 (en) | 2011-02-09 | 2016-03-15 | Apple Inc. | Gaze detection in a 3D mapping environment |
US9298266B2 (en) | 2013-04-02 | 2016-03-29 | Aquifi, Inc. | Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects |
US9310891B2 (en) | 2012-09-04 | 2016-04-12 | Aquifi, Inc. | Method and system enabling natural user interface gestures with user wearable glasses |
US9377865B2 (en) | 2011-07-05 | 2016-06-28 | Apple Inc. | Zoom-based gesture user interface |
US9377863B2 (en) | 2012-03-26 | 2016-06-28 | Apple Inc. | Gaze-enhanced virtual touchscreen |
US9377885B2 (en) | 2010-04-21 | 2016-06-28 | Faro Technologies, Inc. | Method and apparatus for locking onto a retroreflector with a laser tracker |
US9395174B2 (en) | 2014-06-27 | 2016-07-19 | Faro Technologies, Inc. | Determining retroreflector orientation by optimizing spatial fit |
US9398243B2 (en) | 2011-01-06 | 2016-07-19 | Samsung Electronics Co., Ltd. | Display apparatus controlled by motion and motion control method thereof |
US9400170B2 (en) | 2010-04-21 | 2016-07-26 | Faro Technologies, Inc. | Automatic measurement of dimensional data within an acceptance region by a laser tracker |
US20160232404A1 (en) * | 2015-02-10 | 2016-08-11 | Yusuke KITAZONO | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US20160232674A1 (en) * | 2015-02-10 | 2016-08-11 | Wataru Tanaka | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US9459758B2 (en) | 2011-07-05 | 2016-10-04 | Apple Inc. | Gesture-based interface with enhanced features |
US9482529B2 (en) | 2011-04-15 | 2016-11-01 | Faro Technologies, Inc. | Three-dimensional coordinate scanner and method of operation |
US9482755B2 (en) | 2008-11-17 | 2016-11-01 | Faro Technologies, Inc. | Measurement system having air temperature compensation between a target and a laser tracker |
US9504920B2 (en) | 2011-04-25 | 2016-11-29 | Aquifi, Inc. | Method and system to create three-dimensional mapping in a two-dimensional game |
US9507417B2 (en) | 2014-01-07 | 2016-11-29 | Aquifi, Inc. | Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects |
US9513711B2 (en) | 2011-01-06 | 2016-12-06 | Samsung Electronics Co., Ltd. | Electronic device controlled by a motion and controlling method thereof using different motions to activate voice versus motion recognition |
TWI571768B (en) * | 2015-04-29 | 2017-02-21 | 由田新技股份有限公司 | A human interface synchronous system, device, method, computer readable media, and computer program product |
US20170068322A1 (en) * | 2015-09-04 | 2017-03-09 | Eyesight Mobile Technologies Ltd. | Gesture recognition control device |
US9600078B2 (en) | 2012-02-03 | 2017-03-21 | Aquifi, Inc. | Method and system enabling natural user interface gestures with an electronic system |
US9619105B1 (en) | 2014-01-30 | 2017-04-11 | Aquifi, Inc. | Systems and methods for gesture based interaction with viewpoint dependent user interfaces |
US9638507B2 (en) | 2012-01-27 | 2017-05-02 | Faro Technologies, Inc. | Measurement machine utilizing a barcode to identify an inspection plan for an object |
US9686532B2 (en) | 2011-04-15 | 2017-06-20 | Faro Technologies, Inc. | System and method of acquiring three-dimensional coordinates using multiple coordinate measurement devices |
US9772394B2 (en) | 2010-04-21 | 2017-09-26 | Faro Technologies, Inc. | Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker |
US20170277943A1 (en) * | 2016-03-25 | 2017-09-28 | Fuji Xerox Co., Ltd. | Hand-raising detection device, non-transitory computer readable medium, and hand-raising detection method |
US9798388B1 (en) | 2013-07-31 | 2017-10-24 | Aquifi, Inc. | Vibrotactile system to augment 3D input systems |
US9824293B2 (en) | 2015-02-10 | 2017-11-21 | Nintendo Co., Ltd. | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US9857868B2 (en) | 2011-03-19 | 2018-01-02 | The Board Of Trustees Of The Leland Stanford Junior University | Method and system for ergonomic touch-free interface |
US10025975B2 (en) | 2015-02-10 | 2018-07-17 | Nintendo Co., Ltd. | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US10043279B1 (en) | 2015-12-07 | 2018-08-07 | Apple Inc. | Robust detection and classification of body parts in a depth map |
US10338672B2 (en) * | 2011-02-18 | 2019-07-02 | Business Objects Software Ltd. | System and method for manipulating objects in a graphical user interface |
US10366278B2 (en) | 2016-09-20 | 2019-07-30 | Apple Inc. | Curvature-based face detector |
KR20190142290A (en) * | 2019-12-12 | 2019-12-26 | Samsung Electronics Co., Ltd. | Method for control a camera apparatus and the camera apparatus
US10817151B2 (en) | 2014-04-25 | 2020-10-27 | Dropbox, Inc. | Browsing and selecting content items based on user gestures |
US10901518B2 (en) | 2013-12-16 | 2021-01-26 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual cameras in the interaction space |
US10936145B2 (en) * | 2013-05-17 | 2021-03-02 | Ultrahaptics IP Two Limited | Dynamic interactive objects |
WO2021051200A1 (en) * | 2019-09-17 | 2021-03-25 | Huawei Technologies Co., Ltd. | User interface control based on elbow-anchored arm gestures |
US10963446B2 (en) | 2014-04-25 | 2021-03-30 | Dropbox, Inc. | Techniques for collapsing views of content items in a graphical user interface |
US11119577B2 (en) | 2013-02-01 | 2021-09-14 | Samsung Electronics Co., Ltd | Method of controlling an operation of a camera apparatus and a camera apparatus |
US11194404B2 (en) | 2013-05-17 | 2021-12-07 | Ultrahaptics IP Two Limited | Cursor mode switching |
US20220206563A1 (en) * | 2020-12-29 | 2022-06-30 | Snap Inc. | Body ui for augmented reality components |
US20230116341A1 (en) * | 2021-09-30 | 2023-04-13 | Futian ZHANG | Methods and apparatuses for hand gesture-based control of selection focus |
US20230214458A1 (en) * | 2016-02-17 | 2023-07-06 | Ultrahaptics IP Two Limited | Hand Pose Estimation for Machine Learning Based Gesture Recognition |
US20230229240A1 (en) * | 2022-01-20 | 2023-07-20 | Htc Corporation | Method for inputting letters, host, and computer readable storage medium |
US11841920B1 (en) | 2016-02-17 | 2023-12-12 | Ultrahaptics IP Two Limited | Machine learning based gesture recognition |
US11854308B1 (en) | 2016-02-17 | 2023-12-26 | Ultrahaptics IP Two Limited | Hand initialization for machine learning based gesture recognition |
US11875012B2 (en) | 2018-05-25 | 2024-01-16 | Ultrahaptics IP Two Limited | Throwable interface for augmented reality and virtual reality environments |
US11941166B2 (en) | 2020-12-29 | 2024-03-26 | Snap Inc. | Body UI for augmented reality components |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2474950B1 (en) | 2011-01-05 | 2013-08-21 | Softkinetic Software | Natural gesture based user interface methods and systems |
FR2980292B1 (en) | 2011-09-16 | 2013-10-11 | Prynel | METHOD AND SYSTEM FOR ACQUIRING AND PROCESSING IMAGES FOR MOTION DETECTION |
KR101237472B1 (en) * | 2011-12-30 | 2013-02-28 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling electronic apparatus thereof
US10585485B1 (en) * | 2014-11-10 | 2020-03-10 | Amazon Technologies, Inc. | Controlling content zoom level based on user head movement |
CN110581987A (en) * | 2018-06-07 | 2019-12-17 | Acer Inc. | Three-dimensional display with gesture sensing function
KR102582863B1 (en) | 2018-09-07 | 2023-09-27 | Samsung Electronics Co., Ltd. | Electronic device and method for recognizing user gestures based on user intention
Citations (94)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2007402A (en) * | 1931-01-02 | 1935-07-09 | Ericsson Telephones Ltd | Totalizator |
US4550250A (en) * | 1983-11-14 | 1985-10-29 | Hei, Inc. | Cordless digital graphics input device |
US4789921A (en) * | 1987-02-20 | 1988-12-06 | Minnesota Mining And Manufacturing Company | Cone shaped Fresnel reflector |
US4988981A (en) * | 1987-03-17 | 1991-01-29 | Vpl Research, Inc. | Computer data entry and manipulation apparatus and method |
US5264836A (en) * | 1991-01-15 | 1993-11-23 | Apple Computer, Inc. | Three dimensional cursor |
US5495576A (en) * | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
US5588139A (en) * | 1990-06-07 | 1996-12-24 | Vpl Research, Inc. | Method and system for generating objects for a multi-person virtual world using data flow networks |
US5594469A (en) * | 1995-02-21 | 1997-01-14 | Mitsubishi Electric Information Technology Center America Inc. | Hand gesture machine control system |
US5846134A (en) * | 1995-07-14 | 1998-12-08 | Latypov; Nurakhmed Nurislamovich | Method and apparatus for immersion of a user into virtual reality |
US5852672A (en) * | 1995-07-10 | 1998-12-22 | The Regents Of The University Of California | Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects |
US5862256A (en) * | 1996-06-14 | 1999-01-19 | International Business Machines Corporation | Distinguishing gestures from handwriting in a pen based computer by size discrimination |
US5864635A (en) * | 1996-06-14 | 1999-01-26 | International Business Machines Corporation | Distinguishing gestures from handwriting in a pen based computer by stroke analysis |
US5870196A (en) * | 1995-10-16 | 1999-02-09 | European Community | Optical three-dimensional profilometry method based on processing SPECKLE images in partially coherent light, and interferometer implementing such a method |
US5917937A (en) * | 1997-04-15 | 1999-06-29 | Microsoft Corporation | Method for performing stereo matching to recover depths, colors and opacities of surface elements |
US5973700A (en) * | 1992-09-16 | 1999-10-26 | Eastman Kodak Company | Method and apparatus for optimizing the resolution of images which have an apparent depth |
US6002808A (en) * | 1996-07-26 | 1999-12-14 | Mitsubishi Electric Information Technology Center America, Inc. | Hand gesture control system |
US6005548A (en) * | 1996-08-14 | 1999-12-21 | Latypov; Nurakhmed Nurislamovich | Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods |
US6064387A (en) * | 1998-01-23 | 2000-05-16 | Dell, Usa, L.P. | Animated cursor and icon for computers |
US6072494A (en) * | 1997-10-15 | 2000-06-06 | Electric Planet, Inc. | Method and apparatus for real-time gesture recognition |
US6084979A (en) * | 1996-06-20 | 2000-07-04 | Carnegie Mellon University | Method for creating virtual reality |
US6111580A (en) * | 1995-09-13 | 2000-08-29 | Kabushiki Kaisha Toshiba | Apparatus and method for controlling an electronic device with user action |
US6191773B1 (en) * | 1995-04-28 | 2001-02-20 | Matsushita Electric Industrial Co., Ltd. | Interface apparatus |
US6215890B1 (en) * | 1997-09-26 | 2001-04-10 | Matsushita Electric Industrial Co., Ltd. | Hand gesture recognizing device |
US6243054B1 (en) * | 1998-07-01 | 2001-06-05 | Deluca Michael | Stereoscopic user interface method and apparatus |
US6252988B1 (en) * | 1998-07-09 | 2001-06-26 | Lucent Technologies Inc. | Method and apparatus for character recognition using stop words |
US6262740B1 (en) * | 1997-08-01 | 2001-07-17 | Terarecon, Inc. | Method for rendering sections of a volume data set |
US6345111B1 (en) * | 1997-02-28 | 2002-02-05 | Kabushiki Kaisha Toshiba | Multi-modal interface apparatus and method |
US6345893B2 (en) * | 1998-06-15 | 2002-02-12 | Vega Vista, Inc. | Ergonomic systems and methods for operating computers |
US20020057383A1 (en) * | 1998-10-13 | 2002-05-16 | Ryuichi Iwamura | Motion sensing interface |
US20020071607A1 (en) * | 2000-10-31 | 2002-06-13 | Akinori Kawamura | Apparatus, method, and program for handwriting recognition |
US6452584B1 (en) * | 1997-04-23 | 2002-09-17 | Modern Cartoon, Ltd. | System for data management based on hand gestures |
US6456262B1 (en) * | 2000-05-09 | 2002-09-24 | Intel Corporation | Microdisplay with eye gaze detection |
US20020158873A1 (en) * | 2001-01-26 | 2002-10-31 | Todd Williamson | Real-time virtual viewpoint in simulated reality environment |
US6507353B1 (en) * | 1999-12-10 | 2003-01-14 | Godot Huard | Influencing virtual actors in an interactive environment |
US6512838B1 (en) * | 1999-09-22 | 2003-01-28 | Canesta, Inc. | Methods for enhancing performance and data acquired from three-dimensional image systems |
US6519363B1 (en) * | 1999-01-13 | 2003-02-11 | International Business Machines Corporation | Method and system for automatically segmenting and recognizing handwritten Chinese characters |
US20030057972A1 (en) * | 1999-07-26 | 2003-03-27 | Paul Pfaff | Voltage testing and measurement |
US20030088463A1 (en) * | 1999-10-21 | 2003-05-08 | Steven Fischman | System and method for group advertisement optimization |
US20030156756A1 (en) * | 2002-02-15 | 2003-08-21 | Gokturk Salih Burak | Gesture recognition system using depth perceptive sensors |
US20030185444A1 (en) * | 2002-01-10 | 2003-10-02 | Tadashi Honda | Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing |
US20030227453A1 (en) * | 2002-04-09 | 2003-12-11 | Klaus-Peter Beier | Method, system and computer program product for automatically creating an animated 3-D scenario from human position and path data |
US20030235341A1 (en) * | 2002-04-11 | 2003-12-25 | Gokturk Salih Burak | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications |
US6681031B2 (en) * | 1998-08-10 | 2004-01-20 | Cybernet Systems Corporation | Gesture-controlled interfaces for self-service machines and other applications |
US6686921B1 (en) * | 2000-08-01 | 2004-02-03 | International Business Machines Corporation | Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object |
US6690370B2 (en) * | 1995-06-07 | 2004-02-10 | Geovector Corp. | Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time |
US20040046744A1 (en) * | 1999-11-04 | 2004-03-11 | Canesta, Inc. | Method and apparatus for entering data using a virtual input device |
US6741251B2 (en) * | 2001-08-16 | 2004-05-25 | Hewlett-Packard Development Company, L.P. | Method and apparatus for varying focus in a scene |
US20040104935A1 (en) * | 2001-01-26 | 2004-06-03 | Todd Williamson | Virtual reality immersion system |
US20040135744A1 (en) * | 2001-08-10 | 2004-07-15 | Oliver Bimber | Virtual showcases |
US20040155962A1 (en) * | 2003-02-11 | 2004-08-12 | Marks Richard L. | Method and apparatus for real time motion capture |
US20040174770A1 (en) * | 2002-11-27 | 2004-09-09 | Rees Frank L. | Gauss-Rees parametric ultrawideband system |
US6791540B1 (en) * | 1999-06-11 | 2004-09-14 | Canon Kabushiki Kaisha | Image processing apparatus |
US20040184640A1 (en) * | 2003-03-17 | 2004-09-23 | Samsung Electronics Co., Ltd. | Spatial motion recognition system and method using a virtual handwriting plane |
US20040183775A1 (en) * | 2002-12-13 | 2004-09-23 | Reactrix Systems | Interactive directed light/sound system |
US20040184659A1 (en) * | 2003-03-17 | 2004-09-23 | Samsung Electronics Co., Ltd. | Handwriting trajectory recognition system and method |
US20040193413A1 (en) * | 2003-03-25 | 2004-09-30 | Wilson Andrew D. | Architecture for controlling a computer using hand gestures |
US6803928B2 (en) * | 2000-06-06 | 2004-10-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Extended virtual table: an optical extension for table-like projection systems |
US6853935B2 (en) * | 2000-11-30 | 2005-02-08 | Canon Kabushiki Kaisha | Information processing apparatus, mixed reality presentation apparatus, method thereof, and storage medium |
US20050031166A1 (en) * | 2003-05-29 | 2005-02-10 | Kikuo Fujimura | Visual tracking using depth data |
US6857746B2 (en) * | 2002-07-01 | 2005-02-22 | Io2 Technology, Llc | Method and system for free-space imaging display and interface |
US20050088407A1 (en) * | 2003-10-24 | 2005-04-28 | Matthew Bell | Method and system for managing an interactive video display system |
US20050089194A1 (en) * | 2003-10-24 | 2005-04-28 | Matthew Bell | Method and system for processing captured image information in an interactive video display system |
US20050110964A1 (en) * | 2002-05-28 | 2005-05-26 | Matthew Bell | Interactive video window display system |
US20050122308A1 (en) * | 2002-05-28 | 2005-06-09 | Matthew Bell | Self-contained interactive video display system |
US20050162381A1 (en) * | 2002-05-28 | 2005-07-28 | Matthew Bell | Self-contained interactive video display system |
US20050190972A1 (en) * | 2004-02-11 | 2005-09-01 | Thomas Graham A. | System and method for position determination |
US6951515B2 (en) * | 1999-06-11 | 2005-10-04 | Canon Kabushiki Kaisha | Game apparatus for mixed reality space, image processing method thereof, and program storage medium |
US20050254726A1 (en) * | 2004-02-25 | 2005-11-17 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer program products for imperceptibly embedding structured light patterns in projected color images for display on planar and non-planar surfaces |
US20050265583A1 (en) * | 1999-03-08 | 2005-12-01 | Vulcan Patents Llc | Three dimensional object pose estimation which employs dense depth information |
US6977654B2 (en) * | 2002-10-30 | 2005-12-20 | Iviz, Inc. | Data visualization with animated speedometer dial charts |
US20060010400A1 (en) * | 2004-06-28 | 2006-01-12 | Microsoft Corporation | Recognizing gestures and using gestures for interacting with software applications |
US7023436B2 (en) * | 2000-04-19 | 2006-04-04 | Sony Corporation | Three-dimensional model processing device, three-dimensional model processing method, program providing medium |
US20060092138A1 (en) * | 2004-10-29 | 2006-05-04 | Microsoft Corporation | Systems and methods for interacting with a computer through handwriting to a screen |
US7042440B2 (en) * | 1997-08-22 | 2006-05-09 | Pryor Timothy R | Man machine interfaces and applications |
US7042442B1 (en) * | 2000-06-27 | 2006-05-09 | International Business Machines Corporation | Virtual invisible keyboard |
US20060110008A1 (en) * | 2003-11-14 | 2006-05-25 | Roel Vertegaal | Method and apparatus for calibration-free eye tracking |
US20060115155A1 (en) * | 2000-11-10 | 2006-06-01 | Microsoft Corporation | Implicit page breaks for digitally represented handwriting |
US20060139314A1 (en) * | 2002-05-28 | 2006-06-29 | Matthew Bell | Interactive video display system |
US7227526B2 (en) * | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
US20070130547A1 (en) * | 2005-12-01 | 2007-06-07 | Navisense, Llc | Method and system for touchless user interface control |
US20080170776A1 (en) * | 2007-01-12 | 2008-07-17 | Albertson Jacob C | Controlling resource access based on user gesturing in a 3d captured image stream of the user |
US20090027337A1 (en) * | 2007-07-27 | 2009-01-29 | Gesturetek, Inc. | Enhanced camera-based input |
US20090040215A1 (en) * | 2007-08-10 | 2009-02-12 | Nitin Afzulpurkar | Interpreting Sign Language Gestures |
US7590941B2 (en) * | 2003-10-09 | 2009-09-15 | Hewlett-Packard Development Company, L.P. | Communication and collaboration system using rich media environments |
US7762665B2 (en) * | 2003-03-21 | 2010-07-27 | Queen's University At Kingston | Method and apparatus for communication between humans and devices |
US20110018795A1 (en) * | 2009-07-27 | 2011-01-27 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling electronic device using user interaction |
US20110081072A1 (en) * | 2008-06-13 | 2011-04-07 | Techno Dream 21 Co., Ltd. | Image processing device, image processing method, and program |
US20110164141A1 (en) * | 2008-07-21 | 2011-07-07 | Marius Tico | Electronic Device Directional Audio-Video Capture |
US20110193939A1 (en) * | 2010-02-09 | 2011-08-11 | Microsoft Corporation | Physical interaction zone for gesture-based user interfaces |
US20110211754A1 (en) * | 2010-03-01 | 2011-09-01 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
US20110254765A1 (en) * | 2010-04-18 | 2011-10-20 | Primesense Ltd. | Remote text input using handwriting |
US20110292036A1 (en) * | 2010-05-31 | 2011-12-01 | Primesense Ltd. | Depth sensor with application interface |
US20110310010A1 (en) * | 2010-06-17 | 2011-12-22 | Primesense Ltd. | Gesture based user interface |
US20120078614A1 (en) * | 2010-09-27 | 2012-03-29 | Primesense Ltd. | Virtual keyboard for a non-tactile three dimensional user interface |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8905834B2 (en) * | 2007-11-09 | 2014-12-09 | Igt | Transparent card display |
US7934156B2 (en) * | 2006-09-06 | 2011-04-26 | Apple Inc. | Deletion gestures on a portable multifunction device |
CN101874404B (en) * | 2007-09-24 | 2013-09-18 | Qualcomm Incorporated | Enhanced interface for voice and video communications
KR20100101389A (en) | 2009-03-09 | 2010-09-17 | Samsung Electronics Co., Ltd. | Display apparatus for providing a user menu, and method for providing ui applied thereto
US8200321B2 (en) | 2009-05-20 | 2012-06-12 | Sotera Wireless, Inc. | Method for measuring patient posture and vital signs |
US20110289455A1 (en) | 2010-05-18 | 2011-11-24 | Microsoft Corporation | Gestures And Gesture Recognition For Manipulating A User-Interface |
- 2010
  - 2010-03-11 US US12/721,582 patent/US20100235786A1/en not_active Abandoned
  - 2010-03-11 WO PCT/IB2010/051055 patent/WO2010103482A2/en active Application Filing
- 2014
  - 2014-06-23 US US14/311,444 patent/US20140304647A1/en not_active Abandoned
- 2017
  - 2017-11-08 US US15/806,350 patent/US10719214B2/en active Active
Patent Citations (100)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2007402A (en) * | 1931-01-02 | 1935-07-09 | Ericsson Telephones Ltd | Totalizator |
US4550250A (en) * | 1983-11-14 | 1985-10-29 | Hei, Inc. | Cordless digital graphics input device |
US4789921A (en) * | 1987-02-20 | 1988-12-06 | Minnesota Mining And Manufacturing Company | Cone shaped Fresnel reflector |
US4988981B1 (en) * | 1987-03-17 | 1999-05-18 | Vpl Newco Inc | Computer data entry and manipulation apparatus and method |
US4988981A (en) * | 1987-03-17 | 1991-01-29 | Vpl Research, Inc. | Computer data entry and manipulation apparatus and method |
US5588139A (en) * | 1990-06-07 | 1996-12-24 | Vpl Research, Inc. | Method and system for generating objects for a multi-person virtual world using data flow networks |
US5264836A (en) * | 1991-01-15 | 1993-11-23 | Apple Computer, Inc. | Three dimensional cursor |
US5973700A (en) * | 1992-09-16 | 1999-10-26 | Eastman Kodak Company | Method and apparatus for optimizing the resolution of images which have an apparent depth |
US5495576A (en) * | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
US5594469A (en) * | 1995-02-21 | 1997-01-14 | Mitsubishi Electric Information Technology Center America Inc. | Hand gesture machine control system |
US6191773B1 (en) * | 1995-04-28 | 2001-02-20 | Matsushita Electric Industrial Co., Ltd. | Interface apparatus |
US6690370B2 (en) * | 1995-06-07 | 2004-02-10 | Geovector Corp. | Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time |
US5852672A (en) * | 1995-07-10 | 1998-12-22 | The Regents Of The University Of California | Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects |
US5846134A (en) * | 1995-07-14 | 1998-12-08 | Latypov; Nurakhmed Nurislamovich | Method and apparatus for immersion of a user into virtual reality |
US6111580A (en) * | 1995-09-13 | 2000-08-29 | Kabushiki Kaisha Toshiba | Apparatus and method for controlling an electronic device with user action |
US5870196A (en) * | 1995-10-16 | 1999-02-09 | European Community | Optical three-dimensional profilometry method based on processing SPECKLE images in partially coherent light, and interferometer implementing such a method |
US5862256A (en) * | 1996-06-14 | 1999-01-19 | International Business Machines Corporation | Distinguishing gestures from handwriting in a pen based computer by size discrimination |
US5864635A (en) * | 1996-06-14 | 1999-01-26 | International Business Machines Corporation | Distinguishing gestures from handwriting in a pen based computer by stroke analysis |
US6084979A (en) * | 1996-06-20 | 2000-07-04 | Carnegie Mellon University | Method for creating virtual reality |
US6002808A (en) * | 1996-07-26 | 1999-12-14 | Mitsubishi Electric Information Technology Center America, Inc. | Hand gesture control system |
US6005548A (en) * | 1996-08-14 | 1999-12-21 | Latypov; Nurakhmed Nurislamovich | Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods |
US6345111B1 (en) * | 1997-02-28 | 2002-02-05 | Kabushiki Kaisha Toshiba | Multi-modal interface apparatus and method |
US5917937A (en) * | 1997-04-15 | 1999-06-29 | Microsoft Corporation | Method for performing stereo matching to recover depths, colors and opacities of surface elements |
US6452584B1 (en) * | 1997-04-23 | 2002-09-17 | Modern Cartoon, Ltd. | System for data management based on hand gestures |
US6262740B1 (en) * | 1997-08-01 | 2001-07-17 | Terarecon, Inc. | Method for rendering sections of a volume data set |
US7042440B2 (en) * | 1997-08-22 | 2006-05-09 | Pryor Timothy R | Man machine interfaces and applications |
US6215890B1 (en) * | 1997-09-26 | 2001-04-10 | Matsushita Electric Industrial Co., Ltd. | Hand gesture recognizing device |
US6256033B1 (en) * | 1997-10-15 | 2001-07-03 | Electric Planet | Method and apparatus for real-time gesture recognition |
US6072494A (en) * | 1997-10-15 | 2000-06-06 | Electric Planet, Inc. | Method and apparatus for real-time gesture recognition |
US6064387A (en) * | 1998-01-23 | 2000-05-16 | Dell, Usa, L.P. | Animated cursor and icon for computers |
US6345893B2 (en) * | 1998-06-15 | 2002-02-12 | Vega Vista, Inc. | Ergonomic systems and methods for operating computers |
US6559813B1 (en) * | 1998-07-01 | 2003-05-06 | Deluca Michael | Selective real image obstruction in a virtual reality display apparatus and method |
US6243054B1 (en) * | 1998-07-01 | 2001-06-05 | Deluca Michael | Stereoscopic user interface method and apparatus |
US6252988B1 (en) * | 1998-07-09 | 2001-06-26 | Lucent Technologies Inc. | Method and apparatus for character recognition using stop words |
US6681031B2 (en) * | 1998-08-10 | 2004-01-20 | Cybernet Systems Corporation | Gesture-controlled interfaces for self-service machines and other applications |
US20020057383A1 (en) * | 1998-10-13 | 2002-05-16 | Ryuichi Iwamura | Motion sensing interface |
US6519363B1 (en) * | 1999-01-13 | 2003-02-11 | International Business Machines Corporation | Method and system for automatically segmenting and recognizing handwritten Chinese characters |
US20050265583A1 (en) * | 1999-03-08 | 2005-12-01 | Vulcan Patents Llc | Three dimensional object pose estimation which employs dense depth information |
US7003134B1 (en) * | 1999-03-08 | 2006-02-21 | Vulcan Patents Llc | Three dimensional object pose estimation which employs dense depth information |
US6951515B2 (en) * | 1999-06-11 | 2005-10-04 | Canon Kabushiki Kaisha | Game apparatus for mixed reality space, image processing method thereof, and program storage medium |
US6791540B1 (en) * | 1999-06-11 | 2004-09-14 | Canon Kabushiki Kaisha | Image processing apparatus |
US20030057972A1 (en) * | 1999-07-26 | 2003-03-27 | Paul Pfaff | Voltage testing and measurement |
US6512838B1 (en) * | 1999-09-22 | 2003-01-28 | Canesta, Inc. | Methods for enhancing performance and data acquired from three-dimensional image systems |
US20030063775A1 (en) * | 1999-09-22 | 2003-04-03 | Canesta, Inc. | Methods for enhancing performance and data acquired from three-dimensional image systems |
US20030088463A1 (en) * | 1999-10-21 | 2003-05-08 | Steven Fischman | System and method for group advertisement optimization |
US20040046744A1 (en) * | 1999-11-04 | 2004-03-11 | Canesta, Inc. | Method and apparatus for entering data using a virtual input device |
US6507353B1 (en) * | 1999-12-10 | 2003-01-14 | Godot Huard | Influencing virtual actors in an interactive environment |
US7023436B2 (en) * | 2000-04-19 | 2006-04-04 | Sony Corporation | Three-dimensional model processing device, three-dimensional model processing method, program providing medium |
US6456262B1 (en) * | 2000-05-09 | 2002-09-24 | Intel Corporation | Microdisplay with eye gaze detection |
US6803928B2 (en) * | 2000-06-06 | 2004-10-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Extended virtual table: an optical extension for table-like projection systems |
US7042442B1 (en) * | 2000-06-27 | 2006-05-09 | International Business Machines Corporation | Virtual invisible keyboard |
US7227526B2 (en) * | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
US6686921B1 (en) * | 2000-08-01 | 2004-02-03 | International Business Machines Corporation | Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object |
US7013046B2 (en) * | 2000-10-31 | 2006-03-14 | Kabushiki Kaisha Toshiba | Apparatus, method, and program for handwriting recognition |
US20020071607A1 (en) * | 2000-10-31 | 2002-06-13 | Akinori Kawamura | Apparatus, method, and program for handwriting recognition |
US20060115155A1 (en) * | 2000-11-10 | 2006-06-01 | Microsoft Corporation | Implicit page breaks for digitally represented handwriting |
US6853935B2 (en) * | 2000-11-30 | 2005-02-08 | Canon Kabushiki Kaisha | Information processing apparatus, mixed reality presentation apparatus, method thereof, and storage medium |
US20020158873A1 (en) * | 2001-01-26 | 2002-10-31 | Todd Williamson | Real-time virtual viewpoint in simulated reality environment |
US20040104935A1 (en) * | 2001-01-26 | 2004-06-03 | Todd Williamson | Virtual reality immersion system |
US20040135744A1 (en) * | 2001-08-10 | 2004-07-15 | Oliver Bimber | Virtual showcases |
US6741251B2 (en) * | 2001-08-16 | 2004-05-25 | Hewlett-Packard Development Company, L.P. | Method and apparatus for varying focus in a scene |
US20030185444A1 (en) * | 2002-01-10 | 2003-10-02 | Tadashi Honda | Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing |
US20030156756A1 (en) * | 2002-02-15 | 2003-08-21 | Gokturk Salih Burak | Gesture recognition system using depth perceptive sensors |
US20030227453A1 (en) * | 2002-04-09 | 2003-12-11 | Klaus-Peter Beier | Method, system and computer program product for automatically creating an animated 3-D scenario from human position and path data |
US20030235341A1 (en) * | 2002-04-11 | 2003-12-25 | Gokturk Salih Burak | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications |
US20050110964A1 (en) * | 2002-05-28 | 2005-05-26 | Matthew Bell | Interactive video window display system |
US20050122308A1 (en) * | 2002-05-28 | 2005-06-09 | Matthew Bell | Self-contained interactive video display system |
US20050162381A1 (en) * | 2002-05-28 | 2005-07-28 | Matthew Bell | Self-contained interactive video display system |
US20060139314A1 (en) * | 2002-05-28 | 2006-06-29 | Matthew Bell | Interactive video display system |
US6857746B2 (en) * | 2002-07-01 | 2005-02-22 | Io2 Technology, Llc | Method and system for free-space imaging display and interface |
US6977654B2 (en) * | 2002-10-30 | 2005-12-20 | Iviz, Inc. | Data visualization with animated speedometer dial charts |
US20040174770A1 (en) * | 2002-11-27 | 2004-09-09 | Rees Frank L. | Gauss-Rees parametric ultrawideband system |
US20040183775A1 (en) * | 2002-12-13 | 2004-09-23 | Reactrix Systems | Interactive directed light/sound system |
US20040155962A1 (en) * | 2003-02-11 | 2004-08-12 | Marks Richard L. | Method and apparatus for real time motion capture |
US20040184640A1 (en) * | 2003-03-17 | 2004-09-23 | Samsung Electronics Co., Ltd. | Spatial motion recognition system and method using a virtual handwriting plane |
US20040184659A1 (en) * | 2003-03-17 | 2004-09-23 | Samsung Electronics Co., Ltd. | Handwriting trajectory recognition system and method |
US7762665B2 (en) * | 2003-03-21 | 2010-07-27 | Queen's University At Kingston | Method and apparatus for communication between humans and devices |
US20040193413A1 (en) * | 2003-03-25 | 2004-09-30 | Wilson Andrew D. | Architecture for controlling a computer using hand gestures |
US20050031166A1 (en) * | 2003-05-29 | 2005-02-10 | Kikuo Fujimura | Visual tracking using depth data |
US7590941B2 (en) * | 2003-10-09 | 2009-09-15 | Hewlett-Packard Development Company, L.P. | Communication and collaboration system using rich media environments |
US20050089194A1 (en) * | 2003-10-24 | 2005-04-28 | Matthew Bell | Method and system for processing captured image information in an interactive video display system |
US20050088407A1 (en) * | 2003-10-24 | 2005-04-28 | Matthew Bell | Method and system for managing an interactive video display system |
US20060110008A1 (en) * | 2003-11-14 | 2006-05-25 | Roel Vertegaal | Method and apparatus for calibration-free eye tracking |
US20050190972A1 (en) * | 2004-02-11 | 2005-09-01 | Thomas Graham A. | System and method for position determination |
US20050254726A1 (en) * | 2004-02-25 | 2005-11-17 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer program products for imperceptibly embedding structured light patterns in projected color images for display on planar and non-planar surfaces |
US20060010400A1 (en) * | 2004-06-28 | 2006-01-12 | Microsoft Corporation | Recognizing gestures and using gestures for interacting with software applications |
US20060092138A1 (en) * | 2004-10-29 | 2006-05-04 | Microsoft Corporation | Systems and methods for interacting with a computer through handwriting to a screen |
US20070130547A1 (en) * | 2005-12-01 | 2007-06-07 | Navisense, Llc | Method and system for touchless user interface control |
US20080170776A1 (en) * | 2007-01-12 | 2008-07-17 | Albertson Jacob C | Controlling resource access based on user gesturing in a 3d captured image stream of the user |
US20090027337A1 (en) * | 2007-07-27 | 2009-01-29 | Gesturetek, Inc. | Enhanced camera-based input |
US20090040215A1 (en) * | 2007-08-10 | 2009-02-12 | Nitin Afzulpurkar | Interpreting Sign Language Gestures |
US20110081072A1 (en) * | 2008-06-13 | 2011-04-07 | Techno Dream 21 Co., Ltd. | Image processing device, image processing method, and program |
US20110164141A1 (en) * | 2008-07-21 | 2011-07-07 | Marius Tico | Electronic Device Directional Audio-Video Capture |
US20110018795A1 (en) * | 2009-07-27 | 2011-01-27 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling electronic device using user interaction |
US20110193939A1 (en) * | 2010-02-09 | 2011-08-11 | Microsoft Corporation | Physical interaction zone for gesture-based user interfaces |
US20110211754A1 (en) * | 2010-03-01 | 2011-09-01 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
US20110254765A1 (en) * | 2010-04-18 | 2011-10-20 | Primesense Ltd. | Remote text input using handwriting |
US20110292036A1 (en) * | 2010-05-31 | 2011-12-01 | Primesense Ltd. | Depth sensor with application interface |
US20110310010A1 (en) * | 2010-06-17 | 2011-12-22 | Primesense Ltd. | Gesture based user interface |
US20120078614A1 (en) * | 2010-09-27 | 2012-03-29 | Primesense Ltd. | Virtual keyboard for a non-tactile three dimensional user interface |
Cited By (228)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100034457A1 (en) * | 2006-05-11 | 2010-02-11 | Tamir Berliner | Modeling of humanoid forms from depth maps |
US8249334B2 (en) | 2006-05-11 | 2012-08-21 | Primesense Ltd. | Modeling of humanoid forms from depth maps |
US10509536B2 (en) | 2007-07-27 | 2019-12-17 | Qualcomm Incorporated | Item selection using enhanced control |
US8726194B2 (en) | 2007-07-27 | 2014-05-13 | Qualcomm Incorporated | Item selection using enhanced control |
US8659548B2 (en) | 2007-07-27 | 2014-02-25 | Qualcomm Incorporated | Enhanced camera-based input |
US11960706B2 (en) | 2007-07-27 | 2024-04-16 | Qualcomm Incorporated | Item selection using enhanced control |
US20090031240A1 (en) * | 2007-07-27 | 2009-01-29 | Gesturetek, Inc. | Item selection using enhanced control |
US10268339B2 (en) | 2007-07-27 | 2019-04-23 | Qualcomm Incorporated | Enhanced camera-based input |
US11500514B2 (en) | 2007-07-27 | 2022-11-15 | Qualcomm Incorporated | Item selection using enhanced control |
US9035876B2 (en) | 2008-01-14 | 2015-05-19 | Apple Inc. | Three-dimensional user interface session control |
US8555207B2 (en) | 2008-02-27 | 2013-10-08 | Qualcomm Incorporated | Enhanced input using recognized gestures |
US11954265B2 (en) | 2008-02-27 | 2024-04-09 | Qualcomm Incorporated | Enhanced input using recognized gestures |
US9164591B2 (en) | 2008-02-27 | 2015-10-20 | Qualcomm Incorporated | Enhanced input using recognized gestures |
US20090217211A1 (en) * | 2008-02-27 | 2009-08-27 | Gesturetek, Inc. | Enhanced input using recognized gestures |
US11561620B2 (en) | 2008-02-27 | 2023-01-24 | Qualcomm Incorporated | Enhanced input using recognized gestures |
US9507432B2 (en) | 2008-02-27 | 2016-11-29 | Qualcomm Incorporated | Enhanced input using recognized gestures |
US10025390B2 (en) | 2008-02-27 | 2018-07-17 | Qualcomm Incorporated | Enhanced input using recognized gestures |
US9453913B2 (en) | 2008-11-17 | 2016-09-27 | Faro Technologies, Inc. | Target apparatus for three-dimensional measurement system |
US9482755B2 (en) | 2008-11-17 | 2016-11-01 | Faro Technologies, Inc. | Measurement system having air temperature compensation between a target and a laser tracker |
US20100141684A1 (en) * | 2008-12-05 | 2010-06-10 | Kabushiki Kaisha Toshiba | Mobile communication device and method for scaling data up/down on touch screen |
US8405682B2 (en) * | 2008-12-05 | 2013-03-26 | Fujitsu Mobile Communications Limited | Mobile communication device and method for scaling data up/down on touch screen |
US9898675B2 (en) * | 2009-05-01 | 2018-02-20 | Microsoft Technology Licensing, Llc | User movement tracking feedback to improve tracking |
US20100277411A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | User tracking feedback |
US9232167B2 (en) * | 2009-08-04 | 2016-01-05 | Echostar Technologies L.L.C. | Video system and remote control with touch interface for supplemental content display |
US20110032191A1 (en) * | 2009-08-04 | 2011-02-10 | Cooke Benjamin T | Video system and remote control with touch interface for supplemental content display |
US20110052006A1 (en) * | 2009-08-13 | 2011-03-03 | Primesense Ltd. | Extraction of skeletons from 3d maps |
US8565479B2 (en) | 2009-08-13 | 2013-10-22 | Primesense Ltd. | Extraction of skeletons from 3D maps |
US8891827B2 (en) | 2009-10-07 | 2014-11-18 | Microsoft Corporation | Systems and methods for tracking a model |
US9582717B2 (en) | 2009-10-07 | 2017-02-28 | Microsoft Technology Licensing, Llc | Systems and methods for tracking a model |
US20110080336A1 (en) * | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Human Tracking System |
US8897495B2 (en) | 2009-10-07 | 2014-11-25 | Microsoft Corporation | Systems and methods for tracking a model |
US8963829B2 (en) | 2009-10-07 | 2015-02-24 | Microsoft Corporation | Methods and systems for determining and tracking extremities of a target |
US8542910B2 (en) | 2009-10-07 | 2013-09-24 | Microsoft Corporation | Human tracking system |
US20110081044A1 (en) * | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Systems And Methods For Removing A Background Of An Image |
US8970487B2 (en) | 2009-10-07 | 2015-03-03 | Microsoft Technology Licensing, Llc | Human tracking system |
US8564534B2 (en) | 2009-10-07 | 2013-10-22 | Microsoft Corporation | Human tracking system |
US10147194B2 (en) * | 2009-10-07 | 2018-12-04 | Microsoft Technology Licensing, Llc | Systems and methods for removing a background of an image |
US8867820B2 (en) * | 2009-10-07 | 2014-10-21 | Microsoft Corporation | Systems and methods for removing a background of an image |
US8861839B2 (en) | 2009-10-07 | 2014-10-14 | Microsoft Corporation | Human tracking system |
US9522328B2 (en) | 2009-10-07 | 2016-12-20 | Microsoft Technology Licensing, Llc | Human tracking system |
US9679390B2 (en) | 2009-10-07 | 2017-06-13 | Microsoft Technology Licensing, Llc | Systems and methods for removing a background of an image |
US20110080475A1 (en) * | 2009-10-07 | 2011-04-07 | Microsoft Corporation | Methods And Systems For Determining And Tracking Extremities Of A Target |
US20170278251A1 (en) * | 2009-10-07 | 2017-09-28 | Microsoft Technology Licensing, Llc | Systems and methods for removing a background of an image |
US9659377B2 (en) | 2009-10-07 | 2017-05-23 | Microsoft Technology Licensing, Llc | Methods and systems for determining and tracking extremities of a target |
US9821226B2 (en) | 2009-10-07 | 2017-11-21 | Microsoft Technology Licensing, Llc | Human tracking system |
US8787663B2 (en) | 2010-03-01 | 2014-07-22 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
US20110211754A1 (en) * | 2010-03-01 | 2011-09-01 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
US8724120B2 (en) | 2010-04-21 | 2014-05-13 | Faro Technologies, Inc. | Automatic measurement of dimensional data with a laser tracker |
US8537371B2 (en) | 2010-04-21 | 2013-09-17 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US9007601B2 (en) | 2010-04-21 | 2015-04-14 | Faro Technologies, Inc. | Automatic measurement of dimensional data with a laser tracker |
US8724119B2 (en) | 2010-04-21 | 2014-05-13 | Faro Technologies, Inc. | Method for using a handheld appliance to select, lock onto, and track a retroreflector with a laser tracker |
US9400170B2 (en) | 2010-04-21 | 2016-07-26 | Faro Technologies, Inc. | Automatic measurement of dimensional data within an acceptance region by a laser tracker |
US8654354B2 (en) | 2010-04-21 | 2014-02-18 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US8437011B2 (en) | 2010-04-21 | 2013-05-07 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US9772394B2 (en) | 2010-04-21 | 2017-09-26 | Faro Technologies, Inc. | Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker |
US9377885B2 (en) | 2010-04-21 | 2016-06-28 | Faro Technologies, Inc. | Method and apparatus for locking onto a retroreflector with a laser tracker |
US8654355B2 (en) | 2010-04-21 | 2014-02-18 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US10480929B2 (en) | 2010-04-21 | 2019-11-19 | Faro Technologies, Inc. | Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker |
US8467071B2 (en) | 2010-04-21 | 2013-06-18 | Faro Technologies, Inc. | Automatic measurement of dimensional data with a laser tracker |
US10209059B2 (en) | 2010-04-21 | 2019-02-19 | Faro Technologies, Inc. | Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker |
US9146094B2 (en) | 2010-04-21 | 2015-09-29 | Faro Technologies, Inc. | Automatic measurement of dimensional data with a laser tracker |
US8896848B2 (en) | 2010-04-21 | 2014-11-25 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US8537375B2 (en) | 2010-04-21 | 2013-09-17 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US8576380B2 (en) | 2010-04-21 | 2013-11-05 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US8422034B2 (en) | 2010-04-21 | 2013-04-16 | Faro Technologies, Inc. | Method and apparatus for using gestures to control a laser tracker |
US8594425B2 (en) | 2010-05-31 | 2013-11-26 | Primesense Ltd. | Analysis of three-dimensional scenes |
US8824737B2 (en) | 2010-05-31 | 2014-09-02 | Primesense Ltd. | Identifying components of a humanoid form in three-dimensional scenes |
US8781217B2 (en) | 2010-05-31 | 2014-07-15 | Primesense Ltd. | Analysis of three-dimensional scenes with a surface model |
US8966400B2 (en) * | 2010-06-07 | 2015-02-24 | Empire Technology Development Llc | User movement interpretation in computer generated reality |
US20110302536A1 (en) * | 2010-06-07 | 2011-12-08 | Empire Technology Development Llc | User movement interpretation in computer generated reality |
US20120042246A1 (en) * | 2010-06-10 | 2012-02-16 | Microsoft Corporation | Content gestures |
US9009594B2 (en) * | 2010-06-10 | 2015-04-14 | Microsoft Technology Licensing, Llc | Content gestures |
US20110304649A1 (en) * | 2010-06-10 | 2011-12-15 | Microsoft Corporation | Character selection |
US20120005569A1 (en) * | 2010-07-05 | 2012-01-05 | Roh Hyeongseok | Mobile terminal and method for controlling the same |
US9600153B2 (en) * | 2010-07-05 | 2017-03-21 | Lg Electronics Inc. | Mobile terminal for displaying a webpage and method of controlling the same |
US20120019460A1 (en) * | 2010-07-20 | 2012-01-26 | Hitachi Consumer Electronics Co., Ltd. | Input method and input apparatus |
US9158375B2 (en) | 2010-07-20 | 2015-10-13 | Apple Inc. | Interactive reality augmentation for natural interaction |
US9201501B2 (en) | 2010-07-20 | 2015-12-01 | Apple Inc. | Adaptive projector |
US20120036479A1 (en) * | 2010-08-04 | 2012-02-09 | Shunichi Kasahara | Information processing apparatus, information processing method and program |
US8954888B2 (en) * | 2010-08-04 | 2015-02-10 | Sony Corporation | Information processing apparatus, information processing method and program associated with a graphical user interface with proximity sensor triggered menu options |
US8582867B2 (en) | 2010-09-16 | 2013-11-12 | Primesense Ltd | Learning-based pose estimation from depth maps |
US9329691B2 (en) * | 2010-09-22 | 2016-05-03 | Shimane Prefectural Government | Operation input apparatus and method using distinct determination and control areas |
US20130181897A1 (en) * | 2010-09-22 | 2013-07-18 | Shimane Prefectural Government | Operation input apparatus, operation input method, and program |
US8959013B2 (en) | 2010-09-27 | 2015-02-17 | Apple Inc. | Virtual keyboard for a non-tactile three dimensional user interface |
US8872762B2 (en) | 2010-12-08 | 2014-10-28 | Primesense Ltd. | Three dimensional user interface cursor control |
US8933876B2 (en) | 2010-12-13 | 2015-01-13 | Apple Inc. | Three dimensional user interface session control |
US9398243B2 (en) | 2011-01-06 | 2016-07-19 | Samsung Electronics Co., Ltd. | Display apparatus controlled by motion and motion control method thereof |
US9513711B2 (en) | 2011-01-06 | 2016-12-06 | Samsung Electronics Co., Ltd. | Electronic device controlled by a motion and controlling method thereof using different motions to activate voice versus motion recognition |
US20120198026A1 (en) * | 2011-01-27 | 2012-08-02 | Egain Communications Corporation | Personal web display and interaction experience system |
US8825734B2 (en) * | 2011-01-27 | 2014-09-02 | Egain Corporation | Personal web display and interaction experience system |
US9633129B2 (en) | 2011-01-27 | 2017-04-25 | Egain Corporation | Personal web display and interaction experience system |
US9285874B2 (en) | 2011-02-09 | 2016-03-15 | Apple Inc. | Gaze detection in a 3D mapping environment |
US9342146B2 (en) | 2011-02-09 | 2016-05-17 | Apple Inc. | Pointing-based display interaction |
US9454225B2 (en) | 2011-02-09 | 2016-09-27 | Apple Inc. | Gaze-based display control |
US8467072B2 (en) | 2011-02-14 | 2013-06-18 | Faro Technologies, Inc. | Target apparatus and method of making a measurement with the target apparatus |
US8593648B2 (en) | 2011-02-14 | 2013-11-26 | Faro Technologies, Inc. | Target method using identifier element to obtain sphere radius
US10338672B2 (en) * | 2011-02-18 | 2019-07-02 | Business Objects Software Ltd. | System and method for manipulating objects in a graphical user interface |
US8619265B2 (en) | 2011-03-14 | 2013-12-31 | Faro Technologies, Inc. | Automatic measurement of dimensional data with a laser tracker |
US9857868B2 (en) | 2011-03-19 | 2018-01-02 | The Board Of Trustees Of The Leland Stanford Junior University | Method and system for ergonomic touch-free interface |
US9494412B2 (en) | 2011-04-15 | 2016-11-15 | Faro Technologies, Inc. | Diagnosing multipath interference and eliminating multipath interference in 3D scanners using automated repositioning |
US10267619B2 (en) | 2011-04-15 | 2019-04-23 | Faro Technologies, Inc. | Three-dimensional coordinate scanner and method of operation |
US9686532B2 (en) | 2011-04-15 | 2017-06-20 | Faro Technologies, Inc. | System and method of acquiring three-dimensional coordinates using multiple coordinate measurement devices |
US10119805B2 (en) | 2011-04-15 | 2018-11-06 | Faro Technologies, Inc. | Three-dimensional coordinate scanner and method of operation |
US9164173B2 (en) | 2011-04-15 | 2015-10-20 | Faro Technologies, Inc. | Laser tracker that uses a fiber-optic coupler and an achromatic launch to align and collimate two wavelengths of light |
US9453717B2 (en) | 2011-04-15 | 2016-09-27 | Faro Technologies, Inc. | Diagnosing multipath interference and eliminating multipath interference in 3D scanners using projection patterns |
US9482529B2 (en) | 2011-04-15 | 2016-11-01 | Faro Technologies, Inc. | Three-dimensional coordinate scanner and method of operation |
US10578423B2 (en) | 2011-04-15 | 2020-03-03 | Faro Technologies, Inc. | Diagnosing multipath interference and eliminating multipath interference in 3D scanners using projection patterns |
US9207309B2 (en) | 2011-04-15 | 2015-12-08 | Faro Technologies, Inc. | Six degree-of-freedom laser tracker that cooperates with a remote line scanner |
US9448059B2 (en) | 2011-04-15 | 2016-09-20 | Faro Technologies, Inc. | Three-dimensional scanner with external tactical probe and illuminated guidance |
US9967545B2 (en) | 2011-04-15 | 2018-05-08 | Faro Technologies, Inc. | System and method of acquiring three-dimensional coordinates using multiple coordinate measurment devices |
US10302413B2 (en) | 2011-04-15 | 2019-05-28 | Faro Technologies, Inc. | Six degree-of-freedom laser tracker that cooperates with a remote sensor |
US9417703B2 (en) | 2011-04-20 | 2016-08-16 | Koninklijke Philips N.V. | Gesture based control of element or item |
WO2012143829A2 (en) | 2011-04-20 | 2012-10-26 | Koninklijke Philips Electronics N.V. | Gesture based control of element or item |
US9504920B2 (en) | 2011-04-25 | 2016-11-29 | Aquifi, Inc. | Method and system to create three-dimensional mapping in a two-dimensional game |
US8854321B2 (en) * | 2011-05-02 | 2014-10-07 | Verizon Patent And Licensing Inc. | Methods and systems for facilitating data entry by way of a touch screen |
US20120280916A1 (en) * | 2011-05-02 | 2012-11-08 | Verizon Patent And Licensing, Inc. | Methods and Systems for Facilitating Data Entry by Way of a Touch Screen |
WO2013000099A1 (en) * | 2011-06-29 | 2013-01-03 | Intel Corporation | Techniques for gesture recognition |
US9507427B2 (en) | 2011-06-29 | 2016-11-29 | Intel Corporation | Techniques for gesture recognition |
US9377865B2 (en) | 2011-07-05 | 2016-06-28 | Apple Inc. | Zoom-based gesture user interface |
US8881051B2 (en) | 2011-07-05 | 2014-11-04 | Primesense Ltd | Zoom-based gesture user interface |
US9459758B2 (en) | 2011-07-05 | 2016-10-04 | Apple Inc. | Gesture-based interface with enhanced features |
US9030498B2 (en) | 2011-08-15 | 2015-05-12 | Apple Inc. | Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface |
US9218063B2 (en) | 2011-08-24 | 2015-12-22 | Apple Inc. | Sessionless pointing user interface |
US9122311B2 (en) | 2011-08-24 | 2015-09-01 | Apple Inc. | Visual feedback for tactile and non-tactile user interfaces |
US9002099B2 (en) | 2011-09-11 | 2015-04-07 | Apple Inc. | Learning-based estimation of hand and finger pose |
US20130117027A1 (en) * | 2011-11-07 | 2013-05-09 | Samsung Electronics Co., Ltd. | Electronic apparatus and method for controlling electronic apparatus using recognition and motion recognition |
US20130176219A1 (en) * | 2012-01-09 | 2013-07-11 | Samsung Electronics Co., Ltd. | Display apparatus and controlling method thereof |
US9638507B2 (en) | 2012-01-27 | 2017-05-02 | Faro Technologies, Inc. | Measurement machine utilizing a barcode to identify an inspection plan for an object |
US9600078B2 (en) | 2012-02-03 | 2017-03-21 | Aquifi, Inc. | Method and system enabling natural user interface gestures with an electronic system |
US9746934B2 (en) | 2012-02-24 | 2017-08-29 | Amazon Technologies, Inc. | Navigation approaches for multi-dimensional input |
US9423877B2 (en) | 2012-02-24 | 2016-08-23 | Amazon Technologies, Inc. | Navigation approaches for multi-dimensional input |
JP2015510648A (en) * | 2012-02-24 | 2015-04-09 | Amazon Technologies, Inc. | Navigation technique for multidimensional input
US9229534B2 (en) | 2012-02-28 | 2016-01-05 | Apple Inc. | Asymmetric mapping for tactile and non-tactile user interfaces |
US9377863B2 (en) | 2012-03-26 | 2016-06-28 | Apple Inc. | Gaze-enhanced virtual touchscreen |
US11169611B2 (en) | 2012-03-26 | 2021-11-09 | Apple Inc. | Enhanced virtual touchpad |
US9047507B2 (en) | 2012-05-02 | 2015-06-02 | Apple Inc. | Upper-body skeleton extraction from depth maps |
EP2677397A3 (en) * | 2012-06-21 | 2014-10-08 | Fujitsu Limited | Character input method and information processing apparatus |
US8830312B2 (en) | 2012-06-25 | 2014-09-09 | Aquifi, Inc. | Systems and methods for tracking human hands using parts based template matching within bounded regions |
US8655021B2 (en) | 2012-06-25 | 2014-02-18 | Imimtek, Inc. | Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints |
US9098739B2 (en) | 2012-06-25 | 2015-08-04 | Aquifi, Inc. | Systems and methods for tracking human hands using parts based template matching |
US9111135B2 (en) | 2012-06-25 | 2015-08-18 | Aquifi, Inc. | Systems and methods for tracking human hands using parts based template matching using corresponding pixels in bounded regions of a sequence of frames that are a specified distance interval from a reference camera |
US8934675B2 (en) | 2012-06-25 | 2015-01-13 | Aquifi, Inc. | Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints |
US9310891B2 (en) | 2012-09-04 | 2016-04-12 | Aquifi, Inc. | Method and system enabling natural user interface gestures with user wearable glasses |
US9019267B2 (en) | 2012-10-30 | 2015-04-28 | Apple Inc. | Depth mapping with enhanced resolution |
US9612655B2 (en) * | 2012-10-31 | 2017-04-04 | Audi Ag | Method for inputting a control command for a component of a motor vehicle |
US20150301591A1 (en) * | 2012-10-31 | 2015-10-22 | Audi Ag | Method for inputting a control command for a component of a motor vehicle |
US9549126B2 (en) * | 2013-01-29 | 2017-01-17 | Samsung Electronics Co., Ltd. | Digital photographing apparatus and control method thereof |
US20140211047A1 (en) * | 2013-01-29 | 2014-07-31 | Samsung Electronics Co., Ltd. | Digital photographing apparatus and control method thereof |
KR102059598B1 (en) * | 2013-01-29 | 2019-12-26 | Samsung Electronics Co., Ltd. | Digital photographing apparatus and control method thereof
US9092665B2 (en) | 2013-01-30 | 2015-07-28 | Aquifi, Inc | Systems and methods for initializing motion tracking of human hands |
US8615108B1 (en) | 2013-01-30 | 2013-12-24 | Imimtek, Inc. | Systems and methods for initializing motion tracking of human hands |
US9129155B2 (en) | 2013-01-30 | 2015-09-08 | Aquifi, Inc. | Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map |
US11119577B2 (en) | 2013-02-01 | 2021-09-14 | Samsung Electronics Co., Ltd | Method of controlling an operation of a camera apparatus and a camera apparatus |
US20140232650A1 (en) * | 2013-02-15 | 2014-08-21 | Microsoft Corporation | User Center-Of-Mass And Mass Distribution Extraction Using Depth Images |
US9052746B2 (en) * | 2013-02-15 | 2015-06-09 | Microsoft Technology Licensing, Llc | User center-of-mass and mass distribution extraction using depth images |
US20140282223A1 (en) * | 2013-03-13 | 2014-09-18 | Microsoft Corporation | Natural user interface scrolling and targeting |
US9342230B2 (en) * | 2013-03-13 | 2016-05-17 | Microsoft Technology Licensing, Llc | Natural user interface scrolling and targeting |
US20140282278A1 (en) * | 2013-03-14 | 2014-09-18 | Glen J. Anderson | Depth-based user interface gesture control |
US9389779B2 (en) * | 2013-03-14 | 2016-07-12 | Intel Corporation | Depth-based user interface gesture control |
US9482514B2 (en) | 2013-03-15 | 2016-11-01 | Faro Technologies, Inc. | Diagnosing multipath interference and eliminating multipath interference in 3D scanners by directed probing |
US20160017656A1 (en) * | 2013-03-15 | 2016-01-21 | Springs Window Fashions, Llc | Window covering motorized lift and control operating system |
US9041914B2 (en) | 2013-03-15 | 2015-05-26 | Faro Technologies, Inc. | Three-dimensional coordinate scanner and method of operation |
US9298266B2 (en) | 2013-04-02 | 2016-03-29 | Aquifi, Inc. | Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects |
US11720181B2 (en) | 2013-05-17 | 2023-08-08 | Ultrahaptics IP Two Limited | Cursor mode switching |
US11194404B2 (en) | 2013-05-17 | 2021-12-07 | Ultrahaptics IP Two Limited | Cursor mode switching |
US10936145B2 (en) * | 2013-05-17 | 2021-03-02 | Ultrahaptics IP Two Limited | Dynamic interactive objects |
US11429194B2 (en) | 2013-05-17 | 2022-08-30 | Ultrahaptics IP Two Limited | Cursor mode switching |
US11275480B2 (en) | 2013-05-17 | 2022-03-15 | Ultrahaptics IP Two Limited | Dynamic interactive objects |
US20140368434A1 (en) * | 2013-06-13 | 2014-12-18 | Microsoft Corporation | Generation of text by way of a touchless interface |
EP2816446A1 (en) * | 2013-06-20 | 2014-12-24 | LSI Corporation | User interface comprising radial layout soft keypad |
US9798388B1 (en) | 2013-07-31 | 2017-10-24 | Aquifi, Inc. | Vibrotactile system to augment 3D input systems |
US10152136B2 (en) * | 2013-10-16 | 2018-12-11 | Leap Motion, Inc. | Velocity field interaction for free space gesture interface and control |
US20210342013A1 (en) * | 2013-10-16 | 2021-11-04 | Ultrahaptics IP Two Limited | Velocity field interaction for free space gesture interface and control |
US20190113980A1 (en) * | 2013-10-16 | 2019-04-18 | Leap Motion, Inc. | Velocity field interaction for free space gesture interface and control |
US10635185B2 (en) | 2013-10-16 | 2020-04-28 | Ultrahaptics IP Two Limited | Velocity field interaction for free space gesture interface and control |
US11726575B2 (en) * | 2013-10-16 | 2023-08-15 | Ultrahaptics IP Two Limited | Velocity field interaction for free space gesture interface and control |
US20230333662A1 (en) * | 2013-10-16 | 2023-10-19 | Ultrahaptics IP Two Limited | Velocity field interaction for free space gesture interface and control |
US11068071B2 (en) | 2013-10-16 | 2021-07-20 | Ultrahaptics IP Two Limited | Velocity field interaction for free space gesture interface and control |
US10452154B2 (en) * | 2013-10-16 | 2019-10-22 | Ultrahaptics IP Two Limited | Velocity field interaction for free space gesture interface and control |
US20150103004A1 (en) * | 2013-10-16 | 2015-04-16 | Leap Motion, Inc. | Velocity field interaction for free space gesture interface and control |
US20150121314A1 (en) * | 2013-10-24 | 2015-04-30 | Jens Bombolowsky | Two-finger gestures |
US11068070B2 (en) | 2013-12-16 | 2021-07-20 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual cameras with vectors |
US11567583B2 (en) | 2013-12-16 | 2023-01-31 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual configuration |
US11460929B2 (en) | 2013-12-16 | 2022-10-04 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual cameras with vectors |
US11500473B2 (en) | 2013-12-16 | 2022-11-15 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual cameras in the interaction space |
US11132064B2 (en) | 2013-12-16 | 2021-09-28 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual configuration |
US11775080B2 (en) | 2013-12-16 | 2023-10-03 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual cameras with vectors |
US10901518B2 (en) | 2013-12-16 | 2021-01-26 | Ultrahaptics IP Two Limited | User-defined virtual interaction space and manipulation of virtual cameras in the interaction space |
US9507417B2 (en) | 2014-01-07 | 2016-11-29 | Aquifi, Inc. | Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects |
US20150201124A1 (en) * | 2014-01-15 | 2015-07-16 | Samsung Electronics Co., Ltd. | Camera system and method for remotely controlling compositions of self-portrait pictures using hand gestures |
US9619105B1 (en) | 2014-01-30 | 2017-04-11 | Aquifi, Inc. | Systems and methods for gesture based interaction with viewpoint dependent user interfaces |
US11954313B2 (en) | 2014-04-25 | 2024-04-09 | Dropbox, Inc. | Browsing and selecting content items based on user gestures |
US11921694B2 (en) | 2014-04-25 | 2024-03-05 | Dropbox, Inc. | Techniques for collapsing views of content items in a graphical user interface |
US10963446B2 (en) | 2014-04-25 | 2021-03-30 | Dropbox, Inc. | Techniques for collapsing views of content items in a graphical user interface |
US11460984B2 (en) | 2014-04-25 | 2022-10-04 | Dropbox, Inc. | Browsing and selecting content items based on user gestures |
US10817151B2 (en) | 2014-04-25 | 2020-10-27 | Dropbox, Inc. | Browsing and selecting content items based on user gestures |
US11392575B2 (en) | 2014-04-25 | 2022-07-19 | Dropbox, Inc. | Techniques for collapsing views of content items in a graphical user interface |
US9342751B2 (en) * | 2014-05-14 | 2016-05-17 | Electronics And Telecommunications Research Institute | User hand detecting device for detecting user's hand region and method thereof |
US20150332471A1 (en) * | 2014-05-14 | 2015-11-19 | Electronics And Telecommunications Research Institute | User hand detecting device for detecting user's hand region and method thereof |
US20150370472A1 (en) * | 2014-06-19 | 2015-12-24 | Xerox Corporation | 3-d motion control for document discovery and retrieval |
US9395174B2 (en) | 2014-06-27 | 2016-07-19 | Faro Technologies, Inc. | Determining retroreflector orientation by optimizing spatial fit |
US10025975B2 (en) | 2015-02-10 | 2018-07-17 | Nintendo Co., Ltd. | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US20160232674A1 (en) * | 2015-02-10 | 2016-08-11 | Wataru Tanaka | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US20160232404A1 (en) * | 2015-02-10 | 2016-08-11 | Yusuke KITAZONO | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US9824293B2 (en) | 2015-02-10 | 2017-11-21 | Nintendo Co., Ltd. | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
US9864905B2 (en) * | 2015-02-10 | 2018-01-09 | Nintendo Co., Ltd. | Information processing device, storage medium storing information processing program, information processing system, and information processing method |
TWI571768B (en) * | 2015-04-29 | 2017-02-21 | 由田新技股份有限公司 | A human interface synchronous system, device, method, computer readable media, and computer program product |
US20170068322A1 (en) * | 2015-09-04 | 2017-03-09 | Eyesight Mobile Technologies Ltd. | Gesture recognition control device |
US10120454B2 (en) * | 2015-09-04 | 2018-11-06 | Eyesight Mobile Technologies Ltd. | Gesture recognition control device |
US10043279B1 (en) | 2015-12-07 | 2018-08-07 | Apple Inc. | Robust detection and classification of body parts in a depth map |
US11854308B1 (en) | 2016-02-17 | 2023-12-26 | Ultrahaptics IP Two Limited | Hand initialization for machine learning based gesture recognition |
US11841920B1 (en) | 2016-02-17 | 2023-12-12 | Ultrahaptics IP Two Limited | Machine learning based gesture recognition |
US20230214458A1 (en) * | 2016-02-17 | 2023-07-06 | Ultrahaptics IP Two Limited | Hand Pose Estimation for Machine Learning Based Gesture Recognition |
US11714880B1 (en) * | 2016-02-17 | 2023-08-01 | Ultrahaptics IP Two Limited | Hand pose estimation for machine learning based gesture recognition |
US20170277943A1 (en) * | 2016-03-25 | 2017-09-28 | Fuji Xerox Co., Ltd. | Hand-raising detection device, non-transitory computer readable medium, and hand-raising detection method |
US10503969B2 (en) * | 2016-03-25 | 2019-12-10 | Fuji Xerox Co., Ltd. | Hand-raising detection device, non-transitory computer readable medium, and hand-raising detection method |
US10366278B2 (en) | 2016-09-20 | 2019-07-30 | Apple Inc. | Curvature-based face detector |
US11875012B2 (en) | 2018-05-25 | 2024-01-16 | Ultrahaptics IP Two Limited | Throwable interface for augmented reality and virtual reality environments |
US11301049B2 (en) * | 2019-09-17 | 2022-04-12 | Huawei Technologies Co., Ltd. | User interface control based on elbow-anchored arm gestures |
WO2021051200A1 (en) * | 2019-09-17 | 2021-03-25 | Huawei Technologies Co., Ltd. | User interface control based on elbow-anchored arm gestures |
KR102145523B1 (en) | 2019-12-12 | 2020-08-18 | 삼성전자주식회사 | Method for control a camera apparatus and the camera apparatus |
KR20190142290A (en) * | 2019-12-12 | 2019-12-26 | 삼성전자주식회사 | Method for control a camera apparatus and the camera apparatus |
US11500454B2 (en) * | 2020-12-29 | 2022-11-15 | Snap Inc. | Body UI for augmented reality components |
US11941166B2 (en) | 2020-12-29 | 2024-03-26 | Snap Inc. | Body UI for augmented reality components |
US20220206563A1 (en) * | 2020-12-29 | 2022-06-30 | Snap Inc. | Body ui for augmented reality components |
US20230116341A1 (en) * | 2021-09-30 | 2023-04-13 | Futian ZHANG | Methods and apparatuses for hand gesture-based control of selection focus |
US20230229240A1 (en) * | 2022-01-20 | 2023-07-20 | Htc Corporation | Method for inputting letters, host, and computer readable storage medium |
US11914789B2 (en) * | 2022-01-20 | 2024-02-27 | Htc Corporation | Method for inputting letters, host, and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2010103482A2 (en) | 2010-09-16 |
US10719214B2 (en) | 2020-07-21 |
US20180059925A1 (en) | 2018-03-01 |
US20140304647A1 (en) | 2014-10-09 |
WO2010103482A3 (en) | 2010-11-04 |
Similar Documents
Publication | Title |
---|---|
US10719214B2 (en) | Enhanced 3D interfacing for remote devices |
US11500514B2 (en) | Item selection using enhanced control |
Nizam et al. | A review of multimodal interaction technique in augmented reality environment |
US9857868B2 (en) | Method and system for ergonomic touch-free interface |
US20170228138A1 (en) | System and method for spatial interaction for viewing and manipulating off-screen content |
EP2972669B1 (en) | Depth-based user interface gesture control |
CN108052202A (en) | 3D interaction method, device, computer equipment and storage medium |
US20130120282A1 (en) | System and Method for Evaluating Gesture Usability |
CN105980965A (en) | Systems, devices, and methods for touch-free typing |
WO2014019085A1 (en) | One-dimensional input system and method |
WO2010008835A1 (en) | Enhanced character input using recognized gestures |
US10180714B1 (en) | Two-handed multi-stroke marking menus for multi-touch devices |
EP2676178A1 (en) | Breath-sensitive digital interface |
Vogel et al. | Hand occlusion with tablet-sized direct pen input |
CN102934060A (en) | Virtual touch interface |
EP4307096A1 (en) | Key function execution method, apparatus and device, and storage medium |
CN112527112A (en) | Multi-channel immersive flow field visualization man-machine interaction method |
Raees et al. | VEN-3DVE: vision based egocentric navigation for 3D virtual environments |
Xiao et al. | A hand gesture-based interface for design review using leap motion controller |
Chun et al. | A combination of static and stroke gesture with speech for multimodal interaction in a virtual environment |
Bai et al. | Asymmetric Bimanual Interaction for Mobile Virtual Reality |
KR101559424B1 (en) | A virtual keyboard based on hand recognition and implementing method thereof |
US20230031240A1 (en) | Systems and methods for processing electronic images of pathology data and reviewing the pathology data |
Uddin | Improving Multi-Touch Interactions Using Hands as Landmarks |
CN105242795A (en) | Method for inputting English letters by azimuth gesture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: PRIMESENSE LTD, ISRAEL; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAIZELS, AVIAD;SHPUNT, ALEXANDER;LITVAK, SHAI;SIGNING DATES FROM 20100311 TO 20100321;REEL/FRAME:024127/0266 |
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
 | AS | Assignment | Owner name: APPLE INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PRIMESENSE LTD.;REEL/FRAME:034293/0092; Effective date: 20140828 |
 | AS | Assignment | Owner name: APPLE INC., CALIFORNIA; Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION # 13840451 AND REPLACE IT WITH CORRECT APPLICATION # 13810451 PREVIOUSLY RECORDED ON REEL 034293 FRAME 0092. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:PRIMESENSE LTD.;REEL/FRAME:035624/0091; Effective date: 20140828 |