US9317134B2 - Proximity object tracker - Google Patents

Proximity object tracker

Info

Publication number
US9317134B2
Authority
US
United States
Prior art keywords
image
intersection region
display screen
image sensor
user input
Legal status
Active, expires
Application number
US13/972,064
Other versions
US20150363009A1 (en)
Inventor
Ian Clarkson
Evan Hildreth
Current Assignee
Qualcomm Inc
Original Assignee
Qualcomm Inc
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to US13/972,064
Assigned to GESTURETEK, INC. Assignors: CLARKSON, IAN; HILDRETH, EVAN
Assigned to QUALCOMM INCORPORATED. Assignor: GESTURETEK, INC.
Publication of US20150363009A1
Application granted
Publication of US9317134B2
Legal status: Active
Adjusted expiration

Classifications

    • G06F3/0304 Detection arrangements using opto-electronic means
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0421 Digitisers by opto-electronic means, by interrupting or reflecting a light beam, e.g. optical touch-screen
    • G06F3/0428 Digitisers by opto-electronic means, by sensing at the edges of the touch surface the interruption of optical paths
    • G06K9/00335
    • G06T7/004
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • H04N23/56 Cameras or camera modules comprising electronic image sensors, provided with illuminating means
    • H04N5/2256
    • E05Y2800/106 Lighting

Definitions

  • The present disclosure generally relates to object tracking.
  • Cameras have been used to capture images of objects.
  • Techniques have been developed to analyze one or more images of an object to detect a position of the object. For example, optical flow has been used to detect motion of an object by analyzing multiple images of the object taken successively in time.
  • In one aspect, an electronic system includes a camera having a field of view of a first area and an illumination source that is angled with respect to the camera and that is configured to illuminate a second area. The second area intersects the first area to define an intersection region within the field of view of the camera.
  • The electronic system also includes a processing unit configured to perform operations. The operations include capturing an image from the camera and analyzing the image captured by the camera to detect an object within the intersection region. The operations also include determining user input based on the object detected within the intersection region and controlling an application based on the determined user input.
  • The electronic system may include a display screen configured to display a graphical user interface.
  • The camera may be positioned at a first side of the display screen, may be angled with respect to the display screen, and the field of view of the camera may be of the first area in front of the display screen.
  • The illumination source may be positioned at a second side of the display screen, may be angled with respect to the display screen, and may be configured to illuminate the second area in front of the display screen.
  • The second side of the display screen may be opposite the first side of the display screen, and the second area in front of the display screen may intersect the first area in front of the display screen to define the intersection region in front of the display screen.
  • The operations performed by the processing unit may include comparing pixels of the image captured by the camera to a brightness threshold to produce a binary image. Pixels in the binary image may indicate whether or not the corresponding pixels in the image captured by the camera meet the brightness threshold. In these examples, the operations also may include grouping pixels within the binary image into one or more blobs, grouping the one or more blobs into one or more clusters, and determining a position of one or more objects in the binary image based on the one or more clusters.
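As a rough illustration of the thresholding and blob-grouping operations described above, the sketch below binarizes a grayscale image and groups adjacent bright pixels into blobs. The threshold value, the sample image, and the 4-connected grouping rule are illustrative assumptions, not details taken from the patent.

```python
# Sketch of thresholding a camera image into a binary image, then
# grouping adjacent bright pixels into blobs (4-connected flood fill).

def to_binary(gray, threshold=128):
    """Compare each pixel to a brightness threshold, producing a 0/1 image."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def group_blobs(binary):
    """Group adjacent bright pixels into blobs (4-connected components)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

# Illustrative 3x4 grayscale frame: one bright patch and one stray pixel.
gray = [
    [10, 200, 210, 10],
    [10, 220,  10, 10],
    [10,  10,  10, 240],
]
binary = to_binary(gray)
blobs = group_blobs(binary)
# Two separate blobs: the three-pixel patch at top-left and the lone
# bright pixel at bottom-right.
```

In practice a connected-component labeling routine from an image-processing library would typically replace the hand-rolled flood fill.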
  • The operations may include clustering blobs within the binary image into one or more clusters based on a tracking mode of the electronic system. For instance, when the electronic system is configured in a single object tracking mode, the operations may include clustering blobs within the binary image into a single cluster, determining a position of the single object based on the single cluster, and determining user input based on the position of the single object.
  • The operations may include clustering blobs in a horizontal direction from an outer edge of first and second sides of the binary image to a center of the binary image to identify a first cluster at the first side of the image and a second cluster at the second side of the image.
  • The operations also may include determining a position of a first object based on the first cluster, determining a position of a second object based on the second cluster, and determining user input based on the position of the first object and the position of the second object.
  • The operations further may include weighting proximity of blobs in the horizontal direction higher than proximity of blobs in a vertical direction in clustering blobs together.
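One way to sketch the two-hand adjacent clustering described above: seed one cluster at the leftmost blob and one at the rightmost blob, then assign the remaining blobs using a distance that weights horizontal separation more heavily than vertical separation. The weight values and the sample blob centroids are illustrative assumptions, not figures from the patent.

```python
# Sketch of two-hand "adjacent" clustering: split blob centroids into a
# left-hand cluster and a right-hand cluster, weighting horizontal
# proximity higher than vertical proximity when deciding membership.

def weighted_dist(a, b, w_h=2.0, w_v=1.0):
    """Distance between blob centroids with the horizontal term weighted up,
    so a large horizontal gap strongly suggests two different hands."""
    return w_h * abs(a[0] - b[0]) + w_v * abs(a[1] - b[1])

def cluster_two_adjacent(centroids):
    """Split blob centroids (x, y) into left and right clusters."""
    pts = sorted(centroids)                  # sort by x coordinate
    left_seed, right_seed = pts[0], pts[-1]  # outermost blobs seed clusters
    left, right = [left_seed], [right_seed]
    for p in pts[1:-1]:
        if weighted_dist(p, left_seed) <= weighted_dist(p, right_seed):
            left.append(p)
        else:
            right.append(p)
    return left, right

# Illustrative blob centroids: two near the left edge, two near the right.
blobs = [(10, 40), (15, 55), (90, 45), (85, 30)]
left, right = cluster_two_adjacent(blobs)
```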
  • The operations may include clustering blobs in a vertical direction from an outer edge of a top and a bottom of the binary image to a center of the binary image to identify a first cluster at a top portion of the image and a second cluster at a bottom portion of the image.
  • The operations also may include determining a position of a first object based on the first cluster, determining a position of a second object based on the second cluster, and determining user input based on the position of the first object and the position of the second object.
  • The operations further may include weighting proximity of blobs in the vertical direction higher than proximity of blobs in a horizontal direction in clustering blobs together.
  • The operations may include determining a tracking mode of the electronic system from among at least a single hand tracking mode, a two hand adjacent tracking mode, and a two hand stacked tracking mode.
  • In the single hand tracking mode, the operations may include clustering blobs within the binary image into a single cluster and computing a position of the single object based on the single cluster.
  • In the two hand adjacent tracking mode, the operations may include clustering blobs in a horizontal direction from an outer edge of first and second sides of the binary image to a center of the binary image to identify a first cluster at the first side of the image and a second cluster at the second side of the image, computing a position of a first object based on the first cluster, and computing a position of a second object based on the second cluster.
  • In the two hand stacked tracking mode, the operations may include clustering blobs in a vertical direction from an outer edge of a top and a bottom of the binary image to a center of the binary image to identify a first cluster at a top portion of the image and a second cluster at a bottom portion of the image, computing a position of a first object based on the first cluster, and computing a position of a second object based on the second cluster.
  • The operations may include mapping a position of the detected object to an interface displayed by the application being controlled and determining user input based on the mapped position of the detected object to the interface displayed by the application being controlled. In these implementations, the operations may include determining whether the mapped position of the detected object corresponds to an element displayed in the interface displayed by the application being controlled. In addition, in these implementations, the operations may include mapping the position of the detected object to a cursor position in the interface displayed by the application being controlled and determining user input based on the cursor position in the interface displayed by the application being controlled.
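The position-to-cursor mapping described above can be sketched as a simple linear scaling from image coordinates to interface coordinates, followed by a hit test against a displayed element. The coordinate conventions, image and screen sizes, and the button rectangle are assumptions for illustration.

```python
# Sketch of mapping a detected object position in the camera image to a
# cursor position in the displayed interface, then hit-testing the
# cursor against an interface element.

def map_to_cursor(obj_pos, image_size, screen_size):
    """Linearly scale an (x, y) image position to screen coordinates."""
    ix, iy = obj_pos
    iw, ih = image_size
    sw, sh = screen_size
    return (ix / iw * sw, iy / ih * sh)

def hits(cursor, rect):
    """True if the cursor falls inside the element rectangle (x, y, w, h)."""
    cx, cy = cursor
    x, y, w, h = rect
    return x <= cx <= x + w and y <= cy <= y + h

# Object detected at the centre-top of a 640x480 camera image, mapped to
# a 1920x1080 interface.
cursor = map_to_cursor((320, 120), image_size=(640, 480), screen_size=(1920, 1080))
button = (900, 200, 200, 100)   # a hypothetical on-screen control
# The cursor lands at (960.0, 270.0), inside the button rectangle.
```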
  • The operations may include detecting performance of a gesture by the detected object based on positions of the detected object determined within a series of images captured by the camera and determining user input based on the detected gesture.
  • The operations may include detecting a swipe gesture and determining user input based on the detected swipe gesture.
  • The operations may include detecting a gesture in which two detected objects are moving horizontally together or apart and determining user input based on the detected gesture in which two detected objects are moving horizontally together or apart.
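A minimal sketch of the together-or-apart detection just described: compare the horizontal separation of two tracked objects at the start and end of a series of frames. The change threshold and the sample tracks are illustrative assumptions.

```python
# Sketch of detecting whether two tracked objects move horizontally
# together or apart across a series of frames.

def together_or_apart(track_a, track_b, min_change=30):
    """track_a/track_b: per-frame (x, y) positions of the two objects.
    Returns 'apart', 'together', or None if the separation barely changed."""
    start = abs(track_a[0][0] - track_b[0][0])
    end = abs(track_a[-1][0] - track_b[-1][0])
    if end - start >= min_change:
        return "apart"
    if start - end >= min_change:
        return "together"
    return None

# Illustrative tracks: the left object drifts left, the right one right.
a = [(100, 50), (80, 52), (60, 51)]
b = [(200, 50), (220, 49), (240, 50)]
# Separation grows from 100 to 180 pixels: the objects move apart.
```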
  • The illumination source may be a first illumination source and the electronic system may include a second illumination source that is angled with respect to the camera differently than the first illumination source and that is configured to illuminate a third area.
  • The third area may intersect the first area, may be different than the second area, and, in combination with the second area, may define a combined intersection region within the field of view of the camera.
  • The operations may include controlling the first and second illumination sources to illuminate in sequence with images captured by the camera in an alternating pattern, identifying a first image captured when the first illumination source was illuminated and the second illumination source was not illuminated, identifying a second image captured when the first illumination source was not illuminated and the second illumination source was illuminated, and analyzing the first and second images in combination to determine a position of an object within the combined intersection region defined by the first and second illumination sources.
  • The operations may include capturing a grayscale image and comparing pixels of the grayscale image to a brightness threshold to produce a binary image. Pixels in the binary image may indicate whether or not the corresponding pixels in the grayscale image captured by the camera meet the brightness threshold.
  • The operations also may include grouping pixels within the binary image into blobs, referencing the grayscale image in clustering blobs within the binary image into one or more clusters, and determining a position of one or more objects in the binary image based on results of the clustering.
  • The illumination source may be an infrared emitter.
  • The operations may include ignoring objects that are within the camera's field of view and outside of the intersection region.
  • The operations also may include using motion information to detect a moving object within the intersection region.
  • The motion information may include motion history data and/or optical flow data.
  • The operations may include controlling the illumination source to illuminate while the camera is capturing the image to define the intersection region within the image captured by the camera. In these examples, the operations may include controlling the illumination source to turn on prior to capturing the image from the camera. In addition, in these examples, the operations may include controlling the illumination source to illuminate in sequence with images captured by the camera in an alternating pattern such that a first image captured by the camera is captured when the illumination source is illuminated and a second image captured by the camera is captured when the illumination source is not illuminated. The operations may include subtracting the second image from the first image to produce a resulting image and analyzing the resulting image to detect the object within the intersection region.
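The alternating-illumination scheme above can be sketched as follows: subtracting the frame captured with the illumination source off from the frame captured with it on cancels ambient light, so only objects lit by the source (i.e., objects in the intersection region) remain. The pixel values below are illustrative.

```python
# Sketch of frame subtraction under alternating illumination: the "unlit"
# frame carries only ambient light, so the difference isolates objects
# brightened by the illumination source.

def subtract(lit, unlit):
    """Per-pixel difference, clamped at zero (ambient light cancels out)."""
    return [[max(l - u, 0) for l, u in zip(lr, ur)]
            for lr, ur in zip(lit, unlit)]

# Frame captured with the illumination source on: ambient scene plus an
# object brightened by the source at the top-right pixel.
lit   = [[60, 60, 220],
         [60, 60,  60]]
# Frame captured with the source off: ambient scene only.
unlit = [[60, 60,  40],
         [60, 60,  60]]

result = subtract(lit, unlit)
# Only the illuminated object survives: result == [[0, 0, 180], [0, 0, 0]]
```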
  • At least one computer-readable storage medium is encoded with executable instructions that, when executed by at least one processor, cause the at least one processor to perform operations.
  • The operations include capturing an image from a camera and analyzing the image captured by the camera to detect an object within an intersection region defined within the camera's field of view by an illumination source.
  • The operations also include determining user input based on the object detected within the intersection region and controlling an application based on the determined user input.
  • In yet another aspect, a method includes capturing an image from a camera and analyzing the image captured by the camera to detect an object within an intersection region defined within the camera's field of view by an illumination source. The method also may include determining user input based on the object detected within the intersection region and controlling an application based on the determined user input.
  • FIGS. 1, 2A-2B, 3, 15A-15C, 16, 18, 19, and 20 are diagrams of exemplary systems.
  • FIGS. 4, 5, 6, 9, 14, and 17 are flowcharts of exemplary processes.
  • FIGS. 7 and 8 are diagrams of exemplary clusters.
  • FIGS. 10, 11, 12, and 13 are diagrams of exemplary gestures and associated user interfaces.
  • A system includes a light source placed to one of the four sides of a display (e.g., the top side of the user-facing surface of the display) with its light oriented towards a tracking region in front of the display.
  • The system also includes a camera placed on the opposite side of the display (e.g., the bottom side of the user-facing surface of the display) and oriented towards the tracking region in front of the display.
  • The light source may be a row of infrared emitters (which may or may not be flashing) and the one or more objects (e.g., one or more hands) are tracked within camera images as blobs, either individually or as a group.
  • The placement of the camera and the infrared emitters, and their angle relative to each other, create an intersection region that defines the tracking region and limits the potential for errors.
  • The tracking region may be moved around or redefined to a certain degree, as long as the infrared emitters do not illuminate other objects, beyond the tracked object, that are still in the view of the camera.
  • FIG. 1 illustrates an example of a tracking system 100.
  • The system 100 includes a display screen 102, a camera 104, and an illumination source 106.
  • The display screen 102 may be, for example, a computer monitor, a digital picture frame, a television screen, or a non-electric screen upon which an image is projected. In some examples, the display screen 102 may be behind a glass window.
  • The display screen 102 may be configured to display a graphical user interface, including one or more interface controls, for an application.
  • The camera 104 captures images.
  • The camera 104 is positioned at the top side of the display screen 102 and is angled downward with respect to the display screen 102.
  • A field-of-view 108 of the camera 104 is located in front of the display screen 102.
  • The camera 104 may be positioned at a different side of the display screen 102 (e.g., the bottom, left, or right side) or may be embedded within or included in the display screen 102.
  • The camera 104 also may be positioned behind the display screen 102.
  • The illumination source 106 is positioned at the bottom edge of the display screen 102 (e.g., at the opposite edge of the display screen 102 as compared to the position of the camera 104) and is angled upward with respect to the display screen 102. In other configurations, the illumination source 106 may be positioned at a different edge of the display screen 102 (e.g., the top edge or a side edge).
  • The illumination source 106 may be, for example, a set of one or more infrared LEDs (Light Emitting Diodes).
  • The illumination source 106 is configured to illuminate an illuminated area 110 located in front of the display screen 102.
  • The illuminated area 110 intersects the field-of-view 108 to define an intersection region 112 in front of the display screen 102.
  • The illumination source 106 may be controlled to illuminate while the camera 104 is capturing one or more images (e.g., the illumination source 106 may be turned on before the camera 104 captures images).
  • The captured camera images may be analyzed (e.g., by one or more processors) to detect one or more illuminated objects within the intersection region 112.
  • An object may be, for example, a hand, finger, other body part, a stylus, pointer, remote control device, or game controller. Objects within the field-of-view 108 but outside of the intersection region 112 may be ignored.
  • The camera 104 and the illumination source 106 are positioned such that control objects such as a user's hand are included in the intersection region 112, but other objects such as a user's head or torso are not included in the intersection region 112 (even if the other objects are included in the field-of-view 108 and/or the illuminated area 110).
  • A user 114 is standing in front of the display screen 102.
  • The user 114 extends a hand 116 so that it is positioned within the intersection region 112.
  • The illumination source 106 illuminates the hand 116 and may illuminate other objects, such as the head of the user 114.
  • The camera 104 may capture one or more images while the hand 116 is within the intersection region 112.
  • A processor may analyze the camera images for illuminated objects. Because the hand 116 is within the intersection region 112, the processor detects the hand 116 as an illuminated object.
  • The processor is able to ignore objects that are outside of the intersection region 112 and that are unrelated to the input being provided by the user 114 with the user's hand 116.
  • The illumination source 106 illuminates a portion of the user's head, and the camera 104 captures images of the arm and torso of the user 114.
  • The processor ignores these objects when attempting to detect an object providing user input.
  • Objects may be detected, for example, by comparing pixels of the camera images to a brightness threshold to produce a binary image and by clustering pixels within the binary image into one or more blobs based on whether a tracking mode is a single hand tracking mode, a two hand adjacent tracking mode, or a two hand stacked tracking mode.
  • One or more object positions may be determined based on the results of the clustering.
  • A user input may be determined based on the detection of one or more objects within the intersection region 112.
  • The position of a detected object may be mapped to a user interface of an application displayed on the display screen 102.
  • Movement of the object in a horizontal and/or vertical direction may be detected.
  • A gesture may be detected based on one or more determined positions of the detected object.
  • A “swipe” gesture, a “clap” gesture, a “pounding” gesture, a “chopping” gesture, or a “grab” gesture may be detected, to name a few examples.
  • A “gesture” is intended to refer to a form of non-verbal communication made with a whole or part of a human body or multiple human bodies, and is contrasted with verbal communication such as speech.
  • A gesture may be defined by a movement, change, or transformation between a first position, pose, or expression and a second position, pose, or expression.
  • Example gestures include, for instance, an “air quote” gesture, a bowing gesture, a curtsey, a cheek-kiss, a finger or hand motion, a genuflection, a head bobble or movement, a high-five, a raised fist, a salute, a swiping or wave motion, a thumbs-up motion, or a finger pointing gesture.
  • A gesture may be derived that defines an idea, opinion, emotion, communication, command, demonstration, or expression of the user.
  • The user's gesture may be a single or multiple finger gesture; a single hand gesture; a single hand and arm gesture; a single hand, arm, and body gesture; a bimanual gesture; or a transformation of any other expressive body state.
  • The body part or parts used to perform relevant gestures are generally referred to as an “object.”
  • The user may express a command using their entire body or with other physical objects, in which case their entire body or the other physical objects may be the object.
  • A user may more subtly express a command by wiggling a finger, in which case the finger may be the object.
  • The user's gesture in a single image or between two images may be expressive of an enabling or “engagement” gesture.
  • An object may also be a physical device, such as an infrared finger light, a retro-reflector, or a remote control.
  • An application displayed on the display screen 102 may be controlled based on the determined user input. For example, if a swipe gesture is detected, a next picture may be displayed in a photo viewing application. As another example, if a “pounding” gesture is detected, a drum noise may be played based on a detected object position matching a corresponding position of a graphic of a drum displayed on the user interface displayed on the display screen 102 . As yet another example, a television channel may be changed based on a detected change in vertical position (e.g., up, down) of the detected object.
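As a hedged sketch of how a detected gesture might drive an application such as the photo viewer mentioned above, the snippet below classifies a series of tracked object positions as a left or right swipe. The distance thresholds and the track data are illustrative assumptions, not values from the patent.

```python
# Sketch of swipe detection from a per-frame track of object positions:
# a swipe is a large, mostly horizontal displacement across the frames.

def detect_swipe(positions, min_dx=100, max_dy=40):
    """Return 'left'/'right' if the track moves far enough horizontally
    while staying nearly level vertically, else None."""
    xs = [x for x, _ in positions]
    ys = [y for _, y in positions]
    dx = xs[-1] - xs[0]          # net horizontal displacement
    dy = max(ys) - min(ys)       # total vertical wobble
    if abs(dx) >= min_dx and dy <= max_dy:
        return "right" if dx > 0 else "left"
    return None

# Illustrative track: the hand sweeps left-to-right across four frames.
track = [(50, 200), (120, 205), (210, 198), (300, 202)]
# detect_swipe(track) classifies this as a "right" swipe.
```

A detected "right" swipe could then, for example, advance a photo viewing application to the next picture.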
  • FIG. 2A illustrates a front view of a tracking system 200.
  • The system 200 includes a camera 202 positioned at a top side of a display screen 204 and an illumination source 206 positioned at a bottom side of the display screen 204.
  • The lens of the camera 202 may be positioned a particular distance (e.g., five centimeters, ten centimeters) above the top side of the display screen 204.
  • The illumination source 206 may include a row of multiple illuminators (e.g., multiple infrared LEDs) and may be positioned a particular distance (e.g., five centimeters, ten centimeters) below the bottom side of the display screen 204.
  • The display screen 204 may be, for example, a twenty-one-inch computer monitor (e.g., the distance from one corner of the display screen 204 to the opposite corner may be twenty-one inches).
  • FIG. 2B illustrates a side view of a tracking system 215.
  • The system 215 includes a camera 220 positioned at a top side of a display screen 222.
  • The camera 220 is angled downward relative to the display screen 222.
  • The camera 220 may be positioned, for example, at a thirty degree angle.
  • An illumination source 224 is positioned at the bottom side of the display screen 222 and is angled upward towards the display screen 222.
  • The illumination source 224 may be positioned, for example, at a thirty degree angle.
  • The positions (e.g., distances from the display screen 222, angles) of the illumination source 224 and the camera 220 may be configured such that control objects (e.g., a hand or pointer) used by typical users are captured within an intersection region defined by the intersection of the field-of-view of the camera 220 and an illuminated area illuminated by the illumination source 224, and so that objects not intended as control objects are not captured in the intersection region.
  • The angle of the camera 220 and/or the angle of the illumination source 224 may affect the size and location of an intersection region defined by the intersection of the field-of-view of the camera 220 and an illuminated area illuminated by the illumination source 224. Additionally, the size and location of the intersection region may affect detection of objects in the intersection region. For example, if the angle of the camera 220 is configured so that the camera 220 is facing relatively straight out (e.g., at a small angle relative to a horizontal plane), an object may not be detected (e.g., may not be in the field-of-view of the camera 220) if the object is close to the display screen 222 and/or near the bottom of the display screen 222. Additionally, in such a configuration it may be difficult to detect an object such as a user's hand because the hand may be in front of other objects, such as the user's head or torso, in the captured camera image, making it difficult to distinguish the hand.
  • If the angle of the camera 220 is about forty-five degrees to the display screen, it may become difficult to distinguish between a user's in-and-out movements and a user's up-and-down movements (e.g., both movements may appear similar in a sequence of captured camera images).
  • If the angle of the camera 220 is configured so that the camera 220 is facing relatively straight down (e.g., at a small angle relative to a vertical plane), a user's up-and-down movements may be difficult to track.
  • An example configuration of camera angle and illumination source angle creates an intersection region that is close enough to the display screen 222 so that a user's outstretched hand or finger may be detected and so that the user's arm, torso, or head are not detected (e.g., such as the intersection region 112 shown in FIG. 1 ).
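As a rough two-dimensional (side view) sanity check of the geometry discussed above, the sketch below computes where the camera's central ray and the illumination source's central ray cross, which approximately locates the intersection region in front of the screen. The screen height and the thirty degree angles are illustrative values echoing the examples above, not a definitive model of the patented configuration.

```python
import math

# Simplified side-view model: the camera sits at the top edge of a screen
# of height `screen_height` and aims down at `cam_angle_deg` from
# horizontal; the illuminator sits at the bottom edge and aims up at
# `light_angle_deg`. Their central rays cross in front of the screen.

def intersection_point(screen_height, cam_angle_deg, light_angle_deg):
    """Return (distance_from_screen, height) of the ray crossing."""
    tc = math.tan(math.radians(cam_angle_deg))
    tl = math.tan(math.radians(light_angle_deg))
    # Camera ray: y = screen_height - x * tc; illuminator ray: y = x * tl.
    x = screen_height / (tc + tl)
    return (x, x * tl)

# Illustrative values: a 0.5 m tall screen with both devices at 30 degrees.
x, y = intersection_point(screen_height=0.5, cam_angle_deg=30, light_angle_deg=30)
# The rays cross roughly 0.43 m in front of the screen, at mid-height (0.25 m),
# comfortably placing an outstretched hand inside the intersection region.
```

Steeper angles pull the crossing point closer to the screen, which matches the qualitative trade-offs described in the surrounding paragraphs.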
  • A “sharp edge” (e.g., a steep change in intensity over a short distance) to the illuminated area is created by the illumination source 224 in order to reduce (e.g., minimize) the area where an object may be detected unreliably because it is illuminated by weak or extraneous infrared light from the illumination source 224.
  • LEDs with a narrow angle lens may be used, such as LEDs with a small angle of half intensity (e.g., where the angle of half intensity defines how far from center an LED drops to half of its maximum intensity).
  • An LED with a narrow-angled lens produces a relatively steep drop-off of light intensity as compared to LEDs with wider-angled lens.
  • Using an LED with a narrow-angled lens forms a narrower but more sharply defined illumination region as compared with using an LED with a wider-angled lens.
  • An LED lens that produces an oval illumination pattern may be used, so that the angle of half intensity of the lens in one dimension is narrow, and the angle of half intensity in the other dimension for the lens is wider.
  • Using lenses that produce oval illumination patterns may allow for LEDs to be spaced further apart than if other types of LEDs are used.
  • The illumination source 224 may be a first row of LEDs and an illumination source 226 may be a second row of LEDs.
  • The illumination source 224 may be a row of narrow-angled LEDs, which produce a sharp illuminated edge, and the illumination source 226 may be a row of wider-angled LEDs, which illuminate an area between the illuminated area created by the illumination source 224 and the display screen 222.
  • A sharp illuminated edge may also be created by using a channel, shield, or mirror which blocks emitted light on one side, thereby producing an illumination region where the edge nearest the channel, shield, or mirror is sharp while the edge near the surface of the display screen 222 is a softer illuminated edge.
  • A sharp edge may also be created by using a custom asymmetric lens.
  • FIG. 3 illustrates an example of a tracking system 300 .
  • the system 300 includes a display screen 301 , a storage medium 302 , a camera 304 , a processor 305 , and an illumination source 309 .
  • the system 300 may be included in or used in conjunction with a digital picture frame, a television, a monitor, a product display unit, or any type of media system.
  • the display screen 301 renders a visual display image.
  • the display screen 301 may be a monitor display, a television display, a liquid crystal display (LCD), a plasma display device, a projector with a projector screen, an auto-stereoscopic display, a cathode ray tube (CRT) display, a digital light processing (DLP) display, a digital picture frame display, or any other type of display device configured to render a visual display image.
  • the display screen 301 may include one or more display devices.
  • the display screen 301 may display images associated with an application.
  • the display screen 301 may render display images generated by an application (e.g., a photo viewing application).
  • the display images generated by the application may include a user interface with interface controls.
  • the camera 304 is a device that captures images.
  • the camera 304 may be a digital camera, a digital video camera, or any other type of device that captures images.
  • the camera 304 may be a single camera and the system 300 may include only the single camera. In other implementations, multiple cameras may be used.
  • the camera 304 may capture images of an object interacting with an interface displayed on the display screen 301 .
  • the camera 304 may capture images of a user or person physically interacting (e.g., with a finger or hand) with an interface displayed on the display screen 301 .
  • the camera 304 may be any type of image sensor and may be a line scan sensor.
  • the illumination source 309 is a device that provides a light source.
  • the illumination source 309 may be a flash device, an incandescent light bulb, a fluorescent light bulb, an LED, a halogen light source, a neon light source, a xenon light source, an infrared light source, or any other type of device configured to illuminate an object being imaged by the camera 304 .
  • a flash device may, over one or more cycles, project electromagnetic radiation and then extinguish the projected electromagnetic radiation.
  • the illumination source 309 may include one or more illuminators.
  • the illumination source 309 may generate light to assist in capturing a high quality image of an object being captured by the camera 304 .
  • the illumination source 309 may be used in particular situations. For instance, the illumination source 309 may be used at nighttime or in dark rooms.
  • the illumination source 309 may be positioned to define an intersection region within the field of view of the camera 304 . Defining an intersection region using the illumination source 309 may increase the accuracy of object detection with a single camera and also may increase the number of control objects that may be detected by a single camera. Using a single camera may help reduce costs of the system and enable gesture-based input control to be realized in less expensive devices.
  • the storage medium 302 stores and records information or data, and may be an optical storage medium, magnetic storage medium, flash memory, or any other storage medium type.
  • the storage medium 302 includes a vocabulary 310 and a gesture recognition module 314 .
  • the vocabulary 310 includes information regarding gestures that the system 300 may recognize.
  • the vocabulary 310 may include gesture definitions which describe, for each recognized gesture, a set of movements included in a gesture.
  • the gesture recognition module 314 receives captured images from the camera 304 , maps a position of a detected object to an interface displayed on the display screen 301 , and detects a gesture based on comparing positions of the detected object within a series of images to gesture definitions stored in the vocabulary 310 to determine whether a recognizable gesture has been performed.
  • the processor 305 may accept input from a user interface displayed on the display screen 301 and may analyze images captured by the camera 304 .
  • the processor 305 may execute applications and operating systems being run on the system 300 .
  • the system 300 may include multiple processors (or other control circuitry) and may include memory (or other computer-readable storage media) that stores application programs, operating systems, user input programs, and data used by the application programs, operating systems, and user input programs.
  • the system 300 does not include the display screen 301 .
  • the system 300 may be configured to detect objects in an intersection region where the intersection region is located in front of a different physical object such as a door, elevator, machine, radio, media player, or other object.
  • the system 300 is located in front of an area of space, such as a doorway or entryway.
  • FIG. 4 illustrates a process 400 for controlling an application.
  • the operations of the process 400 are described generally as being performed by the system 300 .
  • the operations of the process 400 may be performed exclusively by the system 300 , may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system.
  • operations of the process 400 may be performed by one or more processors included in one or more electronic devices.
  • the system 300 captures an image from a camera ( 402 ). For example, in reference to FIG. 1 , an image may be captured by the camera 104 . In some implementations, the system 300 may control the illumination source 106 to illuminate while the camera 104 is capturing an image (e.g., the illumination source 106 may be turned on prior to the capturing of images by the camera 104 ).
  • the illumination source is controlled to illuminate in sequence with images captured by the camera in an alternating pattern such that a first image captured by the camera is captured when the illumination source is illuminated and a second image captured by the camera is captured when the illumination source is not illuminated.
  • the captured camera image may include an intersection region which is defined by the intersection of the field-of-view of the camera and an area illuminated by an illumination source.
  • the intersection region is located in front of a display screen. In other implementations, the intersection region is located in front of another type of object, such as a radio, elevator, painting, manufacturing device, automatic teller machine, light switch, vending machine, beverage dispenser, or any other physical object. In some implementations, the intersection region is located in front of an area of space, such as a doorway.
  • the system 300 analyzes the image captured by the camera to detect an object within the intersection region ( 404 ). For example, in reference to FIG. 1 , the hand 116 located within the intersection region 112 is detected while the head, arm, or torso of the user 114 which are located outside of the intersection region 112 are not detected.
  • the system 300 may ignore objects that are within the camera's field of view and outside of the intersection region by analyzing the image for illuminated objects. Because any objects within the camera's field of view and outside of the intersection region are not illuminated, the system 300 ignores (e.g., does not detect) these objects.
  • a camera image captured while the illumination source is turned off may be subtracted from a camera image captured while the illumination source was turned on to produce a resulting image.
  • the resulting image may be analyzed to determine whether one or more objects are illuminated in the camera image captured when the illumination source was turned on. Subtracting the camera image captured when the illumination source was turned off may remove ambient light which was present in both camera images.
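The ambient-light subtraction described in the two bullets above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the function name and the representation of frames as lists of lists of 0–255 grayscale values are assumptions.

```python
def subtract_ambient(lit_frame, unlit_frame):
    """Subtract a frame captured with the illumination source off from a
    frame captured with it on. Ambient light present in both frames
    cancels out, leaving mostly the light contributed by the
    illumination source; differences are clamped at zero.
    Frames are grayscale images given as lists of lists of 0-255 ints."""
    return [[max(p_on - p_off, 0) for p_on, p_off in zip(row_on, row_off)]
            for row_on, row_off in zip(lit_frame, unlit_frame)]
```

For example, a pixel that reads 100 when illuminated and 40 when not yields 60 in the resulting image, while a pixel that is darker in the illuminated frame is clamped to 0 rather than going negative.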
  • the system 300 detects an object within an image by analyzing multiple images taken over time to detect moving objects.
  • the system 300 may use an optical flow process or examine a motion history image to detect objects in motion.
  • the system 300 tracks the objects in motion and ignores static objects. For example, in a situation in which a user's hand and the user's face are present within an intersection region and the user is moving his or her hand while keeping his or her face stationary, the system 300 detects and tracks the moving hand as an object of interest, but does not track the user's face as an object of interest.
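One simple way to separate moving objects from static ones, per the motion-based detection described above, is to compare detected blob centroids across consecutive frames. The following Python sketch (a stand-in for the optical flow or motion history image processes mentioned above; the function name and threshold are assumptions) treats a blob as static if a blob in the previous frame lies close to it:

```python
def moving_objects(prev_centroids, curr_centroids, min_motion=5.0):
    """Return centroids from the current frame that have moved.

    A current-frame blob is treated as static (and ignored) if some
    blob in the previous frame lies within min_motion pixels of it;
    otherwise it is treated as an object in motion and tracked.
    Centroids are (x, y) tuples; min_motion is an illustrative value."""
    moving = []
    for cx, cy in curr_centroids:
        static = any((cx - px) ** 2 + (cy - py) ** 2 < min_motion ** 2
                     for px, py in prev_centroids)
        if not static:
            moving.append((cx, cy))
    return moving
```

In the stationary-face example above, the face blob reappears at the same location frame after frame and is filtered out, while the moving hand's centroid shifts between frames and is kept as the object of interest.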
  • the system 300 detects an object within an image by analyzing shapes within the image.
  • the system 300 may attempt to detect a hand within the intersection region of the image. In attempting to detect a hand, the system 300 may compare shapes of illuminated objects within the image to a shape of a typical hand. When the system determines that a shape of an illuminated object matches the shape of the typical hand, the system 300 detects and tracks the object as an object of interest. When the system determines that a shape of an illuminated object does not match the shape of the typical hand, the system 300 does not track the object as an object of interest. Analyzing a camera image to detect an object within the intersection region is described in more detail below with respect to FIG. 5 .
  • FIG. 5 illustrates a process 500 for analyzing a camera image to detect an object within an intersection region.
  • the operations of the process 500 are described generally as being performed by the system 300 .
  • the process 500 may be used in analyzing an image captured by the camera to detect an object within the intersection region referenced above with respect to reference numeral 404 .
  • the operations of the process 500 may be performed exclusively by the system 300 , may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system.
  • operations of the process 500 may be performed by one or more processors included in one or more electronic devices.
  • the system 300 compares pixels of the image captured by the camera to a brightness threshold to produce a binary image ( 502 ). For example, pixels in the camera image having a brightness value above a threshold may be identified in the binary image with a value of one and pixels having a brightness value below the threshold may be identified in the binary image with a value of zero.
  • the system 300 groups pixels within the binary image into one or more blobs ( 504 ). For example, pixels may be clustered into one or more blobs based on proximity of the pixels together.
  • the system 300 groups one or more blobs within the binary image into one or more clusters based on a tracking mode ( 506 ). For example, blobs may be clustered into one or more clusters based on whether a tracking mode is a single object tracking mode, a two object adjacent tracking mode, or a two object stacked tracking mode.
  • the system 300 determines a position of one or more objects in the image captured by the camera based on the one or more clusters ( 508 ). For example, a position of a user's hand or finger, two hands, a stylus or other pointing device, a game controller, a remote control, or some other object may be determined. Determining the position of one or more objects based on a tracking mode is discussed in more detail below with respect to FIG. 6 .
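The first two steps of the process 500, thresholding into a binary image (502) and grouping pixels into blobs by proximity (504), can be sketched as follows. This is an illustrative Python sketch under the assumption that "proximity" means 4-connected adjacency; the function names and flood-fill approach are not from the patent.

```python
from collections import deque

def to_binary(image, threshold):
    # Step 502: pixels at or above the brightness threshold become 1,
    # pixels below it become 0.
    return [[1 if p >= threshold else 0 for p in row] for p_row in [0] for row in image]

def group_blobs(binary):
    # Step 504: group adjacent foreground pixels into blobs using a
    # 4-connected flood fill; returns a list of blobs, each a list of
    # (row, col) pixel coordinates.
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs
```

A bright region of touching pixels becomes one blob; two bright regions separated by dark pixels become two blobs, which steps 506 and 508 then cluster and reduce to object positions.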
  • FIG. 6 illustrates a process 600 for determining a position of one or more hands based on a tracking mode.
  • the process 600 may be used in grouping blobs within the binary image into one or more clusters based on a tracking mode referenced above with respect to reference numeral 506 and in determining a position of one or more objects in the image captured by the camera based on the one or more clusters referenced above with respect to reference numeral 508 .
  • the process 600 may be used to detect objects other than hands, such as fingers, pointing devices, etc.
  • the operations of the process 600 are described generally as being performed by the system 300 .
  • the operations of the process 600 may be performed exclusively by the system 300 , may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system.
  • operations of the process 600 may be performed by one or more processors included in one or more electronic devices.
  • the system 300 determines a tracking mode from among at least a single hand tracking mode, a two hand adjacent tracking mode, and a two hand stacked tracking mode ( 602 ). For example, a current tracking mode setting may be retrieved from the storage medium 302 referenced above with respect to FIG. 3 . In this example, the current tracking mode setting may be set based on user input (e.g., input setting the tracking mode as one of a single hand tracking mode, a two hand adjacent tracking mode, and a two hand stacked tracking mode).
  • the current tracking mode setting may be set based on which application is being controlled by the system 300 .
  • the system 300 may provide multiple applications (e.g., multiple games) that are controlled using different types of user input. The current tracking mode, therefore, is set based on the application being used and the type of input expected by that application.
  • the system 300 may detect the position of a single hand (or other object).
  • the system 300 may detect the position of two hands, where the two hands are held side by side in a horizontal orientation, with a gap between the two hands.
  • the system 300 may detect the position of two hands, where the two hands are stacked vertically, one on top of the other, with a gap between the two hands.
  • the system 300 clusters blobs within the binary image into a single cluster and determines a position of the single hand based on the single cluster ( 604 ). For example, the system 300 may cluster blobs within a binary image which was created based on performing a threshold brightness test as discussed above with respect to reference numeral 502 . Blobs may be clustered in the binary image using a k-means process, with a desired cluster count equal to one.
  • the system 300 may determine a position of the single hand based on the single cluster by computing a centroid of one or more blobs within the image. For example, when the single cluster includes a single blob, the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the single hand.
  • the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the single hand. In this example, the system 300 determines a weighting for the centroids based on a size of the corresponding blob and applies the determined weighting in combining the centroids to a position.
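The size-weighted combination of blob centroids described above can be sketched in Python as follows (the function name and the representation of a blob as a list of (x, y) pixel coordinates are assumptions):

```python
def hand_position(blobs):
    """Estimate a single hand position from one cluster of blobs.

    Each blob's centroid is weighted by the blob's size (its pixel
    count), so larger blobs pull the combined position toward them.
    blobs is a list of blobs, each a list of (x, y) pixel tuples."""
    total = sum(len(b) for b in blobs)
    x = sum(sum(px for px, _ in b) for b in blobs) / total
    y = sum(sum(py for _, py in b) for b in blobs) / total
    return (x, y)
```

When the cluster contains a single blob this reduces to that blob's centroid, matching the single-blob case described above; with multiple blobs, a two-pixel blob counts twice as much as a one-pixel blob.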
  • the system 300 clusters blobs in a horizontal direction from an outer edge of first and second sides of the image to a center of the image to identify a first cluster at the first side of the image and a second cluster at the second side of the image, determines a position of a first hand based on the first cluster, and determines a position of a second hand based on the second cluster ( 606 ).
  • Blobs may be clustered, for example, using a k-means process with a desired cluster count equal to two.
  • one blob may be detected and a centroid of the one blob may be computed as a position of a single detected hand.
  • the system 300 may indicate that, even though a two hand tracking mode is set, only a single hand was found.
  • the system 300 computes a first centroid of the first blob as a position of the first hand and computes a second centroid of the second blob as a position of the second hand.
  • the blobs may be clustered into a first cluster and a second cluster, for example, using a k-means process with a desired cluster count equal to two.
  • the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the first hand.
  • the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the first hand. In this example, the system 300 determines a weighting for the centroids based on a size of the corresponding blob and applies the determined weighting in combining the centroids to a position.
  • the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the second hand.
  • the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the second hand.
  • the system 300 determines a weighting for the centroids based on a size of the corresponding blob and applies the determined weighting in combining the centroids to a position.
  • proximity of blobs in the horizontal direction may be weighted higher than proximity of blobs in a vertical direction in clustering blobs into clusters.
  • a distance function which weights proximity of blobs in the horizontal direction higher than proximity of blobs in the vertical direction may be provided to a k-means clustering process.
  • FIG. 7 illustrates a binary image map 700 which includes blob centroids 702 - 706 at coordinates (2,2), (3,20), and (7,2), respectively.
  • the blob centroids 702 and 704 may be clustered together in a two hand adjacent tracking mode.
  • the blob centroid 702 may be clustered with the blob centroid 704 rather than with the blob centroid 706 despite the fact that the distance between the blob centroid 702 and the blob centroid 706 is less than the distance between the blob centroid 702 and the blob centroid 704 and despite the fact that the blob centroid 702 and the blob centroid 706 share the same Y coordinate.
  • the blob centroid 702 may be clustered with the blob centroid 704 rather than with the blob centroid 706 because the difference in the horizontal direction between the blob centroid 702 and the blob centroid 704 (i.e., one pixel) is less than the difference in the horizontal direction between the blob centroid 702 and the blob centroid 706 (i.e., five pixels).
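The FIG. 7 example can be reproduced with a distance function that weights horizontal proximity more heavily than vertical proximity. The sketch below is a simplified stand-in for the k-means process described above: it seeds the two clusters from the leftmost and rightmost blob centroids and assigns the rest by the weighted distance. The function name and the weight value are illustrative assumptions.

```python
def adjacent_clusters(centroids, horizontal_weight=100.0):
    """Split blob centroids into two clusters for the two-hand
    adjacent tracking mode. Horizontal (x) differences are weighted
    far more heavily than vertical (y) differences, so blobs that are
    close in x cluster together even when far apart in y."""
    def dist(a, b):
        return horizontal_weight * (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    left = min(centroids, key=lambda c: c[0])    # seed: first side of image
    right = max(centroids, key=lambda c: c[0])   # seed: second side of image
    first, second = [left], [right]
    for c in centroids:
        if c is left or c is right or c in (left, right):
            continue
        (first if dist(c, left) <= dist(c, right) else second).append(c)
    return first, second
```

Applied to the centroids (2,2), (3,20), and (7,2) from FIG. 7, the centroid at (3,20) joins the cluster seeded at (2,2) because its horizontal difference is one pixel versus five, matching the clustering of centroids 702 and 704 described above. The two-hand stacked mode of FIG. 8 is the symmetric case with the weight applied to the vertical axis instead.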
  • the system 300 clusters blobs in a vertical direction from an outer edge of a top and a bottom of the image to a center of the image to identify a first cluster at the top of the image and a second cluster at the bottom of the image, determines a position of a first hand based on the first cluster, and determines a position of a second hand based on the second cluster ( 608 ).
  • blobs may be clustered using a k-means process with a desired cluster count equal to two. In some scenarios, such as when only one hand is within the intersection region or when the user's two hands are placed close together, one blob may be detected and a centroid of the one blob may be computed as a position of a single detected hand.
  • the system 300 computes a first centroid of the first blob as a position of the first hand and computes a second centroid of the second blob as a position of the second hand.
  • the blobs may be clustered into a first cluster and a second cluster, for example, using a k-means process with a desired cluster count equal to two.
  • the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the first hand.
  • the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the first hand. In this example, the system 300 determines a weighting for the centroids based on a size of the corresponding blob and applies the determined weighting in combining the centroids to a position.
  • the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the second hand.
  • the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the second hand.
  • the system 300 determines a weighting for the centroids based on a size of the corresponding blob and applies the determined weighting in combining the centroids to a position.
  • proximity of blobs in the vertical direction may be weighted higher than proximity of blobs in a horizontal direction in clustering blobs into clusters.
  • a distance function which weights proximity of blobs in the vertical direction higher than proximity of blobs in the horizontal direction may be provided to a k-means clustering process.
  • FIG. 8 illustrates a binary image map 800 which includes blob centroids 802 - 806 at coordinates (2,2), (20,3), and (2,7), respectively.
  • the blob centroids 802 and 804 may be clustered together in a two hand stacked tracking mode.
  • the blob centroid 802 may be clustered with the blob centroid 804 rather than with the blob centroid 806 despite the fact that the distance between the blob centroid 802 and the blob centroid 806 is less than the distance between the blob centroid 802 and the blob centroid 804 and despite the fact that the blob centroid 802 and the blob centroid 806 share the same X coordinate.
  • the blob centroid 802 may be clustered with the blob centroid 804 rather than with the blob centroid 806 because the difference in the vertical direction between the blob centroid 802 and the blob centroid 804 (i.e., one pixel) is less than the difference in the vertical direction between the blob centroid 802 and the blob centroid 806 (i.e., five pixels).
  • the system 300 determines user input based on the object detected within the intersection region ( 406 ). For example, a gesture may be detected based on positions of the object detected within a series of images and a user input may be determined based on the recognized gesture. For example, a “swipe” user input may be detected and a “change station” user input may be determined based on the recognized swipe gesture. As another example, the position of the detected object may be mapped to a user interface control displayed by an application on a display screen. Determining user input for an application user interface is discussed in more detail below with respect to FIG. 9 .
  • FIG. 9 illustrates a process 900 for determining user input based on an object detected within an intersection region.
  • the operations of the process 900 are described generally as being performed by the system 300 .
  • the process 900 may be used in determining user input based on the object detected within the intersection region referenced above with respect to reference numeral 406 .
  • the operations of the process 900 may be performed exclusively by the system 300 , may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system.
  • operations of the process 900 may be performed by one or more processors included in one or more electronic devices.
  • the system 300 maps a position of a detected object to an interface displayed by the application being controlled ( 902 ). For example, the position of the detected object in a binary image may be mapped to a user interface displayed on a display screen. The position of the detected object may be mapped to a user interface control or graphic displayed on the user interface. For some user interface controls, such as a slider control, the position of the detected object may be mapped to a particular location on the user interface control. As another example, the position of the detected object may be mapped to the position of a cursor displayed on the user interface.
  • the system 300 detects a gesture based on positions of a detected object with a series of images ( 904 ). For example, if the position of the detected object is mapped to a cursor position while in a single object tracking mode, a movement gesture may be detected within the series of images to detect movement of the object from a first position to a second position. As another example, in a single object tracking mode, a swipe gesture may be detected if multiple detected positions of the object within a series of images indicate a fast side-to-side horizontal movement of the object.
  • in some examples, the tracking mode is a two hand adjacent tracking mode.
  • a “chop” gesture may be detected if positions of two objects within a series of images indicate that one detected object remains stationary and the other object moves quickly up and down in a vertical direction.
  • a “drumming” gesture may be detected if positions of two objects within a series of images indicate that both objects move up and down.
  • a “grab”, or “move hands together” gesture may be detected if positions of two objects within a series of images indicate that both objects start side-by-side a particular distance apart and then move inward towards each other resulting in the two objects being close to one another.
  • a “move hands apart” gesture may be detected if the positions of the two objects indicate that the objects start side-by-side and then move outward away from each other, resulting in the two objects being farther apart.
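The swipe detection described above, a fast side-to-side horizontal movement within a series of images, can be sketched as follows. This Python sketch is illustrative: the function name and the displacement thresholds are assumptions, not values from the patent.

```python
def is_swipe(positions, min_dx=80, max_dy=30):
    """Detect a horizontal swipe from a series of object positions.

    positions is a list of (x, y) tuples from consecutive images.
    A swipe is recognized when the net horizontal displacement is
    large while the vertical excursion stays small. Thresholds are
    in pixels and are illustrative only."""
    if len(positions) < 2:
        return False
    dx = positions[-1][0] - positions[0][0]
    dy = max(y for _, y in positions) - min(y for _, y in positions)
    return abs(dx) >= min_dx and dy <= max_dy
```

The other gestures above follow the same pattern over tracked positions: a "chop" checks for rapid vertical motion of one object while the other stays still, and "move hands together" / "move hands apart" compare the separation between the two tracked positions over time.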
  • the system 300 determines user input based on the mapped position of the detected object and/or the detected gesture ( 906 ). For instance, in the example where the object is mapped to a cursor position and where a movement gesture is detected, a cursor movement user input may be determined. In the example where the mapped position of the detected object corresponds to an element displayed in the user interface displayed by the application being controlled, a command to select the user interface element may be determined.
  • if a swipe gesture is detected, a “next photo” user input may be determined.
  • if a “chop” gesture is detected, a user input for a game may be determined which indicates a “swing” of a hammer or other object for a character within the game.
  • if a “move hands together” or “move hands apart” gesture is detected, a decrease volume or increase volume user input may be determined, respectively, or a zoom in or a zoom out user input may be determined, respectively.
  • FIG. 10 illustrates an example where movement of a cursor is controlled.
  • in this example, an object is mapped to a cursor position, a movement gesture is detected, a cursor movement user input is determined, and movement of a cursor is controlled.
  • a hand 1002 is detected in a camera image captured by a camera 1004 at a first position and the position of the hand 1002 is mapped to a first cursor position 1005 on a user interface 1006 displayed on a display screen 1008 .
  • Movement of the hand 1002 is detected within a series of camera images captured by the camera 1004 and a second position of the hand 1002 is determined, as indicated by a hand 1010 .
  • a cursor movement user input is determined based on the detected movement gesture, and the position of the cursor is moved from the first cursor position 1005 to a second cursor position 1012 in a direction and magnitude corresponding to the difference in the detected positions of the hand 1002 and the hand 1010 .
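Mapping a detected hand position in camera-image coordinates to a cursor position on the display screen, as in the FIG. 10 example, can be sketched as a proportional scaling. This Python sketch is illustrative; the function name is an assumption, and the x-axis mirroring reflects the assumption that the camera faces the user, so that moving a hand to the user's left moves the cursor left.

```python
def map_to_cursor(obj_pos, image_size, screen_size):
    """Map an object position in camera-image pixel coordinates to a
    cursor position in display-screen pixel coordinates.

    obj_pos is (x, y) in the camera image, image_size is (width,
    height) of the image, screen_size is (width, height) of the
    screen. The x axis is mirrored because the camera faces the user
    (an assumption for this sketch)."""
    ix, iy = obj_pos
    iw, ih = image_size
    sw, sh = screen_size
    return ((iw - 1 - ix) * (sw - 1) // (iw - 1),
            iy * (sh - 1) // (ih - 1))
```

As the hand moves from the first to the second detected position, applying this mapping to each detected position moves the cursor with a direction and magnitude corresponding to the hand's movement, as in the example above.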
  • FIG. 11 illustrates an example where a photo viewing application is controlled to display a different photo.
  • a hand-swipe gesture is detected, a “next-photo” user input is determined, and a displayed photo is replaced with a new photo.
  • a hand 1102 is detected in a camera image captured by a camera 1104 .
  • the user moves their hand 1102 to the left in a swiping motion, as indicated by a hand 1105 . Movement of the hand 1102 is detected within a series of camera images captured by the camera 1104 and a left swipe gesture is determined.
  • a “next photo” user input is determined based on the detected left swipe gesture.
  • a photo 1106 is displayed on a user interface 1108 displayed on a display screen 1110 . Based on the determined “next photo” user input, the photo 1106 is removed from the user interface 1108 and a different, next photo 1112 is displayed in place of the photo 1106 on the user interface 1108 .
  • FIG. 12 illustrates an example where a game is controlled.
  • a left hand 1202 and a right hand 1204 are detected in one or more images captured by a camera 1206 .
  • the positions of the hands 1202 and 1204 are mapped to cursor positions 1210 and 1212 , respectively, on a user interface 1214 displayed on a display screen 1216 .
  • the user makes a “chopping” gesture with their left hand 1202 , as indicated by a hand 1218 . Movement of the hand 1202 is detected within a series of images captured by the camera 1206 and a “chop” gesture is determined.
  • a “pound character” user input is determined based on the chop gesture.
  • a game animation is controlled based on the “pound character” user input.
  • a state of an animated character graphic 1220 may be determined corresponding to the time of the detected chop gesture (e.g., the character graphic 1220 may be alternating between an “in the hole” state and an “out of the hole” state). If the character graphic 1220 is in an “out of the hole” state at the time of the detected chop gesture, it may be determined that the character associated with the character graphic 1220 has been “hit”.
  • the game may be controlled accordingly, such as to change the character graphic 1220 to a different graphic or to otherwise change the user interface 1214 (e.g., the character may “yell”, make a face, get a “bump on the head”, disappear, appear “knocked out”, etc.), and/or a score may be incremented, or some other indication of success may be displayed.
  • FIG. 13 illustrates an example where volume of a media player is controlled.
  • a left hand 1302 and a right hand 1304 are detected in one or more images captured by a camera 1306 .
  • the user makes a “move hands together” gesture by moving the hands 1302 and 1304 inward towards each other.
  • the change in positions of the hands 1302 and 1304 are detected within a series of images captured by the camera 1306 and a “move hands together” gesture is detected.
  • a decrease-volume user input command is determined based on the detected gesture.
  • Volume of a media player application is decreased in a magnitude corresponding to the amount of horizontal movement of the hands 1302 and 1304 (e.g., a larger inward movement results in a larger decrease in volume).
  • a volume indicator control 1308 on a user interface 1310 displayed on a display screen 1312 is updated accordingly to indicate the decreased volume.
  • the gesture may be detected, an increase-volume user input command may be determined, the volume of the media player application may be increased in a magnitude corresponding to the amount of outward movement of the hands 1302 and 1304 , and the volume indicator control 1308 may be updated accordingly.
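The volume control in the FIG. 13 example, where the magnitude of the volume change corresponds to the amount of horizontal hand movement, can be sketched as follows. This Python sketch is illustrative; the function name, gain factor, and 0–100 volume range are assumptions.

```python
def adjust_volume(volume, prev_separation, curr_separation, gain=0.5):
    """Adjust a media player volume from a two-hand gesture.

    Separations are the horizontal distances between the two tracked
    hands before and after the gesture. Moving the hands together
    (separation shrinks) decreases volume; moving them apart
    increases it. The change is proportional to the movement, scaled
    by an illustrative gain, and clamped to a 0-100 range."""
    delta = (curr_separation - prev_separation) * gain
    return max(0, min(100, volume + delta))
```

A larger inward movement of the hands thus produces a larger decrease in volume, matching the behavior described above, and the volume indicator control would be redrawn at the returned level.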
  • An application or system without a corresponding display screen may be controlled based on the determined user input.
  • the user input may be a “change station” user input determined based on a recognized swipe gesture performed in front of a car radio player and the car radio player may be controlled to change to a next station in a list of defined stations.
  • the user input may be a “summon elevator” user input determined based on an object (e.g., hand) detected in front of an elevator door, and an elevator system may be controlled to transfer an elevator from another floor to the floor where the elevator door is located.
  • the user input may be an “open door” user input based on a detected object (e.g., person) in front of a doorway, and a door may be opened in response to the user input.
  • FIG. 14 illustrates a process 1400 for determining a position of an object.
  • the operations of the process 1400 are described generally as being performed by the system 300 .
  • the operations of the process 1400 may be performed exclusively by the system 300 , may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system.
  • operations of the process 1400 may be performed by one or more processors included in one or more electronic devices.
  • the system 300 controls multiple illumination sources to illuminate in sequence with images captured by a camera in an alternating pattern ( 1402 ).
  • multiple illumination sources may be positioned at an opposite side of a display screen from a camera.
  • Each illumination source may be positioned at a different angle to illuminate a different illuminated area in front of the display screen.
  • FIGS. 15A-C illustrate various illumination source configurations.
  • FIG. 15A illustrates a system 1510 in which an illumination source 1512 is positioned to produce an illuminated area 1514 in front of a display screen 1516 .
  • An intersection region 1518 is formed by the intersection of the illuminated area 1514 and a wide-angle field-of-view 1520 of a camera 1522 . Most of the area of the intersection region 1518 is located near the top of the display screen 1516 .
  • FIG. 15B illustrates a system 1530 in which an illumination source 1532 is positioned to produce an illuminated area 1534 angled further away from a display screen 1536 (e.g., as compared to the angle between the illuminated area 1514 and the display screen 1516 ).
  • An intersection region 1538 located near the center of the display screen 1536 is formed by the intersection of the illuminated area 1534 and a medium-angle field-of-view 1540 of a camera 1542 .
  • FIG. 15C illustrates a system 1550 in which an illumination source 1552 is positioned to produce an illuminated area 1554 angled even further away from a display screen 1556 (e.g., as compared to the angle between the illuminated area 1514 and the display screen 1516 ).
  • An intersection region 1558 located near the bottom of the display screen 1556 is formed by the intersection of the illuminated area 1554 and a narrow-angle field-of-view 1560 of a camera 1562 .
  • FIG. 16 illustrates a system 1600 which includes multiple illumination sources.
  • the system 1600 includes illumination sources 1602 - 1606 producing illuminated areas 1608 - 1612 , respectively.
  • the illumination sources 1602 - 1606 may correspond, for example, to illumination sources 1512 , 1532 , and 1552 , respectively, and the illuminated areas 1608 - 1612 may correspond to illuminated areas 1514 , 1534 , and 1554 , respectively (e.g., as described above with respect to FIGS. 15A-C ).
  • the illumination sources 1602 - 1606 may be controlled to illuminate, one at a time, in sequence with images captured by a camera 1614 .
  • the illumination source 1602 may be controlled to illuminate the illuminated area 1608 while the camera 1614 captures a first camera image
  • the illumination source 1604 may be controlled to illuminate the illuminated area 1610 while the camera 1614 captures a second camera image
  • the illumination source 1606 may be controlled to illuminate the illuminated area 1612 while the camera captures a third camera image.
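The alternating pattern described above can be sketched with hypothetical camera and illumination-source objects (the class names and methods are assumptions made for illustration):

```python
class FakeSource:
    """Stand-in for one illumination source."""
    def __init__(self):
        self.lit = False
    def on(self):
        self.lit = True
    def off(self):
        self.lit = False

class FakeCamera:
    """Stand-in camera: each capture records which sources were lit."""
    def __init__(self, sources):
        self.sources = sources
    def capture(self):
        return tuple(s.lit for s in self.sources)

def capture_sequence(camera, sources, frames):
    """Light exactly one source per captured frame, cycling in order,
    and tag each image with the index of the source that was lit."""
    tagged = []
    for i in range(frames):
        src = sources[i % len(sources)]
        src.on()
        tagged.append((i % len(sources), camera.capture()))
        src.off()
    return tagged

sources = [FakeSource() for _ in range(3)]
tagged = capture_sequence(FakeCamera(sources), sources, frames=3)
# each captured frame sees only its own source lit
```

Tagging every frame with its source index is what later lets the system identify which image corresponds to which illuminated area.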
  • the system 300 identifies an image captured when the corresponding illumination source was illuminated and the other illumination sources were not ( 1404 ). For example and as shown in FIG. 16 , a first camera image may be identified which corresponds to when the illumination source 1602 was illuminated, a second camera image may be identified which corresponds to when the illumination source 1604 was illuminated, and a third camera image may be identified which corresponds to when the illumination source 1606 was illuminated.
  • the system 300 analyzes each of the identified images in combination to determine an enhanced position of an object within an intersection region defined by the multiple illumination sources ( 1406 ). For instance, in the example of FIG. 16 , a finger 1616 of a user 1618 reaching towards the bottom of a display screen 1620 may be detected in a camera image captured when the illumination source 1606 is illuminated. If the user reaches farther forward, closer to the display screen 1620 , the finger 1616 may be detected when either the illumination source 1604 or the illumination source 1602 is illuminated.
  • An approximately rectangular intersection region 1622 is formed by the combination of the intersection of the illuminated areas 1608 - 1612 and one or more field-of-views of the camera 1614 . That is, the overlapping of the intersection of the illuminated area 1612 and a field-of-view of the camera 1614 with the intersection of the illuminated area 1610 and a field-of-view of the camera 1614 with the intersection of the illuminated area 1608 and a field-of-view of the camera 1614 nearly fills the rectangular area 1622 .
  • the use of illuminators 1602 - 1606 to form the rectangular intersection region 1622 allows for an object (e.g., the finger 1616 ) to be detected at close to a constant distance (e.g., six inches) from the display 1620 . Additionally, the use of multiple illuminators 1602 - 1606 allows for a depth detection of the finger 1616 (e.g., distance from the display screen 1620 ), as well as for detection of a horizontal and vertical position of the finger 1616 .
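Because each illuminator covers a different band of distances from the display, which source "sees" the object yields a coarse depth estimate. A sketch, with assumed calibration values:

```python
# Hypothetical calibration: source index -> approximate band of
# distances from the display screen, in inches (assumed values).
DEPTH_BANDS = {0: (0, 2), 1: (2, 4), 2: (4, 6)}

def estimate_depth(detections):
    """detections maps source index -> whether the object was detected
    in the image captured while that source was lit. The source that
    illuminates closest to the screen and saw the object determines
    the reported depth band."""
    lit = [i for i, seen in detections.items() if seen]
    if not lit:
        return None
    return DEPTH_BANDS[min(lit)]
```

For example, a finger seen only while source 2 was lit is in the farthest band; as the user reaches forward it appears in the images of sources 1 and then 0.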
  • FIG. 17 illustrates a process 1700 for determining a position of one or more objects.
  • the operations of the process 1700 are described generally as being performed by the system 300 .
  • the operations of the process 1700 may be performed exclusively by the system 300 , may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system.
  • operations of the process 1700 may be performed by one or more processors included in one or more electronic devices.
  • the system 300 captures a grayscale image ( 1702 ).
  • the system 300 controls the illumination source 106 while the camera 104 is capturing grayscale images (e.g., as described above with respect to FIG. 1 ).
  • the system 300 compares pixels of the grayscale image to a brightness threshold to produce a corresponding binary image ( 1704 ). For example, pixels in the camera image having a brightness value above a threshold may be identified in the binary image with a value of one and pixels having a brightness value below the threshold may be identified in the binary image with a value of zero.
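The thresholding step can be sketched in a few lines (plain Python lists stand in for the camera image; the threshold value is an assumption):

```python
def to_binary(gray, threshold=128):
    """Produce a binary image: 1 where the grayscale pixel is brighter
    than the threshold, 0 otherwise."""
    return [[1 if px > threshold else 0 for px in row] for row in gray]

gray = [[10, 200, 210],
        [30, 190,  40],
        [20,  25,  35]]
binary = to_binary(gray)
```

Only the brightly illuminated pixels survive, which is what restricts detection to the intersection region.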
  • the system 300 groups pixels within the binary image into one or more blobs ( 1706 ). For example, pixels may be clustered into one or more blobs based on the proximity of the pixels to one another.
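Grouping by proximity is, in effect, connected-component labeling. A minimal flood-fill sketch over the binary image (4-connectivity is an assumption; the disclosure does not specify one):

```python
from collections import deque

def find_blobs(binary):
    """Group 4-connected pixels of value 1 into blobs, each a sorted
    list of (row, col) coordinates."""
    rows, cols = len(binary), len(binary[0])
    seen, blobs = set(), []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] == 1 and (r, c) not in seen:
                blob, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] == 1 and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                blobs.append(sorted(blob))
    return blobs

blobs = find_blobs([[1, 1, 0],
                    [0, 0, 0],
                    [0, 1, 1]])  # two separate blobs
```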
  • the system 300 references the grayscale image while clustering blobs within the binary images into one or more clusters ( 1708 ).
  • Grayscale images may be referenced, for example, if two or more blobs are adjacent to one another in a binary image. Grayscale images might not be referenced if, for example, one blob exists in a binary image.
  • a binary clustering of pixels may result in two blobs (e.g., one blob for the thumb and one blob for the rest of the hand).
  • Grayscale images may be referenced to determine whether the blob for the thumb and the blob for the hand should be connected, or whether they are in fact two distinct objects.
  • Pixels located between the two blobs (e.g., where the thumb connects to the hand) may have had brightness values which were close to the brightness threshold, which indicates that the area between the blob for the thumb and the blob for the hand might be part of a single object (e.g., the hand along with the thumb) and that the area between the thumb and the hand was illuminated, but not highly illuminated, by the illumination source.
  • the two objects may be connected and treated as a single cluster.
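The merge decision can be sketched by sampling the grayscale pixels in the gap between two blobs. The same-row simplification and the `margin` value are assumptions made to keep the sketch short:

```python
def should_merge(gray, blob_a, blob_b, threshold=128, margin=40):
    """Join two blobs when every grayscale pixel in the horizontal gap
    between their nearest same-row pixels is 'almost bright enough'
    (lit, but within `margin` below the binary threshold)."""
    pairs = [(a, b) for a in blob_a for b in blob_b if a[0] == b[0]]
    if not pairs:
        return False
    a, b = min(pairs, key=lambda p: abs(p[0][1] - p[1][1]))
    row = a[0]
    lo, hi = min(a[1], b[1]) + 1, max(a[1], b[1])
    gap = [gray[row][c] for c in range(lo, hi)]
    return bool(gap) and all(threshold - margin <= px <= threshold for px in gap)

# thumb blob, dimly lit webbing at column 2, hand blob
gray = [[200, 200, 110, 200, 200]]
thumb, hand = [(0, 0), (0, 1)], [(0, 3), (0, 4)]
merge = should_merge(gray, thumb, hand)  # dim gap -> treat as one object
```

A dark gap (a pixel far below the threshold) instead indicates two distinct objects, and the blobs remain separate clusters.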
  • the system 300 determines a position of one or more objects in the captured images based on results of the clustering ( 1710 ). For example, for each cluster, a position may be computed. In this example, the position may be a centroid of a single blob in the cluster or a weighted combination of centroids from multiple blobs in the cluster. For blobs that were clustered based on referencing grayscale images, one position may be computed for the clustered blobs and used as the position of a corresponding detected object.
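The position computation can be sketched as follows; weighting blob centroids by blob size is an assumption, since the disclosure says only "weighted combination of centroids":

```python
def centroid(blob):
    """Centroid of one blob, given as a list of (row, col) pixels."""
    n = len(blob)
    return (sum(p[0] for p in blob) / n, sum(p[1] for p in blob) / n)

def cluster_position(blobs):
    """Position of a cluster: its blob centroids combined, weighted by
    blob size, so a large blob (palm) dominates a small one (thumb)."""
    total = sum(len(b) for b in blobs)
    y = sum(centroid(b)[0] * len(b) for b in blobs) / total
    x = sum(centroid(b)[1] * len(b) for b in blobs) / total
    return (y, x)

pos = cluster_position([[(0, 0), (0, 2)], [(2, 1)]])
```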
  • FIG. 18 illustrates an example of a tracking system 1800 .
  • the system 1800 may be used, for example, in a museum.
  • the system 1800 may be targeted, for example, for use by blind patrons of the museum.
  • the system 1800 includes a painting 1802 , a camera 1804 , a speaker 1805 , and an illumination source 1806 .
  • the speaker 1805 may play a repeating sound, such as a “chirp” or “beep” to direct patrons to the vicinity of the painting 1802 .
  • a blind patron 1808 may hear the beeping and may walk up to the painting 1802 .
  • the camera 1804 is configured to capture images and is positioned at the top side of the painting 1802 and is angled downward with respect to the painting 1802 .
  • a field-of-view 1809 of the camera 1804 is located in front of the painting 1802 .
  • the illumination source 1806 is positioned at the bottom side of the painting 1802 .
  • the illumination source 1806 is configured to illuminate an illuminated area 1810 located in front of the painting 1802 .
  • the illuminated area 1810 intersects the field-of-view 1809 to define an intersection region 1812 in front of the painting 1802 .
  • Captured camera images may be analyzed to detect an object such as a hand 1816 of the patron 1808 within the intersection region 1812 .
  • a user input may be determined based on the detection of the object within the intersection region 1812 .
  • a “play audio recording” user input may be determined based on the presence of an object within the intersection region 1812 .
  • the speaker 1805 may be controlled to play an audio recording providing details about the painting 1802 .
  • a gesture may be detected based on one or more determined positions of the detected object. For example, a “swipe” gesture may be determined, a “stop audio playback” user input may be determined based on the recognized gesture, and the speaker 1805 may be controlled to turn off playback of the audio recording.
  • FIG. 19 illustrates a system 1900 for object tracking.
  • a camera 1902 is included in (e.g., embedded in or mounted on) a car dashboard 1904 .
  • the field-of-view of the camera 1902 may be in front of the dashboard 1904 (e.g., extending from the camera 1902 towards the back of the vehicle).
  • a radio 1905 may be positioned below the camera 1902 .
  • the field-of-view of the camera 1902 may be angled downward to capture images of an area in front of the radio 1905 .
  • an illumination source 1909 may be positioned below the camera 1902 .
  • the illumination source 1909 may be a row of infrared LEDs.
  • the row of infrared LEDs may be angled upward such that infrared light emitted by the row of infrared LEDs intersects the field-of-view of the camera 1902 to define an intersection region.
  • the intersection region may be positioned about eight inches away from a front surface of the radio 1905 and may have a height that is similar to the height of the radio 1905 .
  • the intersection region may be defined a sufficient distance above the gear shift such that a driver's movements to control the gear shift are not within the intersection region. In this configuration, the driver's movements to control the gear shift are not interpreted as control inputs to the radio 1905 , even though the driver's movements to control the gear shift are within the field-of-view of the camera 1902 .
  • a user's hand 1906 may be detected in one or more camera images captured by the camera 1902 .
  • the user may perform a right-to-left swipe gesture within the intersection region defined in front of the radio 1905 , as illustrated by a hand 1908 .
  • the swipe gesture may be detected if multiple detected positions of the user's hand within a series of images indicate a fast side-to-side horizontal movement of the hand 1906 .
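A sketch of such a swipe test over tracked hand positions; the thresholds (pixels of horizontal travel, tolerated vertical drift, frame budget) are illustrative assumptions:

```python
def is_swipe(xs, ys, min_dx=100, max_dy=30, max_frames=10):
    """True when a short series of hand positions shows a large, fast
    horizontal sweep (in either direction) with little vertical drift."""
    if len(xs) < 2 or len(xs) > max_frames:
        return False
    return abs(xs[-1] - xs[0]) >= min_dx and max(ys) - min(ys) <= max_dy

swipe = is_swipe([300, 250, 180, 120], [100, 102, 99, 101])  # right-to-left
```

Capping the number of frames is what makes the test "fast": a slow drift across the same distance spans more frames and is rejected.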
  • a “change station” user input may be determined based on the detected swipe gesture.
  • the radio 1905 may be controlled to change to a different radio station (e.g., change to a next radio station in a list of predefined radio stations) based on the detected swipe gesture. Allowing a user to change a radio station of a car radio by swiping the user's hand in front of the car radio may increase the safety of using the car radio because the user may control the car radio without diverting his or her eyes from the road.
  • FIG. 20 is a schematic diagram of an example of a generic computer system 2000 .
  • the system 2000 can be used for the operations described in association with the processes 400 , 500 , 600 , 900 , 1400 , and 1700 , according to one implementation.
  • the system 2000 includes a processor 2010 , a memory 2020 , a storage device 2030 , and an input/output device 2040 .
  • Each of the components 2010 , 2020 , 2030 , and 2040 is interconnected using a system bus 2050 .
  • the processor 2010 is capable of processing instructions for execution within the system 2000 .
  • the processor 2010 is a single-threaded processor.
  • the processor 2010 is a multi-threaded processor.
  • the processor 2010 is capable of processing instructions stored in the memory 2020 or on the storage device 2030 to display graphical information for a user interface on the input/output device 2040 .
  • the memory 2020 stores information within the system 2000 .
  • the memory 2020 is a computer-readable medium.
  • the memory 2020 is a volatile memory unit.
  • the memory 2020 is a non-volatile memory unit.
  • the storage device 2030 is capable of providing mass storage for the system 2000 .
  • the storage device 2030 is a computer-readable medium.
  • the storage device 2030 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
  • the input/output device 2040 provides input/output operations for the system 2000 .
  • the input/output device 2040 includes a keyboard and/or pointing device.
  • the input/output device 2040 includes a display unit for displaying graphical user interfaces.
  • the features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them.
  • the apparatus can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output.
  • the described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device.
  • a computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result.
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer.
  • a processor will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data.
  • a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks.
  • Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
  • the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
  • the features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them.
  • the components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
  • the computer system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a network, such as the described one.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Abstract

Object tracking technology, in which an illumination source is controlled to illuminate while a camera is capturing an image to define an intersection region within the image captured by the camera. The image captured by the camera is analyzed to detect an object within the intersection region. User input is determined based on the object detected within the intersection region and an application is controlled based on the determined user input.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 12/578,530, filed Oct. 13, 2009 and entitled “Proximity Object Tracker”, which claims priority from U.S. Provisional Patent Application Ser. No. 61/249,527, filed Oct. 7, 2009, entitled “Hover Detection.” The entire contents of the previous applications are incorporated herein by reference for all purposes.
FIELD
The present disclosure generally relates to object tracking.
BACKGROUND
Cameras have been used to capture images of objects. Techniques have been developed to analyze one or more images of an object present within one or more images to detect a position of the object. For example, optical flow has been used to detect motion of an object by analyzing multiple images of the object taken successively in time.
SUMMARY
In one aspect, an electronic system includes a camera having a field of view of a first area and an illumination source that is angled with respect to the camera and that is configured to illuminate a second area. The second area intersects the first area to define an intersection region within the field of view of the camera. The electronic system also includes a processing unit configured to perform operations. The operations include capturing an image from the camera and analyzing the image captured by the camera to detect an object within the intersection region. The operations also include determining user input based on the object detected within the intersection region and controlling an application based on the determined user input.
Implementations may include one or more of the following features. For example, the electronic system may include a display screen configured to display a graphical user interface. In this example, the camera may be positioned at a first side of the display screen, may be angled with respect to the display screen, and the field of view of the camera may be of the first area in front of the display screen. Further, in this example, the illumination source may be positioned at a second side of the display screen, may be angled with respect to the display screen, and may be configured to illuminate the second area in front of the display screen. The second side of the display screen may be opposite of the first side of the display screen and the second area in front of the display screen may intersect the first area in front of the display screen to define the intersection region in front of the display screen.
In some examples, the operations performed by the processing unit may include comparing pixels of the image captured by the camera to a brightness threshold to produce a binary image. Pixels in the binary image may indicate whether or not the corresponding pixels in the image captured by the camera meet the brightness threshold. In these examples, the operations also may include grouping pixels within the binary image into one or more blobs, grouping the one or more blobs into one or more clusters, and determining a position of one or more objects in the binary image based on the one or more clusters.
In some implementations, the operations may include clustering blobs within the binary image into one or more clusters based on a tracking mode of the electronic system. For instance, when the electronic system is configured in a single object tracking mode, the operations may include clustering blobs within the binary image into a single cluster, determining a position of the single object based on the single cluster; and determining user input based on the position of the single object.
When the electronic system is configured in a two object adjacent tracking mode, the operations may include clustering blobs in a horizontal direction from an outer edge of first and second sides of the binary image to a center of the binary image to identify a first cluster at the first side of the image and a second cluster at the second side of the image. The operations also may include determining a position of a first object based on the first cluster, determining a position of a second object based on the second cluster, and determining user input based on the position of the first object and the position of the second object. The operations further may include weighting proximity of blobs in the horizontal direction higher than proximity of blobs in a vertical direction in clustering blobs together.
When the electronic system is configured in a two object stacked tracking mode, the operations may include clustering blobs in a vertical direction from an outer edge of a top and a bottom of the binary image to a center of the binary image to identify a first cluster at a top portion of the image and a second cluster at a bottom portion of the image. The operations also may include determining a position of a first object based on the first cluster, determining a position of a second object based on the second cluster, and determining user input based on the position of the first object and the position of the second object. The operations further may include weighting proximity of blobs in the vertical direction higher than proximity of blobs in a horizontal direction in clustering blobs together.
In some examples, the operations may include determining a tracking mode of the electronic system from among at least a single hand tracking mode, a two hand adjacent tracking mode, and a two hand stacked tracking mode. In response to the determined tracking mode of the electronic system being the single hand tracking mode, the operations may include clustering blobs within the binary image into a single cluster and computing a position of the single object based on the single cluster. In response to the determined tracking mode of the electronic system being the two hand adjacent tracking mode, the operations may include clustering blobs in a horizontal direction from an outer edge of first and second sides of the binary image to a center of the binary image to identify a first cluster at the first side of the image and a second cluster at the second side of the image, computing a position of a first object based on the first cluster, and computing a position of a second object based on the second cluster. In response to the determined tracking mode of the electronic system being the two hand stacked tracking mode, the operations may include clustering blobs in a vertical direction from an outer edge of a top and a bottom of the binary image to a center of the binary image to identify a first cluster at a top portion of the image and a second cluster at a bottom portion of the image, computing a position of a first object based on the first cluster, and computing a position of a second object based on the second cluster.
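The three tracking modes can be sketched with a simplified rule that splits blob centroids at the image midlines; the edge-to-center ordering and the directional proximity weighting described above are elided to keep the sketch short:

```python
def cluster_by_mode(centroids, width, height, mode):
    """centroids: list of (x, y) blob centroids in image coordinates.
    'single' keeps one cluster; 'adjacent' splits left/right at the
    vertical midline; 'stacked' splits top/bottom at the horizontal
    midline. A simplified stand-in for the clustering described above."""
    if mode == "single":
        return {"all": list(centroids)}
    if mode == "adjacent":
        return {"left":  [c for c in centroids if c[0] < width / 2],
                "right": [c for c in centroids if c[0] >= width / 2]}
    if mode == "stacked":
        return {"top":    [c for c in centroids if c[1] < height / 2],
                "bottom": [c for c in centroids if c[1] >= height / 2]}
    raise ValueError("unknown tracking mode: " + mode)

split = cluster_by_mode([(10, 20), (90, 80)], 100, 100, "adjacent")
```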
In some implementations, the operations may include mapping a position of the detected object to an interface displayed by the application being controlled and determining user input based on the mapped position of the detected object to the interface displayed by the application being controlled. In these implementations, the operations may include determining whether the mapped position of the detected object corresponds to an element displayed in the interface displayed by the application being controlled. In addition, in these implementations, the operations may include mapping the position of the detected object to a cursor position in the interface displayed by the application being controlled and determining user input based on the cursor position in the interface displayed by the application being controlled.
In some examples, the operations may include detecting performance of a gesture by the detected object based on positions of the detected object determined within a series of images captured by the camera and determining user input based on the detected gesture. In these examples, the operations may include detecting a swipe gesture and determining user input based on the detected swipe gesture. Further, in these examples, the operations may include detecting a gesture in which two detected objects are moving horizontally together or apart and determining user input based on the detected gesture in which two detected objects are moving horizontally together or apart.
The illumination source may be a first illumination source and the electronic system may include a second illumination source that is angled with respect to the camera differently than the first illumination source and that is configured to illuminate a third area. The third area may intersect the first area, may be different than the second area, and, in combination with the second area, may define a combined intersection region within the field of view of the camera. When the electronic system includes a second illumination source, the operations may include controlling the first and second illumination sources to illuminate in sequence with images captured by the camera in an alternating pattern, identifying a first image captured when the first illumination source was illuminated and the second illumination source was not illuminated, identifying a second image captured when the first illumination source was not illuminated and the second illumination source was illuminated, and analyzing the first and second images in combination to determine a position of an object within the combined intersection region defined by the first and second illumination sources.
In some implementations, the operations may include capturing a grayscale image and comparing pixels of the grayscale image to a brightness threshold to produce a binary image. Pixels in the binary image may indicate whether or not the corresponding pixels in the grayscale image captured by the camera meet the brightness threshold. In these implementations, the operations also may include grouping pixels within the binary image into blobs, referencing the grayscale image in clustering blobs within the binary image into one or more clusters, and determining a position of one or more objects in the binary image based on results of the clustering.
The illumination source may be an infrared emitter. The operations may include ignoring objects that are within the camera's field of view and outside of the intersection region. The operations also may include using motion information to detect a moving object within the intersection region. The motion information may include motion history data and/or optical flow data.
In some examples, the operations may include controlling the illumination source to illuminate while the camera is capturing the image to define the intersection region within the image captured by the camera. In these examples, the operations may include controlling the illumination source to turn on prior to capturing the image from the camera. In addition, in these examples, the operations may include controlling the illumination source to illuminate in sequence with images captured by the camera in an alternating pattern such that a first image captured by the camera is captured when the illumination source is illuminated and a second image captured by the camera is captured when the illumination source is not illuminated. The operations may include subtracting the second image from the first image to produce a resulting image and analyzing the resulting image to detect the object within the intersection region.
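The subtraction step can be sketched directly (plain nested lists stand in for the camera frames):

```python
def subtract(lit, unlit):
    """Subtract the unlit frame from the lit frame, clamping at zero,
    so ambient light cancels and only actively illuminated objects
    remain in the resulting image."""
    return [[max(a - b, 0) for a, b in zip(row_l, row_u)]
            for row_l, row_u in zip(lit, unlit)]

lit   = [[120, 240], [115, 118]]
unlit = [[118,  60], [117, 116]]
diff  = subtract(lit, unlit)  # only the illuminated spot stands out
```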
In another aspect, at least one computer-readable storage medium is encoded with executable instructions that, when executed by at least one processor, cause the at least one processor to perform operations. The operations include capturing an image from a camera and analyzing the image captured by the camera to detect an object within an intersection region defined within the camera's field of view by an illumination source. The operations also include determining user input based on the object detected within the intersection region and controlling an application based on the determined user input.
In yet another aspect, a method includes capturing an image from a camera and analyzing the image captured by the camera to detect an object within an intersection region defined within the camera's field of view by an illumination source. The method also may include determining user input based on the object detected within the intersection region and controlling an application based on the determined user input.
The details of one or more implementations are set forth in the accompanying drawings and the description, below. Other potential features and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1, 2A-B, 3, 15A-C, 16, 18, 19, and 20 are diagrams of exemplary systems.
FIGS. 4, 5, 6, 9, 14, and 17 are flowcharts of exemplary processes.
FIGS. 7 and 8 are diagrams of exemplary clusters.
FIGS. 10, 11, 12, and 13 are diagrams of exemplary gestures and associated user interfaces.
Like reference numbers represent corresponding parts throughout.
DETAILED DESCRIPTION
Techniques are described for tracking one or more objects (e.g., one or more hands) in front of a display surface. In some implementations, a system includes a light source placed to one of the four sides of a display (e.g., top side of the user-facing surface of the display) with its light oriented towards a tracking region in front of the display. The system also includes a camera placed on the opposite side of the display (e.g., bottom side of the user-facing surface of the display) and oriented towards the tracking region in front of the display. The light source may be a row of infrared emitters (which may or may not be flashing) and the one or more objects (e.g., one or more hands) are tracked within camera images as blobs either individually or as a group.
In these implementations, the placement of the camera and the infrared emitters and their angle relative to each other create an intersection region that defines the tracking region and limits the potential for errors. The tracking region may be moved around or redefined to a certain degree, as long as the infrared emitters do not illuminate other objects beyond the tracked object that are still in the view of the camera. By creating an intersection region and tracking objects within the intersection region, more accurate object tracking may be possible at a lower cost.
FIG. 1 illustrates an example of a tracking system 100. The system 100 includes a display screen 102, a camera 104, and an illumination source 106. The display screen 102 may be, for example, a computer monitor, digital picture frame, or a television screen, or a non-electric screen upon which an image is projected. In some examples, the display screen 102 may be behind a glass window. The display screen 102 may be configured to display a graphical user interface for an application which includes one or more interface controls.
The camera 104 captures images. The camera 104 is positioned at the top side of the display screen 102 and is angled downward with respect to the display screen 102. A field-of-view 108 of the camera 104 is located in front of the display screen 102. In other configurations, the camera 104 may be positioned at a different side of the display screen 102 (e.g., the bottom, left, or right side) or may be embedded within or included in the display screen 102. The camera 104 also may be positioned behind the display screen 102.
The illumination source 106 is positioned at the bottom edge of the display screen 102 (e.g., at the opposite edge of the display screen 102 as compared to the position of the camera 104) and is angled upward with respect to the display screen 102. In other configurations, the illumination source 106 may be positioned at a different edge of the display screen 102 (e.g., the top edge or a side edge). The illumination source 106 may be, for example, a set of one or more infrared LEDs (Light Emitting Diodes). The illumination source 106 is configured to illuminate an illuminated area 110 located in front of the display screen 102.
The illuminated area 110 intersects the field-of-view 108 to define an intersection region 112 in front of the display screen 102. The illumination source 106 may be controlled to illuminate while the camera 104 is capturing one or more images (e.g., the illumination source 106 may be turned on before the camera 104 captures images). The captured camera images may be analyzed (e.g., by one or more processors) to detect one or more illuminated objects within the intersection region 112. An object may be, for example, a hand, finger, other body part, a stylus, pointer, remote control device, game controller, etc. Objects within the field-of-view 108 but outside of the intersection region 112 may be ignored. That is, the camera 104 and the illumination source 106 are positioned such that control objects such as a user's hand are included in the intersection region 112 but other objects such as a user's head or torso are not included in the intersection region 112 (even if the other objects are included in the field-of-view 108 and/or the illuminated area 110).
For example, as shown in FIG. 1, a user 114 is standing in front of the display screen 102. The user 114 extends a hand 116 so that it is positioned within the intersection region 112. The illumination source 106 illuminates the hand 116 and may illuminate other objects, such as the head of the user 114. The camera 104 may capture one or more images while the hand 116 is within the intersection region 112. A processor may analyze the camera images for illuminated objects. Because the hand 116 is within the intersection region 112, the processor detects the hand 116 as an illuminated object.
In addition, by analyzing the camera images for illuminated objects, the processor is able to ignore objects that are outside of the intersection region 112 and that are unrelated to the input being provided by the user 114 with the user's hand 116. For instance, the illumination source 106 illuminates a portion of the user's head and the camera 104 captures images of the arm and torso of the user 114. However, because the portion of the user's head is not within the field of view of the camera 104 and the arm and torso of the user 114 are not illuminated by the illumination source 106, the processor ignores these objects when attempting to detect an object providing user input.
As will be described in more detail below, objects may be detected, for example, by comparing pixels of the camera images to a brightness threshold to produce a binary image and by clustering pixels within the binary image into one or more blobs based on whether a tracking mode is a single hand tracking mode, a two hand adjacent tracking mode, or a two hand stacked tracking mode. One or more object positions may be determined based on the results of the clustering.
A user input may be determined based on the detection of one or more objects within the intersection region 112. For example, the position of a detected object may be mapped to a user interface of an application displayed on the display screen 102. As another example, movement of the object in a horizontal and/or vertical direction may be detected. In addition, a gesture may be detected based on one or more determined positions of the detected object. A “swipe” gesture, a “clap” gesture, a “pounding” gesture, a “chopping” gesture, or a “grab” gesture may be detected, to name a few examples.
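A gesture such as the "swipe" mentioned above can be classified from a short sequence of tracked object positions. The following is a minimal illustrative sketch, not the patent's implementation; the function name, the normalized coordinate convention, and the travel threshold are all assumptions chosen for the example.

```python
def detect_swipe(x_positions, min_travel=0.3):
    """Classify a horizontal swipe from a sequence of normalized x
    positions (0..1) of the tracked object. Returns 'swipe_right',
    'swipe_left', or None. The travel threshold is illustrative."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    # Require a consistent direction of motion so that jitter around a
    # fixed point is not misclassified as a swipe.
    steps = [b - a for a, b in zip(x_positions, x_positions[1:])]
    if travel > min_travel and all(s >= 0 for s in steps):
        return 'swipe_right'
    if travel < -min_travel and all(s <= 0 for s in steps):
        return 'swipe_left'
    return None
```

A photo viewing application could, for instance, advance to the next picture when `detect_swipe` returns `'swipe_right'` over the positions captured in the last several frames.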
As used herein throughout, a "gesture" is intended to refer to a form of non-verbal communication made with a whole or part of a human body or multiple human bodies, and is contrasted with verbal communication such as speech. For instance, a gesture may be defined by a movement, change or transformation between a first position, pose, or expression and a second pose, position or expression. Example gestures include, for instance, an "air quote" gesture, a bowing gesture, a curtsey, a cheek-kiss, a finger or hand motion, a genuflection, a head bobble or movement, a high-five, a raised fist, a salute, a swiping or wave motion, a thumbs-up motion, or a finger pointing gesture.
Accordingly, from a sequence of images, a gesture may be derived that defines an idea, opinion, emotion, communication, command, demonstration or expression of the user. For instance, the user's gesture may be a single or multiple finger gesture; a single hand gesture; a single hand and arm gesture; a single hand and arm, and body gesture; a bimanual gesture; or a transformation of any other expressive body state.
For brevity, the body part or parts used to perform relevant gestures are generally referred to as an “object.” For instance, the user may express a command using their entire body or with other physical objects, in which case their entire body or the other physical objects may be the object. A user may more subtly express a command by wiggling a finger, in which case the finger may be the object. The user's gesture in a single image or between two images may be expressive of an enabling or “engagement” gesture. An object may also be a physical device, such as an infrared finger light, a retro-reflector, or a remote control.
An application displayed on the display screen 102 may be controlled based on the determined user input. For example, if a swipe gesture is detected, a next picture may be displayed in a photo viewing application. As another example, if a “pounding” gesture is detected, a drum noise may be played based on a detected object position matching a corresponding position of a graphic of a drum displayed on the user interface displayed on the display screen 102. As yet another example, a television channel may be changed based on a detected change in vertical position (e.g., up, down) of the detected object.
FIG. 2A illustrates a front view of a tracking system 200. The system 200 includes a camera 202 positioned at a top side of a display screen 204 and an illumination source 206 positioned at a bottom side of the display screen 204. The lens of the camera 202 may be positioned a particular distance (e.g., five centimeters, ten centimeters) above the top side of the display screen 204. The illumination source 206 may include a row of multiple illuminators (e.g., multiple infrared LEDs) and may be positioned a particular distance (e.g., five centimeters, ten centimeters) below the bottom side of the display screen 204. The display screen 204 may be, for example, a twenty-one inch computer monitor (e.g., the distance from one corner of the display screen 204 to the opposite corner may be twenty-one inches).
FIG. 2B illustrates a side view of a tracking system 215. The system 215 includes a camera 220 positioned at a top side of a display screen 222. The camera 220 is angled downward relative to the display screen 222. The camera 220 may be positioned, for example, at a thirty degree angle. An illumination source 224 is positioned at the bottom side of the display screen 222 and is angled upward towards the display screen 222. The illumination source 224 may be positioned, for example, at a thirty degree angle. The positions (e.g., distances from the display screen 222, angles) of the illumination source 224 and the camera 220 may be configured such that control objects (e.g., hand, pointer) used by typical users are captured within an intersection region defined by the intersection of the field-of-view of the camera 220 and an illuminated area illuminated by the illumination source 224 and so that objects not intended as control objects are not captured in the intersection region.
The angle of the camera 220 and/or the angle of the illumination source 224 may affect the size and location of an intersection region defined by the intersection of the field-of-view of the camera 220 and an illuminated area illuminated by the illumination source 224. Additionally, the size and location of the intersection region may affect detection of objects in the intersection region. For example, if the angle of the camera 220 is configured so that the camera 220 is facing relatively straight out (e.g., at a small angle relative to a horizontal plane), an object may not be detected (e.g., may not be in the field-of-view of the camera 220) if the object is close to the display screen 222 and/or near the bottom of the display screen 222. Additionally, in such a configuration it may be difficult to detect an object such as a user's hand because the hand may be in front of other objects such as the user's head or torso in the captured camera image, making it difficult to distinguish the hand.
As another example, if the angle of the camera 220 is about forty-five degrees to a display screen, it may become difficult to distinguish between a user's in-and-out movements and a user's up-and-down movements (e.g., both movements may appear similar in a sequence of captured camera images). In addition, if the angle of the camera 220 is configured so that the camera 220 is facing relatively straight down (e.g., at a small angle relative to a vertical plane), a user's up-and-down movements may be difficult to track. An example configuration of camera angle and illumination source angle creates an intersection region that is close enough to the display screen 222 so that a user's outstretched hand or finger may be detected and so that the user's arm, torso, or head are not detected (e.g., such as the intersection region 112 shown in FIG. 1).
In some implementations, a "sharp edge" (e.g., a steep change in intensity over a short distance) to the illuminated area is created by the illumination source 224 in order to reduce (e.g., minimize) the area where an object may be detected unreliably because it is illuminated by weak or extraneous infrared light from the illumination source 224. To create a sharp edge to the illuminated area, LEDs with a narrow angle lens may be used, such as LEDs with a small angle of half intensity (e.g., where the angle of half intensity defines how far from center an LED drops to half of its maximum intensity). An LED with a narrow-angled lens produces a relatively steep drop-off of light intensity as compared to LEDs with wider-angled lenses. Using an LED with a narrow-angled lens forms a narrower but more sharply defined illumination region as compared with using an LED with a wider-angled lens.
An LED lens that produces an oval illumination pattern may be used, so that the angle of half intensity of the lens in one dimension is narrow, and the angle of half intensity in the other dimension for the lens is wider. Using lenses that produce oval illumination patterns may allow the LEDs to be spaced further apart than if other types of LEDs are used.
Multiple rows of LEDs may be used. For example, the illumination source 224 may be a first row of LEDs and an illumination source 226 may be a second row of LEDs. The illumination source 224 may be a row of narrow-angled LEDs which produce a sharp illuminated edge and the illumination source 226 may be a row of wider-angled LEDs which illuminate an area between the illuminated area created by the illumination source 224 and the display screen 222.
A sharp illuminated edge may also be created by using a channel, shield, or mirror which blocks emitted light on one side, thereby producing an illumination region where the edge nearest the channel, shield, or mirror is sharp while the edge near the surface of the display screen 222 is a softer illuminated edge. As another example, a sharp edge may be created by using a custom asymmetric lens.
FIG. 3 illustrates an example of a tracking system 300. The system 300 includes a display screen 301, a storage medium 302, a camera 304, a processor 305, and an illumination source 309. The system 300 may be included in or used in conjunction with a digital picture frame, a television, a monitor, a product display unit, or any type of media system.
The display screen 301 renders a visual display image. For example, the display screen 301 may be a monitor display, a television display, a liquid crystal display (LCD), a plasma display device, a projector with a projector screen, an auto-stereoscopic display, a cathode ray tube (CRT) display, a digital light processing (DLP) display, a digital picture frame display, or any other type of display device configured to render a visual display image. The display screen 301 may include one or more display devices. The display screen 301 may display images associated with an application. For instance, the display screen 301 may render display images generated by an application (e.g., a photo viewing application). The display images generated by the application may include a user interface with interface controls.
The camera 304 is a device that captures images. For example, the camera 304 may be a digital camera, a digital video camera, or any other type of device that captures images. In some implementations, the camera 304 may be a single camera and the system 300 may include only the single camera. In other implementations, multiple cameras may be used. The camera 304 may capture images of an object interacting with an interface displayed on the display screen 301. For instance, the camera 304 may capture images of a user or person physically interacting (e.g., with a finger or hand) with an interface displayed on the display screen 301. The camera 304 may be any type of image sensor and may be a line scan sensor.
The illumination source 309 is a device that provides a light source. For example, the illumination source 309 may be a flash device, an incandescent light bulb, a fluorescent light bulb, an LED, a halogen light source, a neon light source, a xenon light source, an infrared light source, or any other type of device configured to illuminate an object being imaged by the camera 304. A flash device may, over one or more cycles, project electromagnetic radiation and then extinguish the projected electromagnetic radiation.
The illumination source 309 may include one or more illuminators. The illumination source 309 may generate light to assist in capturing a high quality image of an object being captured by the camera 304. In some implementations, the illumination source 309 may be used in particular situations. For instance, the illumination source 309 may be used at nighttime or in dark rooms. The illumination source 309 may be positioned to define an intersection region within the field of view of the camera 304. Defining an intersection region using the illumination source 309 may increase the accuracy of object detection with a single camera and also may increase the number of control objects that may be detected by a single camera. Using a single camera may help reduce costs of the system and enable gesture-based input control to be realized in less expensive devices.
The storage medium 302 stores and records information or data, and may be an optical storage medium, magnetic storage medium, flash memory, or any other storage medium type. The storage medium 302 includes a vocabulary 310 and a gesture recognition module 314. The vocabulary 310 includes information regarding gestures that the system 300 may recognize. For example, the vocabulary 310 may include gesture definitions which describe, for each recognized gesture, a set of movements included in a gesture. The gesture recognition module 314 receives captured images from the camera 304, maps a position of a detected object to an interface displayed on the display screen 301, and detects a gesture based on comparing positions of the detected object within a series of images to gesture definitions stored in the vocabulary 310 to determine whether a recognizable gesture has been performed.
The processor 305 may accept input from a user interface displayed on the display screen 301 and may analyze images captured by the camera 304. The processor 305 may execute applications and operating systems being run on the system 300. The system 300 may include multiple processors (or other control circuitry) and may include memory (or other computer-readable storage media) that stores application programs, operating systems, user input programs, and data used by the application programs, operating systems, and user input programs.
In some implementations, the system 300 does not include the display screen 301. For example, the system 300 may be configured to detect objects in an intersection region where the intersection region is located in front of a different physical object such as a door, elevator, machine, radio, media player, or other object. In some examples, the system 300 is located in front of an area of space, such as a doorway or entryway.
FIG. 4 illustrates a process 400 for controlling an application. The operations of the process 400 are described generally as being performed by the system 300. The operations of the process 400 may be performed exclusively by the system 300, may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system. In some implementations, operations of the process 400 may be performed by one or more processors included in one or more electronic devices.
The system 300 captures an image from a camera (402). For example, in reference to FIG. 1, an image may be captured by the camera 104. In some implementations, the system 300 may control the illumination source 106 to illuminate while the camera 104 is capturing an image (e.g., the illumination source 106 may be turned on prior to the capturing of images by the camera 104).
In some implementations, the illumination source is controlled to illuminate in sequence with images captured by the camera in an alternating pattern such that a first image captured by the camera is captured when the illumination source is illuminated and a second image captured by the camera is captured when the illumination source is not illuminated. The captured camera image may include an intersection region which is defined by the intersection of the field-of-view of the camera and an area illuminated by an illumination source.
In some implementations, the intersection region is located in front of a display screen. In other implementations, the intersection region is located in front of another type of object, such as a radio, elevator, painting, manufacturing device, automatic teller machine, light switch, vending machine, beverage dispenser, or any other physical object. In some implementations, the intersection region is located in front of an area of space, such as a doorway.
The system 300 analyzes the image captured by the camera to detect an object within the intersection region (404). For example, in reference to FIG. 1, the hand 116 located within the intersection region 112 is detected while the head, arm, or torso of the user 114 which are located outside of the intersection region 112 are not detected. The system 300 may ignore objects that are within the camera's field of view and outside of the intersection region by analyzing the image for illuminated objects. Because any objects within the camera's field of view and outside of the intersection region are not illuminated, the system 300 ignores (e.g., does not detect) these objects.
In implementations where alternating camera images are captured while an illumination source is turned on, a camera image captured while the illumination source is turned off may be subtracted from a camera image captured while the illumination source was turned on to produce a resulting image. The resulting image may be analyzed to determine whether one or more objects are illuminated in the camera image captured when the illumination source was turned on. Subtracting the camera image captured when the illumination source was turned off may remove ambient light which was present in both camera images.
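The subtraction of the off-frame from the on-frame described above can be sketched briefly. This is an illustrative sketch using NumPy, not the patent's implementation; the function name and the use of 8-bit grayscale frames are assumptions for the example.

```python
import numpy as np

def subtract_ambient(lit_frame, unlit_frame):
    """Subtract a frame captured with the illumination source off from a
    frame captured with it on, removing ambient light present in both.
    Frames are 8-bit grayscale arrays of the same shape."""
    lit = lit_frame.astype(np.int16)
    unlit = unlit_frame.astype(np.int16)
    # Clamp negative differences to zero so that only light added by the
    # illumination source remains in the resulting image.
    return np.clip(lit - unlit, 0, 255).astype(np.uint8)

# Example: a pixel lit only by the illumination source survives the
# subtraction, while ambient light common to both frames is removed.
lit = np.array([[40, 200], [40, 40]], dtype=np.uint8)
unlit = np.array([[40, 50], [40, 40]], dtype=np.uint8)
result = subtract_ambient(lit, unlit)
```

The resulting image can then be compared against a brightness threshold to find objects that were actually illuminated by the source.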
In some implementations, the system 300 detects an object within an image by analyzing multiple images taken over time to detect moving objects. The system 300 may use an optical flow process or examine a motion history image to detect objects in motion. In these implementations, the system 300 tracks the objects in motion and ignores static objects. For example, in a situation in which a user's hand and the user's face are present within an intersection region and the user is moving his or her hand while keeping his or her face stationary, the system 300 detects and tracks the moving hand as an object of interest, but does not track the user's face as an object of interest.
In some examples, the system 300 detects an object within an image by analyzing shapes within the image. In these examples, the system 300 may attempt to detect a hand within the intersection region of the image. In attempting to detect a hand, the system 300 may compare shapes of illuminated objects within the image to a shape of a typical hand. When the system determines that a shape of an illuminated object matches the shape of the typical hand, the system 300 detects and tracks the object as an object of interest. When the system determines that a shape of an illuminated object does not match the shape of the typical hand, the system 300 does not track the object as an object of interest. Analyzing a camera image to detect an object within the intersection region is described in more detail below with respect to FIG. 5.
FIG. 5 illustrates a process 500 for analyzing a camera image to detect an object within an intersection region. The operations of the process 500 are described generally as being performed by the system 300. The process 500 may be used in analyzing an image captured by the camera to detect an object within the intersection region referenced above with respect to reference numeral 404. The operations of the process 500 may be performed exclusively by the system 300, may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system. In some implementations, operations of the process 500 may be performed by one or more processors included in one or more electronic devices.
The system 300 compares pixels of the image captured by the camera to a brightness threshold to produce a binary image (502). For example, pixels in the camera image having a brightness value above a threshold may be identified in the binary image with a value of one and pixels having a brightness value below the threshold may be identified in the binary image with a value of zero.
The system 300 groups pixels within the binary image into one or more blobs (504). For example, pixels may be clustered into one or more blobs based on the proximity of the pixels to one another.
The system 300 groups one or more blobs within the binary image into one or more clusters based on a tracking mode (506). For example, blobs may be clustered into one or more clusters based on whether a tracking mode is a single object tracking mode, a two object adjacent tracking mode, or a two object stacked tracking mode.
The system 300 determines a position of one or more objects in the image captured by the camera based on the one or more clusters (508). For example, a position of a user's hand or finger, two hands, a stylus or other pointing device, a game controller, a remote control, or some other object may be determined. Determining the position of one or more objects based on a tracking mode is discussed in more detail below with respect to FIG. 6.
FIG. 6 illustrates a process 600 for determining a position of one or more hands based on a tracking mode. The process 600 may be used in grouping blobs within the binary image into one or more clusters based on a tracking mode referenced above with respect to reference numeral 506 and in determining a position of one or more objects in the image captured by the camera based on the one or more clusters referenced above with respect to reference numeral 508. The process 600 may be used to detect objects other than hands, such as fingers, pointing devices, etc.
The operations of the process 600 are described generally as being performed by the system 300. The operations of the process 600 may be performed exclusively by the system 300, may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system. In some implementations, operations of the process 600 may be performed by one or more processors included in one or more electronic devices.
The system 300 determines a tracking mode from among at least a single hand tracking mode, a two hand adjacent tracking mode, and a two hand stacked tracking mode (602). For example, a current tracking mode setting may be retrieved from the storage medium 302 referenced above with respect to FIG. 3. In this example, the current tracking mode setting may be set based on user input (e.g., input setting the tracking mode as one of a single hand tracking mode, a two hand adjacent tracking mode, and a two hand stacked tracking mode).
In another example, the current tracking mode setting may be set based on which application is being controlled by the system 300. In this example, the system 300 may provide multiple applications (e.g., multiple games) that are controlled using different types of user input. The current tracking mode, therefore, is set based on the application being used and the type of input expected by that application.
In a single hand tracking mode, the system 300 may detect the position of a single hand (or other object). In a two hand adjacent tracking mode, the system 300 may detect the position of two hands, where the two hands are held side by side in a horizontal orientation, with a gap between the two hands. In a two hand stacked tracking mode, the system 300 may detect the position of two hands, where the two hands are stacked vertically, one on top of the other, with a gap between the two hands.
In response to the determined tracking mode being the single hand tracking mode, the system 300 clusters blobs within the binary image into a single cluster and determines a position of the single hand based on the single cluster (604). For example, the system 300 may cluster blobs within a binary image which was created based on performing a threshold brightness test as discussed above with respect to reference numeral 502. Blobs may be clustered in the binary image using a k-means process, with a desired cluster count equal to one.
The system 300 may determine a position of the single hand based on the single cluster by computing a centroid of one or more blobs within the image. For example, when the single cluster includes a single blob, the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the single hand.
In another example, when the single cluster includes multiple blobs, the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the single hand. In this example, the system 300 determines a weighting for the centroids based on a size of the corresponding blob and applies the determined weighting in combining the centroids to a position.
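The size-weighted combination of blob centroids described above can be expressed compactly. This is an illustrative sketch under the assumption that blob size (pixel count) is used directly as the weight; the patent only states that weighting is based on blob size, so the exact weighting function is an assumption.

```python
def blob_centroid(blob):
    """Centroid of one blob given as a list of (y, x) pixel coordinates."""
    n = len(blob)
    return (sum(y for y, _ in blob) / n, sum(x for _, x in blob) / n)

def cluster_position(blobs):
    """Position of a cluster: the centroids of its blobs combined in a
    weighted average, each centroid weighted by its blob's pixel count."""
    total = sum(len(b) for b in blobs)
    y = sum(blob_centroid(b)[0] * len(b) for b in blobs) / total
    x = sum(blob_centroid(b)[1] * len(b) for b in blobs) / total
    return (y, x)

# Example: a two-pixel blob pulls the combined position twice as hard
# as a one-pixel blob.
position = cluster_position([[(0, 0), (0, 2)], [(4, 4)]])
```

When the cluster contains a single blob, this reduces to the centroid of that blob, matching the single-blob case described above.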
In response to the determined tracking mode being the two hand adjacent tracking mode, the system 300 clusters blobs in a horizontal direction from an outer edge of first and second sides of the image to a center of the image to identify a first cluster at the first side of the image and a second cluster at the second side of the image, determines a position of a first hand based on the first cluster, and determines a position of a second hand based on the second cluster (606). Blobs may be clustered, for example, using a k-means process with a desired cluster count equal to two. In some scenarios, such as if one hand is within the intersection region or if the user's two hands are placed close together, one blob may be detected and a centroid of the one blob may be computed as a position of a single detected hand. In these scenarios, the system 300 may indicate that, even though a two hand tracking mode is set, only a single hand was found.
In situations in which only a first blob and a second blob are present within the image, the system 300 computes a first centroid of the first blob as a position of the first hand and computes a second centroid of the second blob as a position of the second hand. In situations in which more than two blobs are present within the image, the blobs may be clustered into a first cluster and a second cluster, for example, using a k-means process with a desired cluster count equal to two.
When the first cluster includes a single blob, the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the first hand. When the first cluster includes multiple blobs, the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the first hand. In this example, the system 300 determines a weighting for each centroid based on the size of the corresponding blob and applies the determined weighting in combining the centroids into a single position.
When the second cluster includes a single blob, the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the second hand. When the second cluster includes multiple blobs, the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the second hand. In this example, the system 300 determines a weighting for each centroid based on the size of the corresponding blob and applies the determined weighting in combining the centroids into a single position.
In a two hand adjacent tracking mode, proximity of blobs in the horizontal direction may be weighted higher than proximity of blobs in a vertical direction in clustering blobs into clusters. A distance function which weights proximity of blobs in the horizontal direction higher than proximity of blobs in the vertical direction may be provided to a k-means clustering process. For example, FIG. 7 illustrates a binary image map 700 which includes blob centroids 702-706 at coordinates (2,2), (3,20), and (7,2), respectively. As indicated by a dashed oval 708, the blob centroids 702 and 704 may be clustered together in a two hand adjacent tracking mode. The blob centroid 702 may be clustered with the blob centroid 704 rather than with the blob centroid 706 despite the fact that the distance between the blob centroid 702 and the blob centroid 706 is less than the distance between the blob centroid 702 and the blob centroid 704 and despite the fact that the blob centroid 702 and the blob centroid 706 share the same Y coordinate. The blob centroid 702 may be clustered with the blob centroid 704 rather than with the blob centroid 706 because the difference in the horizontal direction between the blob centroid 702 and the blob centroid 704 (i.e., one pixel) is less than the difference in the horizontal direction between the blob centroid 702 and the blob centroid 706 (i.e., five pixels).
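A minimal sketch of the axis-weighted clustering follows, using the FIG. 7 centroids. The 5:1 horizontal weighting is illustrative, and a simple farthest-pair seeding with one assignment pass stands in for a full k-means iteration (the patent only specifies a weighted distance function supplied to k-means):

```python
import numpy as np
from itertools import combinations

def axis_weighted_distance(a, b, x_weight, y_weight):
    # Penalizing horizontal separation (dx) more strongly than vertical
    # separation (dy) makes blobs that are close in x cluster together
    # even when they are far apart in y.
    return np.hypot(x_weight * (a[0] - b[0]), y_weight * (a[1] - b[1]))

def split_into_two(points, x_weight=5.0, y_weight=1.0):
    """Split centroids into two clusters: seed with the mutually farthest
    pair under the weighted distance, then assign each point to the nearer
    seed (equivalent to one assignment pass of k-means with k=2)."""
    s1, s2 = max(combinations(points, 2),
                 key=lambda p: axis_weighted_distance(p[0], p[1],
                                                      x_weight, y_weight))
    clusters = ([], [])
    for p in points:
        near_first = (axis_weighted_distance(p, s1, x_weight, y_weight)
                      <= axis_weighted_distance(p, s2, x_weight, y_weight))
        clusters[0 if near_first else 1].append(p)
    return clusters

# FIG. 7 centroids: (2,2) groups with (3,20), not with (7,2),
# because their horizontal separation is only one pixel.
a, b = split_into_two([(2, 2), (3, 20), (7, 2)])
# a == [(2, 2), (3, 20)]; b == [(7, 2)]
```

For the two hand stacked tracking mode described below, the same function applies with the weights swapped (e.g., `x_weight=1.0, y_weight=5.0`), which groups the FIG. 8 centroids (2,2) and (20,3) together.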
Returning to FIG. 6, in response to the determined tracking mode being the two hand stacked tracking mode, the system 300 clusters blobs in a vertical direction from an outer edge of a top and a bottom of the image to a center of the image to identify a first cluster at the top of the image and a second cluster at the bottom of the image, determines a position of a first hand based on the first cluster, and determines a position of a second hand based on the second cluster (608). Similar to the two hand adjacent tracking mode, blobs may be clustered using a k-means process with a desired cluster count equal to two and, in some scenarios, such as if only one hand is within the intersection region or if the user's two hands are placed close together, one blob may be detected and a centroid of the one blob may be computed as a position of a single detected hand.
In situations in which only a first blob and a second blob are present within the image, the system 300 computes a first centroid of the first blob as a position of the first hand and computes a second centroid of the second blob as a position of the second hand. In situations in which more than two blobs are present within the image, the blobs may be clustered into a first cluster and a second cluster, for example, using a k-means process with a desired cluster count equal to two.
When the first cluster includes a single blob, the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the first hand. When the first cluster includes multiple blobs, the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the first hand. In this example, the system 300 determines a weighting for each centroid based on the size of the corresponding blob and applies the determined weighting in combining the centroids into a single position.
When the second cluster includes a single blob, the system 300 computes a centroid of the single blob and uses the computed centroid as the position of the second hand. When the second cluster includes multiple blobs, the system 300 computes a centroid of each of the multiple blobs and computes a weighted combination of the computed centroids as the position of the second hand. In this example, the system 300 determines a weighting for each centroid based on the size of the corresponding blob and applies the determined weighting in combining the centroids into a single position.
In a two hand stacked tracking mode, proximity of blobs in the vertical direction may be weighted higher than proximity of blobs in a horizontal direction in clustering blobs into clusters. A distance function which weights proximity of blobs in the vertical direction higher than proximity of blobs in the horizontal direction may be provided to a k-means clustering process. For example, FIG. 8 illustrates a binary image map 800 which includes blob centroids 802-806 at coordinates (2,2), (20,3), and (2,7), respectively. As indicated by a dashed oval 808, the blob centroids 802 and 804 may be clustered together in a two hand stacked tracking mode. The blob centroid 802 may be clustered with the blob centroid 804 rather than with the blob centroid 806 despite the fact that the distance between the blob centroid 802 and the blob centroid 806 is less than the distance between the blob centroid 802 and the blob centroid 804 and despite the fact that the blob centroid 802 and the blob centroid 806 share the same X coordinate. The blob centroid 802 may be clustered with the blob centroid 804 rather than with the blob centroid 806 because the difference in the vertical direction between the blob centroid 802 and the blob centroid 804 (i.e., one pixel) is less than the difference in the vertical direction between the blob centroid 802 and the blob centroid 806 (i.e., five pixels).
Returning to FIG. 4, the system 300 determines user input based on the object detected within the intersection region (406). For example, a gesture may be detected based on positions of the object detected within a series of images and a user input may be determined based on the recognized gesture. For instance, a "swipe" gesture may be detected and a "change station" user input may be determined based on the recognized swipe gesture. As another example, the position of the detected object may be mapped to a user interface control displayed by an application on a display screen. Determining user input for an application user interface is discussed in more detail below with respect to FIG. 9.
For example, FIG. 9 illustrates a process 900 for determining user input based on an object detected within an intersection region. The operations of the process 900 are described generally as being performed by the system 300. The process 900 may be used in determining user input based on the object detected within the intersection region referenced above with respect to reference numeral 406. The operations of the process 900 may be performed exclusively by the system 300, may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system. In some implementations, operations of the process 900 may be performed by one or more processors included in one or more electronic devices.
The system 300 maps a position of a detected object to an interface displayed by the application being controlled (902). For example, the position of the detected object in a binary image may be mapped to a user interface displayed on a display screen. The position of the detected object may be mapped to a user interface control or graphic displayed on the user interface. For some user interface controls, such as a slider control, the position of the detected object may be mapped to a particular location on the user interface control. As another example, the position of the detected object may be mapped to the position of a cursor displayed on the user interface.
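One common way to realize the mapping described above is a normalized scale from image coordinates to screen coordinates. The function name, the mirroring option (which flips x so the cursor tracks the hand like a mirror), and the absolute-mapping choice are assumptions for illustration, not details from the patent:

```python
def map_to_screen(pos, image_size, screen_size, mirror=True):
    """Map a detected (x, y) position in a binary image to display-screen
    pixels by normalizing against the image size and scaling to the screen.
    Mirroring is often desirable for camera-facing interaction."""
    x_norm = pos[0] / image_size[0]
    y_norm = pos[1] / image_size[1]
    if mirror:
        x_norm = 1.0 - x_norm
    return (x_norm * screen_size[0], y_norm * screen_size[1])
```

For example, the center-left of a 640x480 camera image maps to the center-left of a 1920x1080 user interface (or center-right when mirrored).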
The system 300 detects a gesture based on positions of a detected object within a series of images (904). For example, if the position of the detected object is mapped to a cursor position while in a single object tracking mode, a movement gesture may be detected within the series of images to detect movement of the object from a first position to a second position. As another example, in a single object tracking mode, a swipe gesture may be detected if multiple detected positions of the object within a series of images indicate a fast side-to-side horizontal movement of the object.
Other gestures may be detected if a tracking mode is, for example, a two hand adjacent tracking mode. For example, a “chop” gesture may be detected if positions of two objects within a series of images indicate that one detected object remains stationary and the other object moves quickly up and down in a vertical direction. As another example, a “drumming” gesture may be detected if positions of two objects within a series of images indicate that both objects move up and down. As yet another example, a “grab”, or “move hands together” gesture may be detected if positions of two objects within a series of images indicate that both objects start side-by-side a particular distance apart and then move inward towards each other resulting in the two objects being close to one another. A “move hands apart” gesture may be detected if the positions of the two objects indicate that the objects start side-by-side and then move outward away from each other, resulting in the two objects being farther apart.
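The swipe detection described above can be sketched as a threshold test on displacement across a short series of positions. All thresholds (minimum horizontal travel, vertical tolerance, frame window) and the use of normalized 0..1 coordinates are illustrative assumptions; the two-hand gestures would be detected analogously from two position tracks:

```python
def detect_swipe(positions, min_dx=0.5, max_dy=0.2, max_frames=10):
    """Detect a fast side-to-side horizontal swipe from a hand's positions
    over a series of images. Requires large horizontal travel within a
    small number of frames and little vertical drift."""
    if len(positions) < 2 or len(positions) > max_frames:
        return None  # too few samples, or too slow to count as "fast"
    dx = positions[-1][0] - positions[0][0]
    dy = abs(positions[-1][1] - positions[0][1])
    if abs(dx) >= min_dx and dy <= max_dy:
        return 'swipe_right' if dx > 0 else 'swipe_left'
    return None
```

A track that crosses most of the image width in a few frames registers as a swipe; a small or slow movement returns nothing.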
The system 300 determines user input based on the mapped position of the detected object and/or the detected gesture (906). For instance, in the example where the object is mapped to a cursor position and where a movement gesture is detected, a cursor movement user input may be determined. In the example where the mapped position of the detected object corresponds to an element displayed in the user interface displayed by the application being controlled, a command to select the user interface element may be determined.
Regarding detected gestures, in the example where a hand swipe gesture is detected, a “next photo” user input may be detected. In the example where a “chop” gesture is detected, a user input for a game may be determined which indicates a “swing” of a hammer or other object for a character within the game. In the example where “move hands together” or “move hands apart” gestures are detected, a decrease volume or increase volume user input may be determined, respectively, or a zoom in or a zoom out user input may be determined, respectively.
Returning to FIG. 4, the system 300 controls an application based on the determined user input (408). For instance, FIG. 10 illustrates an example where movement of a cursor is controlled. In the example of FIG. 10, an object is mapped to a cursor position, a movement gesture is detected, a cursor movement user input is determined, and then movement of a cursor is controlled. For example, a hand 1002 is detected in a camera image captured by a camera 1004 at a first position and the position of the hand 1002 is mapped to a first cursor position 1005 on a user interface 1006 displayed on a display screen 1008. Movement of the hand 1002 is detected within a series of camera images captured by the camera 1004 and a second position of the hand 1002 is determined, as indicated by a hand 1010. A cursor movement user input is determined based on the detected movement gesture, and the position of the cursor is moved from the first cursor position 1005 to a second cursor position 1012 in a direction and magnitude corresponding to the difference in the detected positions of the hand 1002 and the hand 1010.
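The direction-and-magnitude cursor update in the example above can be sketched as a relative mapping from hand displacement to cursor displacement; the gain parameter and the coordinate conventions are assumptions for illustration:

```python
def move_cursor(cursor, hand_prev, hand_curr, gain=1.0):
    """Move the cursor by the hand's displacement between two images,
    scaled by a gain factor, so cursor motion matches the direction and
    magnitude of the detected hand movement."""
    dx = (hand_curr[0] - hand_prev[0]) * gain
    dy = (hand_curr[1] - hand_prev[1]) * gain
    return (cursor[0] + dx, cursor[1] + dy)
```

A gain above 1.0 lets small hand movements cover a large display; a gain below 1.0 gives finer control.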
FIG. 11 illustrates an example where a photo viewing application is controlled to display a different photo. In the example of FIG. 11, a hand-swipe gesture is detected, a “next-photo” user input is determined, and a displayed photo is replaced with a new photo. For example, a hand 1102 is detected in a camera image captured by a camera 1104. The user moves their hand 1102 to the left in a swiping motion, as indicated by a hand 1105. Movement of the hand 1102 is detected within a series of camera images captured by the camera 1104 and a left swipe gesture is determined. A “next photo” user input is determined based on the detected left swipe gesture. A photo 1106 is displayed on a user interface 1108 displayed on a display screen 1110. Based on the determined “next photo” user input, the photo 1106 is removed from the user interface 1108 and a different, next photo 1112 is displayed in place of the photo 1106 on the user interface 1108.
FIG. 12 illustrates an example where a game is controlled. For example, a left hand 1202 and a right hand 1204 are detected in one or more images captured by a camera 1206. The positions of the hands 1202 and 1204 are mapped to cursor positions 1210 and 1212, respectively, on a user interface 1214 displayed on a display screen 1216. The user makes a “chopping” gesture with their left hand 1202, as indicated by a hand 1218. Movement of the hand 1202 is detected within a series of images captured by the camera 1206 and a “chop” gesture is determined. A “pound character” user input is determined based on the chop gesture. A game animation is controlled based on the “pound character” user input.
For example, a state of an animated character graphic 1220 may be determined corresponding to the time of the detected chop gesture (e.g., the character graphic 1220 may be alternating between an “in the hole” state and an “out of the hole” state). If the character graphic 1220 is in an “out of the hole” state at the time of the detected chop gesture, it may be determined that the character associated with the character graphic 1220 has been “hit”. The game may be controlled accordingly, such as to change the character graphic 1220 to a different graphic or to otherwise change the user interface 1214 (e.g., the character may “yell”, make a face, get a “bump on the head”, disappear, appear “knocked out”, etc.), and/or a score may be incremented, or some other indication of success may be displayed.
FIG. 13 illustrates an example where volume of a media player is controlled. For example, a left hand 1302 and a right hand 1304 are detected in one or more images captured by a camera 1306. The user makes a "move hands together" gesture by moving the hands 1302 and 1304 inward towards each other. The change in positions of the hands 1302 and 1304 is detected within a series of images captured by the camera 1306 and a "move hands together" gesture is detected. A decrease-volume user input command is determined based on the detected gesture. Volume of a media player application is decreased by a magnitude corresponding to the amount of horizontal movement of the hands 1302 and 1304 (e.g., a larger inward movement results in a larger decrease in volume). A volume indicator control 1308 on a user interface 1310 displayed on a display screen 1312 is updated accordingly to indicate the decreased volume. As another example, if the user makes a "move hands apart" gesture using the hands 1302 and 1304, the gesture may be detected, an increase-volume user input command may be determined, the volume of the media player application may be increased by a magnitude corresponding to the amount of outward movement of the hands 1302 and 1304, and the volume indicator control 1308 may be updated accordingly.
An application or system without a corresponding display screen may be controlled based on the determined user input. For example, the user input may be a “change station” user input determined based on a recognized swipe gesture performed in front of a car radio player and the car radio player may be controlled to change to a next station in a list of defined stations. As another example, the user input may be a “summon elevator” user input determined based on an object (e.g., hand) detected in front of an elevator door, and an elevator system may be controlled to transfer an elevator from another floor to the floor where the elevator door is located. As yet another example, the user input may be an “open door” user input based on a detected object (e.g., person) in front of a doorway, and a door may be opened in response to the user input.
FIG. 14 illustrates a process 1400 for determining a position of an object. The operations of the process 1400 are described generally as being performed by the system 300. The operations of the process 1400 may be performed exclusively by the system 300, may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system. In some implementations, operations of the process 1400 may be performed by one or more processors included in one or more electronic devices.
The system 300 controls multiple illumination sources to illuminate in sequence with images captured by a camera in an alternating pattern (1402). For example, multiple illumination sources may be positioned at an opposite side of a display screen from a camera. Each illumination source may be positioned at a different angle to illuminate a different illuminated area in front of the display screen. For example, FIGS. 15A-C illustrate various illumination source configurations. FIG. 15A illustrates a system 1510 in which an illumination source 1512 is positioned to produce an illuminated area 1514 in front of a display screen 1516. An intersection region 1518 is formed by the intersection of the illuminated area 1514 and a wide-angle field-of-view 1520 of a camera 1522. Most of the area of the intersection region 1518 is located near the top of the display screen 1516.
FIG. 15B illustrates a system 1530 in which an illumination source 1532 is positioned to produce an illuminated area 1534 angled further away from a display screen 1536 (e.g., as compared to the distance between the illuminated area 1514 and the display screen 1516). An intersection region 1538 located near the center of the display screen 1536 is formed by the intersection of the illuminated area 1534 and a medium-angle field-of-view 1540 of a camera 1542. As another example, FIG. 15C illustrates a system 1550 in which an illumination source 1552 is positioned to produce an illuminated area 1554 angled even further away from a display screen 1556 (e.g., as compared to the distance between the illuminated area 1514 and the display screen 1516). An intersection region 1558 located near the bottom of the display screen 1556 is formed by the intersection of the illuminated area 1554 and a narrow-angle field-of-view 1560 of a camera 1562.
FIG. 16 illustrates a system 1600 which includes multiple illumination sources. The system 1600 includes illumination sources 1602-1606 producing illuminated areas 1608-1612, respectively. The illumination sources 1602-1606 may correspond, for example, to illumination sources 1512, 1532, and 1552, respectively, and the illuminated areas 1608-1612 may correspond to illuminated areas 1514, 1534, and 1554, respectively (e.g., as described above with respect to FIGS. 15A-C). The illumination sources 1602-1606 may be controlled to illuminate, one at a time, in sequence with images captured by a camera 1614. For example, the illumination source 1602 may be controlled to illuminate the illuminated area 1608 while the camera 1614 captures a first camera image, the illumination source 1604 may be controlled to illuminate the illuminated area 1610 while the camera 1614 captures a second camera image, and the illumination source 1606 may be controlled to illuminate the illuminated area 1612 while the camera captures a third camera image.
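The one-at-a-time sequencing described above, and the subsequent step of identifying which captured image corresponds to which source, reduce to simple round-robin arithmetic; the helper names are illustrative, not from the patent:

```python
def source_for_frame(frame_index, num_sources):
    """With sources lit one per frame in round-robin order, the source
    that was on during a given frame is the frame index modulo the
    number of sources."""
    return frame_index % num_sources

def frames_for_source(source_index, num_sources, total_frames):
    """All frame indices captured while a given source was the only
    one illuminated."""
    return [f for f in range(total_frames)
            if f % num_sources == source_index]
```

With three sources, frames 0, 3, 6, ... belong to the first source, frames 1, 4, 7, ... to the second, and frames 2, 5, 8, ... to the third.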
Returning to FIG. 14, for each of the multiple illumination sources, the system 300 identifies an image captured when the corresponding illumination source was illuminated and the other illumination sources were not (1404). For example and as shown in FIG. 16, a first camera image may be identified which corresponds to when the illumination source 1602 was illuminated, a second camera image may be identified which corresponds to when the illumination source 1604 was illuminated, and a third camera image may be identified which corresponds to when the illumination source 1606 was illuminated.
Returning to FIG. 14, the system 300 analyzes each of the identified images in combination to determine an enhanced position of an object within an intersection region defined by the multiple illumination sources (1406). For instance, in the example of FIG. 16, a finger 1616 of a user 1618 reaching towards the bottom of a display screen 1620 may be detected in a camera image captured when the illumination source 1606 is illuminated. If the user reaches farther forward, closer to the display screen 1620, the finger 1616 may be detected when either the illumination source 1604 or the illumination source 1602 is illuminated.
An approximately rectangular intersection region 1622 is formed by the combination of the intersections of the illuminated areas 1608-1612 with one or more fields-of-view of the camera 1614. That is, the intersections of each of the illuminated areas 1608, 1610, and 1612 with a field-of-view of the camera 1614, taken together, nearly fill the rectangular area 1622. The use of the illuminators 1602-1606 to form the rectangular intersection region 1622 allows an object (e.g., the finger 1616) to be detected at close to a constant distance (e.g., six inches) from the display 1620. Additionally, the use of multiple illuminators 1602-1606 allows for depth detection of the finger 1616 (e.g., distance from the display screen 1620), as well as for detection of a horizontal and vertical position of the finger 1616.
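One way the multiple-illuminator depth detection could work is to assign each source's intersection region a nominal distance from the display screen and report the region(s) in which the object appears. The nominal depths and the averaging rule for objects seen in several regions are assumptions for illustration, not details from the patent:

```python
def estimate_depth(detections, region_depths):
    """Estimate an object's distance from the display screen based on
    which illumination sources' images contained the object.

    detections    -- per-source booleans: was the object seen in the image
                     captured while that source was lit?
    region_depths -- nominal distance of each source's intersection region
                     from the screen (hypothetical calibration values)

    Returns the mean nominal depth of the regions that saw the object,
    or None if no source saw it."""
    seen = [region_depths[i] for i, hit in enumerate(detections) if hit]
    return sum(seen) / len(seen) if seen else None
```

An object lit only by the source whose region sits closest to the screen is reported at that region's depth; an object straddling two regions is placed between them.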
FIG. 17 illustrates a process 1700 for determining a position of one or more objects. The operations of the process 1700 are described generally as being performed by the system 300. The operations of the process 1700 may be performed exclusively by the system 300, may be performed exclusively by another system, or may be performed by a combination of the system 300 and another system. In some implementations, operations of the process 1700 may be performed by one or more processors included in one or more electronic devices.
The system 300 captures a grayscale image (1702). For example, the system 300 controls the illumination source 106 while the camera 104 is capturing grayscale images (e.g., with respect to FIG. 1).
The system 300 compares pixels of the grayscale image to a brightness threshold to produce a corresponding binary image (1704). For example, pixels in the camera image having a brightness value above a threshold may be identified in the binary image with a value of one and pixels having a brightness value below the threshold may be identified in the binary image with a value of zero.
The system 300 groups pixels within the binary image into one or more blobs (1706). For example, pixels may be clustered into one or more blobs based on the proximity of the pixels to one another.
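Steps (1704) and (1706) can be sketched with a brightness threshold followed by flood-fill labeling of connected pixels. The 4-connectivity rule is an assumption, since the patent does not specify how pixel proximity is evaluated:

```python
import numpy as np

def to_binary(gray, threshold):
    """Pixels brighter than the threshold become 1, the rest 0."""
    return (gray > threshold).astype(np.uint8)

def label_blobs(binary):
    """Group 1-pixels into blobs by 4-connected flood fill. Returns a
    label image (0 = background, 1..n = blob ids) and the blob count."""
    labels = np.zeros(binary.shape, dtype=int)
    next_label = 0
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not labels[y, x]:
                next_label += 1          # found an unvisited blob
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and binary[cy, cx] and not labels[cy, cx]):
                        labels[cy, cx] = next_label
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, next_label
```

A production system would more likely use an optimized connected-components routine (e.g., from an image-processing library), but the grouping logic is the same.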
The system 300 references the grayscale image while clustering blobs within the binary images into one or more clusters (1708). Grayscale images may be referenced, for example, if two or more blobs are adjacent to one another in a binary image. Grayscale images might not be referenced if, for example, one blob exists in a binary image.
For example, if the user makes a “thumbs up” pose with their hand while the hand is in the intersection region, a binary clustering of pixels may result in two blobs (e.g., one blob for the thumb and one blob for the rest of the hand). Grayscale images may be referenced to determine whether the blob for the thumb and the blob for the hand should be connected, or whether they are in fact two distinct objects. For example, it may be determined that pixels located between the two blobs (e.g., where the thumb connects to the hand) had brightness values which were close to the brightness threshold, which indicates that the area between the blob for the thumb and the blob for the hand might be part of a single object (e.g., the hand along with the thumb) and that the area between the thumb and the hand was illuminated, but not highly illuminated by the illumination source. If it is determined that two nearby objects should be connected, the two objects may be connected and treated as a single cluster.
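The grayscale check described above can be sketched as a test on the pixels along the gap between two blobs: values just under the brightness threshold suggest a dimly lit connection (e.g., where the thumb joins the hand), while dark values suggest two distinct objects. The tolerance band below the threshold is an illustrative assumption:

```python
import numpy as np

def should_merge(gray, path_pixels, threshold, band=0.2):
    """Decide whether two adjacent blobs belong to one object by
    referencing the grayscale image between them.

    gray        -- the original grayscale image
    path_pixels -- (row, col) pixels along the gap between the blobs
    threshold   -- the brightness threshold used to binarize the image
    band        -- fraction below the threshold still treated as
                   "illuminated, but not highly illuminated"

    Returns True if every gap pixel is close to the threshold, i.e. the
    blobs likely form a single object and should be treated as one
    cluster."""
    values = np.array([gray[y, x] for y, x in path_pixels], dtype=float)
    return bool(np.all(values >= threshold * (1.0 - band)))
```

With a threshold of 128 and a 20% band, a gap pixel at brightness 110 supports merging the thumb blob with the hand blob, while a gap pixel at brightness 50 keeps them as separate objects.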
The system 300 determines a position of one or more objects in the captured images based on results of the clustering (1710). For example, for each cluster, a position may be computed. In this example, the position may be a centroid of a single blob in the cluster or a weighted combination of centroids from multiple blobs in the cluster. For blobs that were clustered based on referencing grayscale images, one position may be computed for the clustered blobs and used as the position of a corresponding detected object.
FIG. 18 illustrates an example of a tracking system 1800. The system 1800 may be used, for example, in a museum. The system 1800 may be targeted, for example, for use by blind patrons of the museum. The system 1800 includes a painting 1802, a camera 1804, a speaker 1805, and an illumination source 1806. The speaker 1805 may play a repeating sound, such as a “chirp” or “beep” to direct patrons to the vicinity of the painting 1802. For example, a blind patron 1808 may hear the beeping and may walk up to the painting 1802.
The camera 1804, positioned at the top side of the painting 1802 and angled downward with respect to the painting 1802, is configured to capture images. A field-of-view 1809 of the camera 1804 is located in front of the painting 1802. The illumination source 1806 is positioned at the bottom side of the painting 1802. The illumination source 1806 is configured to illuminate an illuminated area 1810 located in front of the painting 1802. The illuminated area 1810 intersects the field-of-view 1809 to define an intersection region 1812 in front of the painting 1802.
Captured camera images may be analyzed to detect an object such as a hand 1816 of the patron 1808 within the intersection region 1812. A user input may be determined based on the detection of the object within the intersection region 1812. For example, a “play audio recording” user input may be determined based on the presence of an object within the intersection region 1812. In response to the determined “play audio recording” user input, the speaker 1805 may be controlled to play an audio recording providing details about the painting 1802. As another example, a gesture may be detected based on one or more determined positions of the detected object. For example, a “swipe” gesture may be determined, a “stop audio playback” user input may be determined based on the recognized gesture, and the speaker 1805 may be controlled to turn off playback of the audio recording.
FIG. 19 illustrates a system 1900 for object tracking. A camera 1902 is included in (e.g., embedded in or mounted on) a car dashboard 1904. The field-of-view of the camera 1902 may be in front of the dashboard 1904 (e.g., extending from the camera 1902 towards the back of the vehicle). A radio 1905 may be positioned below the camera 1902. The field-of-view of the camera 1902 may be angled downward to capture images of an area in front of the radio 1905.
In some implementations, an illumination source 1909 may be positioned below the camera 1902. The illumination source 1909 may be a row of infrared LEDs. The row of infrared LEDs may be angled upward such that infrared light emitted by the row of infrared LEDs intersects the field-of-view of the camera 1902 to define an intersection region. The intersection region may be positioned about eight inches away from a front surface of the radio 1905 and may have a height that is similar to the height of the radio 1905. In the example shown in FIG. 19, the intersection region may be defined a sufficient distance above the gear shift such that a driver's movements to control the gear shift are not within the intersection region. In this configuration, the driver's movements to control the gear shift are not interpreted as control inputs to the radio 1905, even though the driver's movements to control the gear shift are within the field-of-view of the camera 1902.
A user's hand 1906 may be detected in one or more camera images captured by the camera 1902. The user may perform a right-to-left swipe gesture within the intersection region defined in front of the radio 1905, as illustrated by a hand 1908. The swipe gesture may be detected if multiple detected positions of the user's hand within a series of images indicate a fast side-to-side horizontal movement of the hand 1906. A “change station” user input may be determined based on the detected swipe gesture. The radio 1905 may be controlled to change to a different radio station (e.g., change to a next radio station in a list of predefined radio stations) based on the detected swipe gesture. Allowing a user to change a radio station of a car radio by swiping the user's hand in front of the car radio may increase the safety of using the car radio because the user may control the car radio without diverting his or her eyes from the road.
FIG. 20 is a schematic diagram of an example of a generic computer system 2000. The system 2000 can be used for the operations described in association with the processes 400, 500, 600, 900, 1400, and 1700, according to one implementation.
The system 2000 includes a processor 2010, a memory 2020, a storage device 2030, and an input/output device 2040. Each of the components 2010, 2020, 2030, and 2040 is interconnected using a system bus 2050. The processor 2010 is capable of processing instructions for execution within the system 2000. In one implementation, the processor 2010 is a single-threaded processor. In another implementation, the processor 2010 is a multi-threaded processor. The processor 2010 is capable of processing instructions stored in the memory 2020 or on the storage device 2030 to display graphical information for a user interface on the input/output device 2040.
The memory 2020 stores information within the system 2000. In one implementation, the memory 2020 is a computer-readable medium. In one implementation, the memory 2020 is a volatile memory unit. In another implementation, the memory 2020 is a non-volatile memory unit.
The storage device 2030 is capable of providing mass storage for the system 2000. In one implementation, the storage device 2030 is a computer-readable medium. In various different implementations, the storage device 2030 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
The input/output device 2040 provides input/output operations for the system 2000. In one implementation, the input/output device 2040 includes a keyboard and/or pointing device. In another implementation, the input/output device 2040 includes a display unit for displaying graphical user interfaces.
The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
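One mechanism recited later in the claims (claims 10 and 20) is that multiple illumination sources fire in sequence, alternating with image capture, so that each captured frame corresponds to a known illumination state. A minimal Python sketch of such a capture loop; the `IlluminationSource` stub and the `capture_frame` callback are illustrative assumptions, not part of the disclosure:

```python
from itertools import cycle

class IlluminationSource:
    """Stand-in for an LED bank; a real system would drive hardware here."""
    def __init__(self, name):
        self.name = name
        self.lit = False
    def on(self):
        self.lit = True
    def off(self):
        self.lit = False

def capture_sequence(sources, capture_frame, n_frames):
    """Alternate illumination sources across captured frames.

    Each frame is captured with exactly one source lit and is tagged with
    that source's name, so later analysis knows which intersection region
    the frame corresponds to.
    """
    frames = []
    for source, _ in zip(cycle(sources), range(n_frames)):
        source.on()
        frames.append((source.name, capture_frame()))
        source.off()
    return frames
```

In a real system the `on()`/`off()` calls would drive the illumination hardware and `capture_frame` would read the image sensor; tagging each frame with its source lets the analysis step associate the frame with the correct intersection region.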

Claims (30)

What is claimed is:
1. An electronic system comprising:
an image sensor having a field of view of a first area;
an illumination source that is configured to illuminate a second area, the second area intersecting the first area to define (a) an intersection region illuminated by the illumination source and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illumination source and within the field of view of the image sensor; and
a processing unit configured to perform operations comprising:
receiving an image from the image sensor;
analyzing the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
determining user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.
2. The electronic system of claim 1, wherein the image sensor is located on a first side of a surface of a display screen, and wherein the field of view of the first area is in front of the display screen.
3. The electronic system of claim 2, wherein the illumination source is configured to illuminate the second area in front of the display screen intersecting the first area in front of the display screen to define the intersection region.
4. The electronic system of claim 2, wherein the illumination source and the image sensor are positioned such that the intersection region is within six inches from the display screen.
5. The electronic system of claim 2, wherein the object is a finger and the user input comprises a depth of the finger in relation to the display screen.
6. The electronic system of claim 1, wherein the operation of determining the user input based on the object detected within the intersection region comprises:
mapping a position of the object detected within the intersection region to a cursor of a user interface.
7. The electronic system of claim 1, wherein the processing unit is further configured to perform operations comprising:
controlling an application based on the determined user input.
8. The electronic system of claim 7, wherein controlling the application based on the determined user input comprises moving a cursor from a first cursor position to a second cursor position based on the determined user input.
9. The electronic system of claim 1, wherein analyzing the image captured by the image sensor to detect the object within the intersection region comprises:
comparing pixels of the image to a brightness threshold to produce a binary image, wherein pixels in the binary image indicate whether or not the corresponding pixels in the image captured by the image sensor meet the brightness threshold; and
determining a position of the object in the binary image.
10. The electronic system of claim 1, further comprising multiple illumination sources configured to illuminate in sequence with images captured by one or more image sensors in an alternating pattern with the illumination.
11. A method for determining a user input, comprising:
receiving an image from an image sensor, the image sensor having a field of view of a first area; and
illuminating, via an illumination source, a second area intersecting the first area to define (a) an intersection region illuminated by the illumination source and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illumination source and within the field of view of the image sensor;
analyzing the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
determining user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.
12. The method of claim 11, wherein the image sensor is located on a first side of a surface of a display screen, and wherein the field of view of the first area is in front of the display screen.
13. The method of claim 12, wherein the illuminating comprises illuminating the second area in front of the display screen intersecting the first area in front of the display screen to define the intersection region.
14. The method of claim 12, wherein the illumination source and the image sensor are positioned such that the intersection region is within six inches from the display screen.
15. The method of claim 12, wherein the object is a finger and the user input comprises a depth of the finger in relation to the display screen.
16. The method of claim 11, wherein the determining comprises mapping a position of the object detected within the intersection region to a cursor of a user interface.
17. The method of claim 11, further comprising controlling an application based on the determined user input.
18. The method of claim 17, wherein controlling the application based on the determined user input comprises moving a cursor from a first cursor position to a second cursor position based on the determined user input.
19. The method of claim 11, wherein analyzing the image captured by the image sensor to detect the object within the intersection region comprises:
comparing pixels of the image to a brightness threshold to produce a binary image, wherein pixels in the binary image indicate whether or not the corresponding pixels in the image captured by the image sensor meet the brightness threshold; and
determining a position of the object in the binary image.
20. The method of claim 11, further comprising illuminating, via multiple illumination sources, in sequence with images captured by one or more image sensors in an alternating pattern with the illumination.
21. An apparatus for determining a user input, comprising:
means for receiving an image from an image sensor, the image sensor having a field of view of a first area; and
means for illuminating a second area intersecting the first area to define (a) an intersection region illuminated by the illuminating means and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illuminating means and within the field of view of the image sensor;
means for analyzing the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
means for determining user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.
22. The apparatus of claim 21, wherein the image sensor is located on a first side of a surface of a display screen, and wherein the field of view of the first area is in front of the display screen.
23. The apparatus of claim 22, wherein the means for illuminating comprises means for illuminating the second area in front of the display screen intersecting the first area in front of the display screen to define the intersection region.
24. The apparatus of claim 21, wherein the means for analyzing the image captured by the image sensor to detect the object within the intersection region comprises:
means for comparing pixels of the image to a brightness threshold to produce a binary image, wherein pixels in the binary image indicate whether or not the corresponding pixels in the image captured by the image sensor meet the brightness threshold; and
means for determining a position of the object in the binary image.
25. A non-transitory storage medium comprising processor-readable instructions configured to cause a processor to:
receive an image from an image sensor, the image sensor having a field of view of a first area; and
illuminate, via an illumination source, a second area intersecting the first area to define (a) an intersection region illuminated by the illumination source and within the field of view of the image sensor and (b) a non-intersection region not illuminated by the illumination source and within the field of view of the image sensor;
analyze the image to detect an object within the intersection region and exclude objects within the non-intersection region; and
determine user input based on the object detected within the intersection region, wherein the user input is determined based on a mapped position of the detected object.
26. The non-transitory storage medium of claim 25, wherein the image sensor is located on a first side of a surface of a display screen, and wherein the field of view of the first area is in front of the display screen.
27. The non-transitory storage medium of claim 26, wherein the illuminating comprises illuminating the second area in front of the display screen intersecting the first area in front of the display screen to define the intersection region.
28. The non-transitory storage medium of claim 26, wherein the object is a finger and the user input comprises a depth of the finger in relation to the display screen.
29. The non-transitory storage medium of claim 25, wherein the determining comprises mapping a position of the object detected within the intersection region to a cursor of a user interface.
30. The non-transitory storage medium of claim 25, wherein the analyzing the image captured by the image sensor to detect the object within the intersection region comprises:
comparing pixels of the image to a brightness threshold to produce a binary image, wherein pixels in the binary image indicate whether or not the corresponding pixels in the image captured by the image sensor meet the brightness threshold; and
determining a position of the object in the binary image.
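Claims 9, 19, 24, and 30 all recite the same detection pipeline: compare the captured pixels against a brightness threshold to produce a binary image, then determine the object's position in that image. A minimal NumPy sketch of that idea, assuming a grayscale image and a centroid-based position estimate (the threshold value and the centroid choice are illustrative, not the patented implementation):

```python
import numpy as np

def detect_bright_object(image, brightness_threshold=128):
    """Threshold a grayscale image and return the object's centroid.

    Pixels at or above the threshold are assumed to be lit by the
    illumination source (i.e., inside the intersection region); darker
    pixels are excluded, which drops objects outside that region.
    """
    # Binary image: True where the captured pixel meets the threshold.
    binary = image >= brightness_threshold
    ys, xs = np.nonzero(binary)
    if xs.size == 0:
        return None  # no illuminated object in view
    # Illustrative position estimate: centroid of the bright pixels.
    return (float(xs.mean()), float(ys.mean()))

def map_to_cursor(position, image_size, screen_size):
    """Map an image-space position to screen (cursor) coordinates."""
    (x, y), (iw, ih), (sw, sh) = position, image_size, screen_size
    return (x / iw * sw, y / ih * sh)
```

Because only the illuminated intersection region produces bright pixels, the threshold implicitly excludes objects in the non-intersection region; the mapped position can then drive a cursor as in claims 6 and 16.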

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/972,064 US9317134B2 (en) 2009-10-07 2013-08-21 Proximity object tracker

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US24952709P 2009-10-07 2009-10-07
US12/578,530 US8547327B2 (en) 2009-10-07 2009-10-13 Proximity object tracker
US13/972,064 US9317134B2 (en) 2009-10-07 2013-08-21 Proximity object tracker

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/578,530 Continuation US8547327B2 (en) 2009-10-07 2009-10-13 Proximity object tracker

Publications (2)

Publication Number Publication Date
US20150363009A1 US20150363009A1 (en) 2015-12-17
US9317134B2 true US9317134B2 (en) 2016-04-19

Family

ID=43822907

Family Applications (4)

Application Number Title Priority Date Filing Date
US12/578,530 Active 2032-07-17 US8547327B2 (en) 2009-10-07 2009-10-13 Proximity object tracker
US12/900,183 Active 2031-03-25 US8515128B1 (en) 2009-10-07 2010-10-07 Hover detection
US13/934,734 Active US8897496B2 (en) 2009-10-07 2013-07-03 Hover detection
US13/972,064 Active 2031-05-17 US9317134B2 (en) 2009-10-07 2013-08-21 Proximity object tracker

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US12/578,530 Active 2032-07-17 US8547327B2 (en) 2009-10-07 2009-10-13 Proximity object tracker
US12/900,183 Active 2031-03-25 US8515128B1 (en) 2009-10-07 2010-10-07 Hover detection
US13/934,734 Active US8897496B2 (en) 2009-10-07 2013-07-03 Hover detection

Country Status (1)

Country Link
US (4) US8547327B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018148219A1 (en) * 2017-02-07 2018-08-16 Oblong Industries, Inc. Systems and methods for user input device tracking in a spatial operating environment
US10235412B2 (en) 2008-04-24 2019-03-19 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes
US10354131B2 (en) * 2014-05-12 2019-07-16 Fujitsu Limited Product information outputting method, control device, and computer-readable recording medium
US10599226B2 (en) * 2015-05-21 2020-03-24 Audi Ag Operating system and method for operating an operating system for a motor vehicle
US10884507B2 (en) 2018-07-13 2021-01-05 Otis Elevator Company Gesture controlled door opening for elevators considering angular movement and orientation

Families Citing this family (159)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9908767B2 (en) * 2008-11-10 2018-03-06 Automatic Bar Controls, Inc. Beverage dispensing apparatus with presence sensing
US10086262B1 (en) 2008-11-12 2018-10-02 David G. Capper Video motion capture for wireless gaming
US9586135B1 (en) 2008-11-12 2017-03-07 David G. Capper Video motion capture for wireless gaming
US9383814B1 (en) 2008-11-12 2016-07-05 David G. Capper Plug and play wireless video game
GB2466497B (en) 2008-12-24 2011-09-14 Light Blue Optics Ltd Touch sensitive holographic displays
US9498718B2 (en) * 2009-05-01 2016-11-22 Microsoft Technology Licensing, Llc Altering a view perspective within a display environment
KR100936666B1 (en) * 2009-05-25 2010-01-13 Korea Electronics Technology Institute Apparatus for touching reflection image using an infrared screen
US8547327B2 (en) 2009-10-07 2013-10-01 Qualcomm Incorporated Proximity object tracker
KR100974894B1 (en) * 2009-12-22 2010-08-11 Korea Electronics Technology Institute 3d space touch apparatus using multi-infrared camera
JP2011209019A (en) * 2010-03-29 2011-10-20 Sony Corp Robot device and method of controlling the same
US9383864B2 (en) * 2010-03-31 2016-07-05 Smart Technologies Ulc Illumination structure for an interactive input system
US20110317871A1 (en) * 2010-06-29 2011-12-29 Microsoft Corporation Skeletal joint recognition and tracking system
US8988508B2 (en) * 2010-09-24 2015-03-24 Microsoft Technology Licensing, Llc. Wide angle field of view active illumination imaging system
JP5703703B2 (en) 2010-11-11 2015-04-22 Sony Corporation Information processing apparatus, stereoscopic display method, and program
JP5300825B2 (en) * 2010-11-17 2013-09-25 Sharp Corporation Instruction receiving device, instruction receiving method, computer program, and recording medium
US10025388B2 (en) * 2011-02-10 2018-07-17 Continental Automotive Systems, Inc. Touchless human machine interface
US20120249468A1 (en) * 2011-04-04 2012-10-04 Microsoft Corporation Virtual Touchpad Using a Depth Camera
JP5853394B2 (en) * 2011-04-07 2016-02-09 Seiko Epson Corporation Cursor display system, cursor display method, and projector
US8928589B2 (en) * 2011-04-20 2015-01-06 Qualcomm Incorporated Virtual keyboards and methods of providing the same
US9230220B2 (en) * 2011-05-11 2016-01-05 Ari M. Frank Situation-dependent libraries of affective response
US9348466B2 (en) * 2011-06-24 2016-05-24 Hewlett-Packard Development Company, L.P. Touch discrimination using fisheye lens
WO2013005868A1 (en) 2011-07-01 2013-01-10 Empire Technology Development Llc Safety scheme for gesture-based game
WO2013018333A1 (en) * 2011-07-29 2013-02-07 Panasonic Corporation Apparatus for controlling vehicle opening/closing element
KR101566807B1 (en) 2011-08-31 2015-11-13 Empire Technology Development LLC Position-setup for gesture-based game system
US9037354B2 (en) 2011-09-09 2015-05-19 Thales Avionics, Inc. Controlling vehicle entertainment systems responsive to sensed passenger gestures
DE102011116122A1 (en) * 2011-10-15 2013-04-18 Volkswagen Aktiengesellschaft Method for providing an operating device in a vehicle and operating device
KR101566812B1 (en) * 2011-11-04 2015-11-13 Empire Technology Development LLC Ir signal capture for images
KR20130055118A (en) * 2011-11-18 2013-05-28 Korea Electronics Technology Institute Space touch apparatus using single-infrared camera
US8657681B2 (en) 2011-12-02 2014-02-25 Empire Technology Development Llc Safety scheme for gesture-based game system
DE102012000201A1 (en) * 2012-01-09 2013-07-11 Daimler Ag Method and device for operating functions displayed on a display unit of a vehicle using gestures executed in three-dimensional space as well as related computer program product
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9501152B2 (en) 2013-01-15 2016-11-22 Leap Motion, Inc. Free-space user interface and control using virtual constructs
US11493998B2 (en) 2012-01-17 2022-11-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US8790179B2 (en) 2012-02-24 2014-07-29 Empire Technology Development Llc Safety scheme for gesture-based game system
US20130229345A1 (en) * 2012-03-01 2013-09-05 Laura E. Day Manual Manipulation of Onscreen Objects
TW201337649A (en) * 2012-03-02 2013-09-16 Pixart Imaging Inc Optical input device and input detection method thereof
US10503373B2 (en) * 2012-03-14 2019-12-10 Sony Interactive Entertainment LLC Visual feedback for highlight-driven gesture user interfaces
GB2500416B8 (en) * 2012-03-21 2017-06-14 Sony Computer Entertainment Europe Ltd Apparatus and method of augmented reality interaction
US9239624B2 (en) 2012-04-13 2016-01-19 Nokia Technologies Oy Free hand gesture control of automotive user interface
KR101196751B1 (en) * 2012-04-23 2012-11-07 LG Electronics Inc. Mobile terminal and control method thereof
DE102012206960B4 (en) * 2012-04-26 2019-10-17 Bayerische Motoren Werke Aktiengesellschaft Method and device for detecting at least one predetermined actuating movement for an operating device of a vehicle
US9747306B2 (en) * 2012-05-25 2017-08-29 Atheer, Inc. Method and apparatus for identifying input features for later recognition
CN102740029A (en) * 2012-06-20 2012-10-17 深圳市联建光电股份有限公司 Light emitting diode (LED) display module, LED television and LED television system
TWI490755B (en) * 2012-06-20 2015-07-01 Pixart Imaging Inc Input system
US10073541B1 (en) * 2012-06-22 2018-09-11 Amazon Technologies, Inc. Indicators for sensor occlusion
US9336302B1 (en) 2012-07-20 2016-05-10 Zuci Realty Llc Insight and algorithmic clustering for automated synthesis
DE102012015255A1 (en) * 2012-08-01 2014-02-06 Volkswagen Aktiengesellschaft Display and operating device and method for controlling a display and control device
US10234941B2 (en) 2012-10-04 2019-03-19 Microsoft Technology Licensing, Llc Wearable sensor for tracking articulated body-parts
DE102012020607B4 (en) * 2012-10-19 2015-06-11 Audi Ag A motor vehicle with a gesture control device and method for controlling a selection element
DE102012021220A1 (en) * 2012-10-27 2014-04-30 Volkswagen Aktiengesellschaft Operating arrangement for detection of gestures in motor vehicle, has gesture detection sensor for detecting gestures and for passing on gesture signals, and processing unit for processing gesture signals and for outputting result signals
DE102012110460A1 (en) 2012-10-31 2014-04-30 Audi Ag A method for entering a control command for a component of a motor vehicle
CN103809734B (en) * 2012-11-07 2017-05-24 联想(北京)有限公司 Control method and controller of electronic device and electronic device
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US20140139632A1 (en) * 2012-11-21 2014-05-22 Lsi Corporation Depth imaging method and apparatus with adaptive illumination of an object of interest
JP2014106878A (en) * 2012-11-29 2014-06-09 Toshiba Corp Information processor, extension equipment and input control method
TWI456430B (en) * 2012-12-07 2014-10-11 Pixart Imaging Inc Gesture recognition apparatus, operating method thereof, and gesture recognition method
US20140340498A1 (en) * 2012-12-20 2014-11-20 Google Inc. Using distance between objects in touchless gestural interfaces
TW201426463A (en) * 2012-12-26 2014-07-01 Pixart Imaging Inc Optical touch system
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
DE102013000081B4 (en) * 2013-01-08 2018-11-15 Audi Ag Operator interface for contactless selection of a device function
US9626015B2 (en) 2013-01-08 2017-04-18 Leap Motion, Inc. Power consumption in motion-capture systems with audio and optical signals
US8973149B2 (en) * 2013-01-14 2015-03-03 Lookout, Inc. Detection of and privacy preserving response to observation of display screen
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
JP6154148B2 (en) * 2013-01-31 2017-06-28 Fujitsu Ten Limited Input operation device, display device, and command selection method
TWI471757B (en) * 2013-01-31 2015-02-01 Pixart Imaging Inc Hand posture detection device for detecting hovering and click
KR102040288B1 (en) * 2013-02-27 2019-11-04 Samsung Electronics Co., Ltd. Display apparatus
US9625995B2 (en) * 2013-03-15 2017-04-18 Leap Motion, Inc. Identifying an object in a field of view
US9702977B2 (en) 2013-03-15 2017-07-11 Leap Motion, Inc. Determining positional information of an object in space
US9304594B2 (en) * 2013-04-12 2016-04-05 Microsoft Technology Licensing, Llc Near-plane segmentation using pulsed light source
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
CN104143075A (en) * 2013-05-08 2014-11-12 光宝科技股份有限公司 Gesture judging method applied to electronic device
WO2014194148A2 (en) * 2013-05-29 2014-12-04 Weijie Zhang Systems and methods involving gesture based user interaction, user interface and/or other features
US20140368434A1 (en) * 2013-06-13 2014-12-18 Microsoft Corporation Generation of text by way of a touchless interface
DE102013010932B4 (en) * 2013-06-29 2015-02-12 Audi Ag Method for operating a user interface, user interface and motor vehicle with a user interface
US10281987B1 (en) 2013-08-09 2019-05-07 Leap Motion, Inc. Systems and methods of free-space gestural interaction
US9377866B1 (en) * 2013-08-14 2016-06-28 Amazon Technologies, Inc. Depth-based position mapping
US9772679B1 (en) * 2013-08-14 2017-09-26 Amazon Technologies, Inc. Object tracking for device input
CN105473482A (en) 2013-08-15 2016-04-06 奥的斯电梯公司 Sensors for conveyance control
JP6202942B2 (en) * 2013-08-26 2017-09-27 Canon Inc. Information processing apparatus and control method thereof, computer program, and storage medium
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
IL228332A0 (en) * 2013-09-10 2014-08-31 Pointgrab Ltd Feedback method and system for interactive systems
US9451062B2 (en) * 2013-09-30 2016-09-20 Verizon Patent And Licensing Inc. Mobile device edge view display insert
US9632572B2 (en) 2013-10-03 2017-04-25 Leap Motion, Inc. Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
RU2013147803A (en) * 2013-10-25 2015-04-27 LSI Corporation Gesture recognition system with finite-automaton control of indicator detection unit and dynamic gesture detection unit
WO2015065341A1 (en) * 2013-10-29 2015-05-07 Intel Corporation Gesture based human computer interaction
US9996638B1 (en) 2013-10-31 2018-06-12 Leap Motion, Inc. Predictive information for free space gesture control and communication
KR102241764B1 (en) * 2013-11-01 2021-04-19 Samsung Electronics Co., Ltd. Method and apparatus for processing a input of electronic device
KR20150051278A (en) * 2013-11-01 2015-05-12 Samsung Electronics Co., Ltd. Object moving method and electronic device implementing the same
CN105593786B (en) * 2013-11-07 2019-08-30 英特尔公司 Object's position determines
IL229563A (en) * 2013-11-21 2016-10-31 Elbit Systems Ltd Compact optical tracker
US9329727B2 (en) * 2013-12-11 2016-05-03 Microsoft Technology Licensing, Llc Object detection in optical sensor systems
WO2015092905A1 (en) * 2013-12-19 2015-06-25 Hitachi Maxell, Ltd. Projection image display device and projection image display method
CN105027031A (en) * 2013-12-19 2015-11-04 谷歌公司 Using distance between objects in touchless gestural interfaces
KR101582726B1 (en) * 2013-12-27 2016-01-06 Daegu Gyeongbuk Institute of Science and Technology Apparatus and method for recognizing distance of stereo type
US20150185851A1 (en) * 2013-12-30 2015-07-02 Google Inc. Device Interaction with Self-Referential Gestures
US9262012B2 (en) 2014-01-03 2016-02-16 Microsoft Corporation Hover angle
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US9430095B2 (en) 2014-01-23 2016-08-30 Microsoft Technology Licensing, Llc Global and local light detection in optical sensor systems
CN104866073B (en) * 2014-02-21 2018-10-12 联想(北京)有限公司 The electronic equipment of information processing method and its system including the information processing system
GB2524473A (en) * 2014-02-28 2015-09-30 Microsoft Technology Licensing Llc Controlling a computing-based device using gestures
EP2916209B1 (en) * 2014-03-03 2019-11-20 Nokia Technologies Oy Input axis between an apparatus and a separate apparatus
TWI517005B (en) * 2014-03-20 2016-01-11 National Chiao Tung University Touch display apparatus and touch sensing method
US10261634B2 (en) * 2014-03-27 2019-04-16 Flexterra, Inc. Infrared touch system for flexible displays
DE102014004675A1 (en) 2014-03-31 2015-10-01 Audi Ag Gesture evaluation system, gesture evaluation method and vehicle
US10127783B2 (en) 2014-07-07 2018-11-13 Google Llc Method and device for processing motion events
US9224044B1 (en) * 2014-07-07 2015-12-29 Google Inc. Method and system for video zone monitoring
US10140827B2 (en) 2014-07-07 2018-11-27 Google Llc Method and system for processing motion event notifications
US9501915B1 (en) 2014-07-07 2016-11-22 Google Inc. Systems and methods for analyzing a video stream
JP2016038889A (en) 2014-08-08 2016-03-22 リープ モーション, インコーポレーテッドLeap Motion, Inc. Extended reality followed by motion sensing
US9009805B1 (en) 2014-09-30 2015-04-14 Google Inc. Method and system for provisioning an electronic device
USD782495S1 (en) 2014-10-07 2017-03-28 Google Inc. Display screen or portion thereof with graphical user interface
TWI531954B (en) * 2014-11-14 2016-05-01 Coretronic Corporation Touch and gesture control system and touch and gesture control method
US9454235B2 (en) * 2014-12-26 2016-09-27 Seungman KIM Electronic apparatus having a sensing unit to input a user command and a method thereof
CN105808016A (en) * 2014-12-31 2016-07-27 Coretronic Corporation Optical touch apparatus and touch sensing method therefor
US9840407B2 (en) * 2015-02-10 2017-12-12 Cornelius, Inc. Gesture interface for beverage dispenser
US10101817B2 (en) * 2015-03-03 2018-10-16 Intel Corporation Display interaction detection
TWI544388B (en) * 2015-06-03 2016-08-01 Quanta Computer Inc. Overhanging touch control system and touch control method thereof
US9361011B1 (en) 2015-06-14 2016-06-07 Google Inc. Methods and systems for presenting multiple live video feeds in a user interface
US10289239B2 (en) 2015-07-09 2019-05-14 Microsoft Technology Licensing, Llc Application programming interface for multi-touch input detection
US10881713B2 (en) * 2015-10-28 2021-01-05 Atheer, Inc. Method and apparatus for interface control with prompt and feedback
US10592717B2 (en) * 2016-01-29 2020-03-17 Synaptics Incorporated Biometric imaging with hover detection
CN105787322B (en) * 2016-02-01 2019-11-29 Beijing Jingdong Shangke Information Technology Co., Ltd. The method and device of fingerprint recognition, mobile terminal
KR101667510B1 (en) * 2016-02-11 2016-10-18 Yun Il-sik Apparatus and Method for controlling the Motion of an Elevator using a Monitor
US20210181892A1 (en) * 2016-02-26 2021-06-17 The Coca-Cola Company Touchless control graphical user interface
US10290194B2 (en) * 2016-02-29 2019-05-14 Analog Devices Global Occupancy sensor
KR101809925B1 (en) * 2016-04-25 2017-12-20 LG Electronics Inc. Display apparatus for Vehicle and Vehicle
US10078483B2 (en) 2016-05-17 2018-09-18 Google Llc Dual screen haptic enabled convertible laptop
US10506237B1 (en) 2016-05-27 2019-12-10 Google Llc Methods and devices for dynamic adaptation of encoding bitrate for video streaming
US10380429B2 (en) 2016-07-11 2019-08-13 Google Llc Methods and systems for person detection in a video feed
US10957171B2 (en) 2016-07-11 2021-03-23 Google Llc Methods and systems for providing event alerts
US10192415B2 (en) 2016-07-11 2019-01-29 Google Llc Methods and systems for providing intelligent alerts for events
US9958951B1 (en) * 2016-09-12 2018-05-01 Meta Company System and method for providing views of virtual content in an augmented reality environment
WO2018052894A1 (en) * 2016-09-14 2018-03-22 Sears Brands, Llc Refrigeration device with gesture-controlled dispenser
CN106502418B (en) * 2016-11-09 2019-04-16 Nanjing AvatarMind Robot Technology Co., Ltd. A kind of vision follower method based on monocular gesture identification
US11205103B2 (en) 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis
US10599950B2 (en) 2017-05-30 2020-03-24 Google Llc Systems and methods for person recognition data management
US11783010B2 (en) 2017-05-30 2023-10-10 Google Llc Systems and methods of person recognition in video streams
DE102017113763B4 (en) * 2017-06-21 2022-03-17 SMR Patents S.à.r.l. Method for operating a display device for a motor vehicle and motor vehicle
US10481736B2 (en) * 2017-06-21 2019-11-19 Samsung Electronics Company, Ltd. Object detection and motion identification using electromagnetic radiation
CN107562203A (en) * 2017-09-14 2018-01-09 北京奇艺世纪科技有限公司 Input method and device
US11134227B2 (en) 2017-09-20 2021-09-28 Google Llc Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment
US10664688B2 (en) 2017-09-20 2020-05-26 Google Llc Systems and methods of detecting and responding to a visitor to a smart home environment
US10229313B1 (en) 2017-10-23 2019-03-12 Meta Company System and method for identifying and tracking a human hand in an interactive space based on approximated center-lines of digits
US10701247B1 (en) 2017-10-23 2020-06-30 Meta View, Inc. Systems and methods to simulate physical objects occluding virtual objects in an interactive space
WO2019083501A1 (en) * 2017-10-24 2019-05-02 Hewlett-Packard Development Company, L.P. Generating a three-dimensional visualization of a split input device
EP3556702A1 (en) 2018-03-13 2019-10-23 Otis Elevator Company Augmented reality car operating panel
CN108513414B (en) * 2018-03-26 2023-12-19 中国地质大学(武汉) Stage light-following lamp system and method with self-tracking focus
US10623743B1 (en) * 2018-05-22 2020-04-14 Facebook Technologies, Llc Compression of captured images including light captured from locations on a device or object
IL260438B (en) * 2018-07-05 2021-06-30 Agent Video Intelligence Ltd System and method for use in object detection from video stream
US20200012350A1 (en) * 2018-07-08 2020-01-09 Youspace, Inc. Systems and methods for refined gesture recognition
US10818028B2 (en) * 2018-12-17 2020-10-27 Microsoft Technology Licensing, Llc Detecting objects in crowds using geometric context
KR20200090403A (en) * 2019-01-21 2020-07-29 삼성전자주식회사 Electronic apparatus and the control method thereof
DE102019204481A1 (en) * 2019-03-29 2020-10-01 Deere & Company System for recognizing an operating intention on a manually operated operating unit
JP2021064320A (en) * 2019-10-17 2021-04-22 ソニー株式会社 Information processing device, information processing method, and program
US11893795B2 (en) 2019-12-09 2024-02-06 Google Llc Interacting with visitors of a connected home environment
WO2022241328A1 (en) * 2022-05-20 2022-11-17 Innopeak Technology, Inc. Hand gesture detection methods and systems with hand shape calibration

Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982352A (en) 1992-09-18 1999-11-09 Pryor; Timothy R. Method for providing human input to a computer
US6157368A (en) 1994-09-28 2000-12-05 Faeger; Jan G. Control equipment with a movable control member
US20020036617A1 (en) 1998-08-21 2002-03-28 Timothy R. Pryor Novel man machine interfaces and applications
US6393136B1 (en) 1999-01-04 2002-05-21 International Business Machines Corporation Method and apparatus for determining eye contact
US20020097218A1 (en) 2001-01-22 2002-07-25 Philips Electronics North America Corporation Single camera system for gesture-based input and target indication
US6624833B1 (en) * 2000-04-17 2003-09-23 Lucent Technologies Inc. Gesture-based input interface system with shadow detection
US20040032398A1 (en) 2002-08-14 2004-02-19 Yedidya Ariel Method for interacting with computer using a video camera image on screen and system thereof
US6707444B1 (en) 2000-08-18 2004-03-16 International Business Machines Corporation Projector and camera arrangement with shared optics and optical marker for use with whiteboard systems
US20040070565A1 (en) 2001-12-05 2004-04-15 Nayar Shree K Method and apparatus for displaying images
US20050122308A1 (en) 2002-05-28 2005-06-09 Matthew Bell Self-contained interactive video display system
US6961443B2 (en) 2000-06-15 2005-11-01 Automotive Systems Laboratory, Inc. Occupant sensor
US7098956B2 (en) 2000-12-19 2006-08-29 Heraeus Med Gmbh Process and device for the video recording of an illuminated field
US20070091178A1 (en) 2005-10-07 2007-04-26 Cotter Tim S Apparatus and method for performing motion capture using a random pattern on capture surfaces
US7259747B2 (en) 2001-06-05 2007-08-21 Reactrix Systems, Inc. Interactive video display system
US7292711B2 (en) 2002-06-06 2007-11-06 Wintriss Engineering Corporation Flight parameter measurement system
US20080122786A1 (en) 1997-08-22 2008-05-29 Pryor Timothy R Advanced video gaming methods for education and play using camera based inputs
US20090103780A1 (en) 2006-07-13 2009-04-23 Nishihara H Keith Hand-Gesture Recognition Method
US7526131B2 (en) 2003-03-07 2009-04-28 Martin Weber Image processing apparatus and methods
US20090122146A1 (en) 2002-07-27 2009-05-14 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US7542072B2 (en) 2004-07-28 2009-06-02 The University Of Maryland Device using a camera and light polarization for the remote displacement of a cursor on a display
US20090303176A1 (en) 2008-06-10 2009-12-10 Mediatek Inc. Methods and systems for controlling electronic devices according to signals from digital camera and sensor modules
US7650015B2 (en) 1997-07-22 2010-01-19 Image Processing Technologies LLC Image processing method
US7692625B2 (en) 2000-07-05 2010-04-06 Smart Technologies Ulc Camera-based touch system
US20100277412A1 (en) 1999-07-08 2010-11-04 Pryor Timothy R Camera Based Sensing in Handheld, Mobile, Gaming, or Other Devices
US20110080490A1 (en) 2009-10-07 2011-04-07 Gesturetek, Inc. Proximity object tracker
US20110134237A1 (en) 2008-08-04 2011-06-09 Koninklijke Philips Electronics N.V. Communication device with peripheral viewing means
US8068641B1 (en) 2008-06-19 2011-11-29 Qualcomm Incorporated Interaction interface for controlling an application
US8144123B2 (en) 2007-08-14 2012-03-27 Fuji Xerox Co., Ltd. Dynamically controlling a cursor on a screen when using a video camera as a pointing device

Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982352A (en) 1992-09-18 1999-11-09 Pryor; Timothy R. Method for providing human input to a computer
US6157368A (en) 1994-09-28 2000-12-05 Faeger; Jan G. Control equipment with a movable control member
US7650015B2 (en) 1997-07-22 2010-01-19 Image Processing Technologies LLC Image processing method
US20060033713A1 (en) 1997-08-22 2006-02-16 Pryor Timothy R Interactive video based games using objects sensed by TV cameras
US20080122786A1 (en) 1997-08-22 2008-05-29 Pryor Timothy R Advanced video gaming methods for education and play using camera based inputs
US20020036617A1 (en) 1998-08-21 2002-03-28 Timothy R. Pryor Novel man machine interfaces and applications
US6393136B1 (en) 1999-01-04 2002-05-21 International Business Machines Corporation Method and apparatus for determining eye contact
US20100277412A1 (en) 1999-07-08 2010-11-04 Pryor Timothy R Camera Based Sensing in Handheld, Mobile, Gaming, or Other Devices
US6624833B1 (en) * 2000-04-17 2003-09-23 Lucent Technologies Inc. Gesture-based input interface system with shadow detection
US6961443B2 (en) 2000-06-15 2005-11-01 Automotive Systems Laboratory, Inc. Occupant sensor
US7692625B2 (en) 2000-07-05 2010-04-06 Smart Technologies Ulc Camera-based touch system
US6707444B1 (en) 2000-08-18 2004-03-16 International Business Machines Corporation Projector and camera arrangement with shared optics and optical marker for use with whiteboard systems
US7098956B2 (en) 2000-12-19 2006-08-29 Heraeus Med Gmbh Process and device for the video recording of an illuminated field
US20020097218A1 (en) 2001-01-22 2002-07-25 Philips Electronics North America Corporation Single camera system for gesture-based input and target indication
US7259747B2 (en) 2001-06-05 2007-08-21 Reactrix Systems, Inc. Interactive video display system
US20040070565A1 (en) 2001-12-05 2004-04-15 Nayar Shree K Method and apparatus for displaying images
US20050122308A1 (en) 2002-05-28 2005-06-09 Matthew Bell Self-contained interactive video display system
US7292711B2 (en) 2002-06-06 2007-11-06 Wintriss Engineering Corporation Flight parameter measurement system
US20090122146A1 (en) 2002-07-27 2009-05-14 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US20040032398A1 (en) 2002-08-14 2004-02-19 Yedidya Ariel Method for interacting with computer using a video camera image on screen and system thereof
US7526131B2 (en) 2003-03-07 2009-04-28 Martin Weber Image processing apparatus and methods
US7542072B2 (en) 2004-07-28 2009-06-02 The University Of Maryland Device using a camera and light polarization for the remote displacement of a cursor on a display
US20070091178A1 (en) 2005-10-07 2007-04-26 Cotter Tim S Apparatus and method for performing motion capture using a random pattern on capture surfaces
US20090103780A1 (en) 2006-07-13 2009-04-23 Nishihara H Keith Hand-Gesture Recognition Method
US8144123B2 (en) 2007-08-14 2012-03-27 Fuji Xerox Co., Ltd. Dynamically controlling a cursor on a screen when using a video camera as a pointing device
US20090303176A1 (en) 2008-06-10 2009-12-10 Mediatek Inc. Methods and systems for controlling electronic devices according to signals from digital camera and sensor modules
US8068641B1 (en) 2008-06-19 2011-11-29 Qualcomm Incorporated Interaction interface for controlling an application
US20110134237A1 (en) 2008-08-04 2011-06-09 Koninklijke Philips Electronics N.V. Communication device with peripheral viewing means
US20110080490A1 (en) 2009-10-07 2011-04-07 Gesturetek, Inc. Proximity object tracker
US8515128B1 (en) * 2009-10-07 2013-08-20 Qualcomm Incorporated Hover detection
US8547327B2 (en) * 2009-10-07 2013-10-01 Qualcomm Incorporated Proximity object tracker
US20140028628A1 (en) 2009-10-07 2014-01-30 Qualcomm Incorporated Hover detection
US8897496B2 (en) * 2009-10-07 2014-11-25 Qualcomm Incorporated Hover detection

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10235412B2 (en) 2008-04-24 2019-03-19 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes
US10521021B2 (en) 2008-04-24 2019-12-31 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes
US10354131B2 (en) * 2014-05-12 2019-07-16 Fujitsu Limited Product information outputting method, control device, and computer-readable recording medium
US10599226B2 (en) * 2015-05-21 2020-03-24 Audi Ag Operating system and method for operating an operating system for a motor vehicle
WO2018148219A1 (en) * 2017-02-07 2018-08-16 Oblong Industries, Inc. Systems and methods for user input device tracking in a spatial operating environment
US10509513B2 (en) 2017-02-07 2019-12-17 Oblong Industries, Inc. Systems and methods for user input device tracking in a spatial operating environment
US10884507B2 (en) 2018-07-13 2021-01-05 Otis Elevator Company Gesture controlled door opening for elevators considering angular movement and orientation

Also Published As

Publication number Publication date
US20140028628A1 (en) 2014-01-30
US20110080490A1 (en) 2011-04-07
US8547327B2 (en) 2013-10-01
US20150363009A1 (en) 2015-12-17
US8515128B1 (en) 2013-08-20
US8897496B2 (en) 2014-11-25

Similar Documents

Publication Publication Date Title
US9317134B2 (en) Proximity object tracker
US10990189B2 (en) Processing of gesture-based user interaction using volumetric zones
US10831278B2 (en) Display with built in 3D sensing capability and gesture control of tv
US9030564B2 (en) Single camera tracker
US8923562B2 (en) Three-dimensional interactive device and operation method thereof
US8818040B2 (en) Enhanced input using flashing electromagnetic radiation
US8843857B2 (en) Distance scalable no touch computing
KR102335132B1 (en) Multi-modal gesture based interactive system and method using one single sensing system
US9996197B2 (en) Camera-based multi-touch interaction and illumination system and method
JP2004246578A (en) Interface method and device using self-image display, and program
Haubner et al. Gestural input on and above an interactive surface: Integrating a depth camera in a tabletop setup
Haubner et al. Integrating a Depth Camera in a Tabletop Setup for Gestural Input on and above the Surface

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUALCOMM INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GESTURETEK, INC.;REEL/FRAME:031051/0531

Effective date: 20110719

Owner name: GESTURETEK, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CLARKSON, IAN;HILDRETH, EVAN;REEL/FRAME:031051/0334

Effective date: 20091013

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8