WO2014151054A2 - Systems and methods for vehicle user interface - Google Patents


Info

Publication number
WO2014151054A2
WO2014151054A2 (PCT/US2014/024852)
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
vehicle
user
identifying
component
Prior art date
Application number
PCT/US2014/024852
Other languages
French (fr)
Other versions
WO2014151054A3 (en)
Inventor
Pedram Vaghefinazari
Tarek A. EL DOKOR
Jordan Cluster
James E. Holmes
Stuart M. Yamamoto
Original Assignee
Honda Motor Co., Ltd.
Edge 3 Technologies Llc
Priority claimed from US13/834,007 external-priority patent/US8886399B2/en
Priority claimed from US13/835,252 external-priority patent/US8818716B1/en
Application filed by Honda Motor Co., Ltd. and Edge 3 Technologies Llc
Publication of WO2014151054A2 publication Critical patent/WO2014151054A2/en
Publication of WO2014151054A3 publication Critical patent/WO2014151054A3/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60K: ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 35/00: Arrangement of adaptations of instruments
    • B60K 35/10
    • B60K 35/28
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26: Navigation specially adapted for navigation in a road network
    • G01C 21/34: Route searching; Route guidance
    • G01C 21/36: Input/output arrangements for on-board computers
    • G01C 21/3664: Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
    • G01C 21/3679: Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities
    • B60K 2360/146
    • B60K 2360/166

Definitions

  • a driver typically interacts with a vehicle-based computing system by inputting commands via a touchscreen or physical buttons on the center console of the vehicle.
  • FIG. 6 is a flow chart illustrating a process for selecting a component of a vehicle and controlling the component based on a gesture, according to one embodiment.
  • the in-vehicle system 112 may provide information to the mobile communication device 102.
  • the mobile communication device 102 may use that information to obtain additional information from the network 120 and/or the server 122.
  • the additional information may also be obtained in response to providing information with respect to a prompt on wireless mobile communication device 102 from in-vehicle system 112.
  • the data output module 216 receives information related to one or more POIs from one of the search modules 210, 211 and sends the information to the display 138, the speaker 140, or some other output device in the MCD 102 or the in-vehicle communications system 112. In one embodiment, the data output module 216 sends an audio representation of a portion of the information for a point of interest while showing additional information to the user via the display 138. For example, if the data output module 216 receives information related to a restaurant, the module 216 may have the speaker 140 speak out the name of the restaurant while reviews of the restaurant are sent to the display 138.
  • FIG. 2B is a block diagram illustrating components of the gesture control module 146 of the in-vehicle computing system 112 of FIG. 1, according to one embodiment.
  • the gesture control module 146 includes a gesture recognition module 252, a voice recognition module 254, a component identification module 256, a gesture angle module 258, a command generation module 260, and a command execution module 262.
  • the gesture control module 146 may include additional, fewer, or different components, and the functionality of the components 252 through 262 described herein may be distributed among components of the gesture control module 146 in a different manner.
  • the voice recognition module 254 receives an output signal from the microphone 134 and performs a voice recognition algorithm on the received signal to recognize spoken words and other audio captured by the microphone 134.
  • the voice recognition module 254 generates and outputs voice data representing words in the audio input. Similar to the gesture data, the voice data is a high-level machine-readable representation of the audio captured by the microphone.
  • the voice data may be a character string containing words that were spoken by the user.
  • the command execution module 262 receives a command from the command generation module 260 and sends control signals to the identified component to cause the component to perform the command.
  • the control signals directly control devices that perform the command. For example, if the command is to rotate the right side mirror to a particular orientation, as described above, the command execution module 262 sends control signals to motors that adjust the orientation of the mirror.
  • some or all of the modules 252 through 262 of the gesture control module 146 are positioned external to the in-vehicle system 112.
  • the modules 252 through 262 are implemented as an application downloaded to the MCD 102 (e.g., applications available from iTunes).
  • the modules 252 through 258 are implemented on the remote server 122, and data from the camera system 132 and microphone 134 are sent over the network 120 to the remote server 122 to be analyzed.
  • After receiving the data signal from the camera system 132, the gesture recognition module 202 performs 304 gesture recognition on the data signal to determine a direction vector corresponding to the identifying gesture. In one embodiment, the gesture recognition module 202 uses depth information in the data signal to generate a 3D depth reconstruction of the identifying gesture. The 3D depth reconstruction is then used to determine the direction vector 404.
  • An example direction vector 404 is illustrated in FIG. 4B. In the illustrated example, the direction vector 404 is a two-dimensional vector that represents the direction of the identifying gesture relative to a pair of axes 406 that are parallel and perpendicular to the vehicle's direction of travel.
  • the gesture recognition module 202 may also be configured to recognize other types of identifying gestures and determine a direction vector upon detecting one of these other gestures.
  • the gesture recognition module 202 may also be configured to determine a direction vector for an identifying gesture comprising an outstretched arm without a specific arrangement of fingers, or an identifying gesture comprising a hand on the steering wheel with an outstretched finger pointing at an exterior object.
  • the command generation module 260 also receives gesture data directly from the gesture recognition module 252.
  • the module 260 may use the gesture data to calculate one or more parameters without measuring any gesture angles.
  • the module 260 may calculate a parameter based on a pinching gesture performed by the user.
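The pinch-based parameter mentioned above could, purely for illustration, be computed by mapping the separation between two fingertips to a normalized control value. The following Python sketch is hypothetical; the function name, the fingertip inputs, and the calibrated pinch range are assumptions, not part of the application:

```python
import math

def pinch_parameter(thumb_tip, index_tip, min_d=0.02, max_d=0.15):
    """Map the distance between thumb and index fingertips (metres,
    hypothetical 3D points) to a control parameter in [0.0, 1.0]."""
    d = math.dist(thumb_tip, index_tip)
    # Clamp to the assumed calibration range, then normalize.
    d = max(min_d, min(max_d, d))
    return (d - min_d) / (max_d - min_d)
```

A fully closed pinch would yield 0.0 and a fully open one 1.0, which a command generation step could then scale to, say, a fan-speed or volume setting.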

Abstract

A system allows a user, such as the driver of a vehicle, to retrieve information related to a point of interest (POI) near the vehicle by pointing at the POI or performing some other gesture to identify the POI. Gesture recognition is performed on the gesture to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.

Description

SYSTEMS AND METHODS FOR VEHICLE USER INTERFACE
BACKGROUND
1. Field of the Invention
[0001] The present invention relates generally to gesture recognition and in particular to searching for a point of interest based on a gesture. Further, this disclosure relates to controlling different components of a vehicle with gestures.
2. Description of the Related Art
[0002] Vehicle technologies and features available to and controlled by a driver have advanced in recent years. For example, many vehicles feature integrated computing systems with network connections that can be used to retrieve and display a wide range of information. One key function of vehicle-based computing systems is the ability to retrieve information related to points of interest (POI) near the vehicle. This can be useful, for example, when the driver wishes to identify a nearby building or view information (e.g., ratings and reviews) for a restaurant or store.
[0003] A driver typically interacts with a vehicle-based computing system by inputting commands via a touchscreen or physical buttons on the center console of the vehicle. However, using a touchscreen or buttons to request POI information by navigating a map or typing in a search term can be cumbersome and frustrating, especially when the driver is requesting information about a POI that he can see through the vehicle's windows.
[0004] Moreover, a user in a vehicle conventionally interacts with vehicle features through physical controls such as knobs, dials, and switches on a console inside the vehicle. Physical controls are commonly used to perform adjustments like tilting the side mirrors or air conditioning vents or to interact with a multimedia system in the vehicle. Alternatively, a vehicle may include an integrated computing system that allows a user to control various components of the vehicle by performing physical gestures on a touchscreen that displays a user interface. However, it is often cumbersome and inconvenient for the user to reach forward or sideways to interact with a touchscreen or manipulate a physical control, and these conventional devices frequently present the user with a large number of functions that can be confusing and difficult to use.
SUMMARY
[0005] A computing system retrieves information associated with a point of interest based on an identifying gesture that a user performs inside a vehicle. The identifying gesture is oriented so that it identifies an object outside the vehicle. The computing system receives a data signal representing the identifying gesture and performs gesture recognition on the data signal to determine a direction vector corresponding to the direction of the identifying gesture. The system also accesses location data to identify the vehicle's current location and orientation. The direction vector, location, and orientation are then analyzed to generate a target region that corresponds to the object that was identified by the gesture, and the system retrieves information associated with one or more POIs in the target region. The retrieved information is provided to the user via an output device, such as a speaker or display.
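As an illustration of the target-region step in this summary, the following Python sketch combines the vehicle's heading with the gesture direction and projects a triangular region outward from the vehicle's position. The function name, the flat-earth approximation, and the fixed region depth and half-angle are assumptions for illustration only; the application does not specify this math:

```python
import math

def target_region(vehicle_lat, vehicle_lon, heading_deg, gesture_deg,
                  depth_m=500.0, half_angle_deg=15.0):
    """Return three (lat, lon) corners of a triangular target region.

    heading_deg -- vehicle heading, clockwise from north
    gesture_deg -- gesture direction relative to the vehicle's
                   direction of travel (0 = straight ahead)
    """
    bearing = (heading_deg + gesture_deg) % 360.0
    corners = [(vehicle_lat, vehicle_lon)]  # apex at the vehicle
    for offset in (-half_angle_deg, half_angle_deg):
        b = math.radians(bearing + offset)
        # Flat-earth approximation, valid for small regions:
        # one degree of latitude is roughly 111 km.
        dlat = depth_m * math.cos(b) / 111_000.0
        dlon = depth_m * math.sin(b) / (
            111_000.0 * math.cos(math.radians(vehicle_lat)))
        corners.append((vehicle_lat + dlat, vehicle_lon + dlon))
    return corners
```

The resulting polygon could then be handed to a POI search as the area to query.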
[0006] A computing system allows a user to control a component of a vehicle by performing a gesture. The system identifies a first component of the vehicle based on a first selecting input performed by the user. After the user performs a gesture, the system receives a first data signal representing the first gesture. Gesture recognition is performed on the first data signal to generate a first command for controlling the first component. After the first command is generated, the process can be repeated for a second component. The system identifies the second component of the vehicle based on a second selecting input, and the user performs a second gesture. The system receives a second data signal representing the second gesture and performs gesture recognition on the second data signal to generate a second command.
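The two-step select-then-command flow in this summary might be modeled, purely for illustration, by a small controller object; every name below is hypothetical and stands in for the selecting input, gesture recognition, and command generation steps described in the text:

```python
# Hypothetical sketch of the select-then-gesture control flow.
class GestureController:
    def __init__(self):
        self.selected = None   # component identified by the selecting input
        self.commands = []     # commands generated so far

    def select(self, component):
        """First input: identify which vehicle component to control."""
        self.selected = component

    def apply_gesture(self, gesture):
        """Second input: turn a recognized gesture into a command
        for the currently selected component."""
        if self.selected is None:
            raise RuntimeError("no component selected")
        command = (self.selected, gesture)
        self.commands.append(command)
        return command

ctrl = GestureController()
ctrl.select("right side mirror")
ctrl.apply_gesture("tilt_up")        # first component, first gesture
ctrl.select("air conditioning vent")
ctrl.apply_gesture("rotate_left")    # second component, second gesture
```

The same object handles the repetition for a second component simply by receiving a new selecting input.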
[0007] The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The teachings of the embodiments of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings.
[0009] Figure (FIG.) 1 illustrates an exemplary operating environment 100 for various embodiments of the gesture-based POI search system.
[0010] FIG. 2A is a high-level block diagram illustrating components of the POI information retrieval module of FIG. 1, according to one embodiment.
[0011] FIG. 2B is a block diagram illustrating components of the gesture control module of FIG. 1, according to one embodiment.
[0012] FIG. 3 is a flow chart illustrating a process for retrieving information about a POI based on an identifying gesture, according to one embodiment.
[0013] FIGS. 4A-4D illustrate an example of a gesture-based POI search.
[0014] FIG. 5 is a flow chart illustrating a process for maintaining micromaps in the gesture-based POI search system, according to one embodiment.
[0015] FIG. 6 is a flow chart illustrating a process for selecting a component of a vehicle and controlling the component based on a gesture, according to one embodiment.
[0016] FIGS. 7A-7D illustrate an example of selecting a component of a vehicle and controlling the component based on a gesture.
[0017] FIG. 8 is a flow chart illustrating a process for measuring gesture angles, according to one embodiment.
[0018] FIGS. 9A-9C illustrate examples of gesture angles.
DETAILED DESCRIPTION OF EMBODIMENTS
[0019] Embodiments are now described with reference to the accompanying figures. Like reference numbers indicate identical or functionally similar elements. In the figures, the leftmost digit of each reference number corresponds to the figure in which the reference number is first used.
OVERVIEW
[0020] A POI information retrieval module allows a user, such as the driver of a vehicle, to retrieve information related to a point of interest near the vehicle by pointing at the POI or performing some other gesture to identify the POI. A camera system in the vehicle captures the gesture and sends a data signal representing the gesture to the POI information retrieval module. The POI information retrieval module performs gesture recognition on the data signal to generate a target region that includes the POI that the user identified. After generating the target region, information about the POI can be retrieved by querying a server-based POI service with the target region or by searching in a micromap that is stored locally. The retrieved POI information can then be provided to the user via a display and/or speaker in the vehicle. This process beneficially allows a user to rapidly identify and retrieve information about a POI near the vehicle without having to navigate a user interface by manipulating a touchscreen or physical buttons.
[0021] The user may optionally issue a voice command along with a gesture. If a microphone in the vehicle detects a voice command, the POI information retrieval module performs voice recognition on the voice command to generate a character string representing the words that were spoken as part of the command. The character string can then be used to help identify the POI that the user pointed at. For example, if the user says "building" while pointing at a building, the POI information retrieval module can ignore information for non- building objects (e.g., playgrounds, parking lots, etc.) when retrieving information for POIs in the target region.
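The category filtering described in the "building" example could look, in a simplified sketch, like the following. The POI records, field names, and the fall-back-to-all behavior are assumptions for illustration, not structures defined by the application:

```python
# Hypothetical POI records; a real system would obtain these from the
# POI service or a locally stored micromap.
POIS = [
    {"name": "Acme Tower", "category": "building"},
    {"name": "Elm Playground", "category": "playground"},
    {"name": "Central Lot", "category": "parking lot"},
]

def filter_by_voice(pois, spoken_words):
    """Keep only POIs whose category appears in the recognized words
    (e.g. the user said "building" while pointing)."""
    words = spoken_words.lower()
    matches = [p for p in pois if p["category"] in words]
    # If no category was spoken, fall back to the unfiltered list.
    return matches or pois
```

With the voice command "what is that building", only the building record would survive, so playgrounds and parking lots in the target region are ignored as the text describes.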
OPERATING ENVIRONMENT
[0022] Figure 1 illustrates an exemplary operating environment 100 for various embodiments. The operating environment 100 may include an in-vehicle communications system 112. One example of such a system is an in-vehicle hands-free telephone (HFT) controller 113 which will be used as an example herein for ease of discussion. The operating environment 100 may also include a wireless mobile communication device (MCD) 102, a communication link 105 for communications between the in-vehicle system 112 and a network 120, a short-range communication link 109 for communication between the in-vehicle system 112 and the wireless mobile communication device 102, a wireless networking communication link 107 between the wireless mobile communication device 102 and the network 120, and a POI data server 122 connected to the network 120. The communication links described herein can directly or indirectly connect these devices. The network 120 can be a wireless communication network such as a cellular network comprised of multiple base stations, controllers, and a core network that typically includes multiple switching entities and gateways, for example.
[0023] The functions described herein are set forth as being performed by a device in the operating environment 100 (e.g., the in-vehicle communication system 112, the MCD 102, and/or the remote server 122). In embodiments, these functions can be performed in any of these devices or in any combination of these devices and/or other devices.
[0024] The operating environment 100 includes input devices, such as a camera system 132, location sensors 133, and a microphone 134. The camera system 132, location sensors 133, and/or microphone 134 can be part of the in-vehicle system 112 (as shown in FIG. 1) or can be in the MCD 102 (not shown), for example. In one embodiment, the camera system 132 includes a sensor that captures physical signals from within the vehicle (e.g., a time of flight camera, an infrared sensor, a traditional camera, etc.). The camera system 132 is positioned to capture physical signals from a user such as hand or arm gestures from a driver or passenger. The camera system 132 can include multiple cameras positioned to capture physical signals from various positions in the vehicle, e.g., driver seat, front passenger seat, second row seats, etc. Alternatively, the camera system 132 may be a single camera which is focused on one position (e.g., the driver), has a wide field of view, and can receive signals from more than one occupant of the vehicle, or can change its field of view to receive signals from different occupant positions.
[0025] In another embodiment, the camera system 132 is part of the MCD 102 (e.g., a camera incorporated into a smart phone), and the MCD 102 can be positioned so that the camera system 132 captures gestures performed by the occupant. For example, the camera system 132 can be mounted so that it faces the driver and can capture gestures by the driver. The camera system 132 may be positioned in the cabin or pointing toward the cabin and can be mounted on the ceiling, headrest, dashboard or other locations in/on the in-vehicle system 112 or MCD 102.
[0026] After capturing a physical signal, the camera system 132 outputs a data signal representing the physical signal. The format of the data signal may vary based on the type of sensor(s) that were used to capture the physical signals. For example, if a traditional camera sensor was used to capture a visual representation of the physical signal, then the data signal may be an image or a sequence of images (e.g., a video). In embodiments where a different type of sensor is used, the data signal may be a more abstract or higher-level representation of the physical signal.
[0027] The location sensors 133 are physical sensors and communication devices that output data associated with the current location and orientation of the vehicle. For example, the location sensors 133 may include a device that receives signals from a global navigation satellite system (GNSS) or an electronic compass (e.g., a teslameter) that measures the orientation of the vehicle relative to the four cardinal directions. The location sensors 133 may also operate in conjunction with the communication unit 116 to receive location data associated with connected nodes in a cellular tower or wireless network. In another embodiment, some or all of the location sensors 133 may be incorporated into the MCD 102 instead of the vehicle.
[0028] The microphone 134 captures audio signals from inside the vehicle. In one embodiment, the microphone 134 can be positioned so that it is more sensitive to sound emanating from a particular position (e.g., the position of the driver) than from other positions (e.g., other occupants). The microphone 134 can be a standard microphone that is incorporated into the vehicle, or it can be a microphone incorporated into the MCD 102. The microphone 134 can be mounted so that it captures voice signals from the driver. For example, the microphone 134 may be positioned in the cabin or pointing toward the cabin and can be mounted on the ceiling, headrest, dashboard, or other locations in/on the vehicle or MCD 102.
[0029] The POI information retrieval module 136 retrieves information related to one or more POIs based on input from the camera system 132 and (optionally) the microphone 134. After performing the search, the module 136 sends the result to the display 138 and/or speaker 140 so that the result can be provided to the user. A detailed description of the components and operation of the POI information retrieval module 136 is presented below with reference to FIGS. 2-5.
[0030] The controllable components 142 include components of the vehicle that can be controlled with gestures performed by the user. For example, the components 142 may include devices with an adjustable orientation, such as a rearview mirror, exterior side mirrors, and air conditioning outlets. The components 142 may also include physical controls that are used to control functions of the vehicle. For example, the components 142 may include buttons and knobs for controlling the air conditioning, multimedia system, or navigation system of the vehicle. The controllable components 142 may also include a screen in the vehicle that displays a gesture-controlled user interface.
[0031] Some or all of the controllable components 142 may provide the user with an additional control method that does not involve gesture recognition. For example, components with an adjustable orientation (e.g., a mirror or an air conditioning vent) may include a mechanical interface that allows the user to change the component's orientation by adjusting one or more levers.
[0032] The gesture control module 146 sends control signals to the controllable components 142 based on inputs from the camera system 132 and (optionally) the microphone 134. After receiving one or more inputs, the module 146 may provide feedback to the user via the display 138 and/or the speaker 140 to confirm that the user has performed a gesture or voice command correctly and/or to prompt the user to provide an additional input. A detailed description of the components and operation of the gesture control module 146 is presented below.
[0033] The operating environment 100 also includes output devices, such as a display 138 and a speaker 140. The display 138 receives and displays a video signal. The display 138 may be incorporated into the vehicle (e.g., an LCD screen in the central console, a HUD on the windshield), or it may be part of the MCD 102 (e.g., a touchscreen on a smartphone). The speaker 140 receives and plays back an audio signal. Similar to the display 138, the speaker 140 may be incorporated into the vehicle, or it can be a speaker incorporated into the MCD 102.
[0034] The in-vehicle hands-free telephone (HFT) controller 113 and wireless mobile communication device (MCD) 102 may communicate with each other via a short-range communication link 109 which uses short-range communication technology, such as Bluetooth® technology or other short-range communication technology, for example, Universal Serial Bus (USB). The HFT controller 113 and mobile communications device 102 may connect, or pair, with each other via the short-range communication link 109. In an embodiment, the vehicle can include a communications unit 116 that interacts with the HFT controller 113 to engage in the short-range communications, a memory unit device 114, and a processor 118. The HFT controller 113 can be part of a vehicle's telematics system which includes memory/storage, processor(s) and communication unit(s). The HFT controller 113 can utilize the vehicle's telematics unit to assist in performing various functions. For example, the communications unit 116 and/or processor 118 can be part of the vehicle's telematics unit or can be a separate unit in the vehicle.
[0035] The processors 108, 118 and/or 128 process data signals and may comprise various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown in each device in FIG. 1, multiple processors may be included in each device. The processors can comprise an arithmetic logic unit, a microprocessor, a general purpose computer, or some other information appliance equipped to transmit, receive and process electronic data signals from the memory 104, 114, 124, and other devices both shown and not shown in the figures.
[0036] Examples of a wireless mobile communication device (MCD) 102 include a cellular phone, personal digital assistant (PDA), smart phone, pocket personal computer (PC), laptop computer, tablet computer, smart watch, or other device that has a processor and communications capability and is easily transportable. The MCD 102 includes a communications unit 106, a memory unit device 104, and a processor 108. The MCD 102 also includes an operating system and can include various applications either integrated into the operating system or stored in memory/storage 104 and executed by the processor 108. In a common form, an MCD application can be part of a larger suite of vehicle features and interactions. Examples of applications include applications available for the iPhone™ that is commercially available from Apple Computer, Cupertino, California, applications for phones running the Android™ operating system that is commercially available from Google, Inc., Mountain View, California, applications for BlackBerry devices, available from Research In Motion Ltd., Waterloo, Ontario, Canada, and/or applications available for Windows Mobile devices, available from Microsoft Corp., Redmond, Washington.
[0037] In alternate embodiments, the mobile communication device 102 can be used in conjunction with a communication device embedded in the vehicle, such as a vehicle- embedded phone, a wireless network card, or other device (e.g., a Wi-Fi capable device). For ease of discussion, the description herein describes the operation of the embodiments with respect to an embodiment using a mobile communication device 102. However, this is not intended to limit the scope of the embodiments and it is envisioned that other embodiments operate using other communication systems between the in-vehicle system 112 and the network 120, examples of which are described herein.
[0038] The mobile communication device 102 and the in-vehicle system 112 may exchange information via the short-range communication link 109. The mobile communication device 102 may store information received from the in-vehicle system 112, and/or may provide the information (such as voice and/or gesture signals) to a remote processing device, such as, for example, the remote server 122, via the network 120. The remote server 122 can include, for example, a communications unit 126 to connect to the network 120, a memory/storage unit 124, and a processor 128.
[0039] In some embodiments, the in-vehicle system 112 may provide information to the mobile communication device 102. The mobile communication device 102 may use that information to obtain additional information from the network 120 and/or the server 122. The additional information may also be obtained in response to providing information with respect to a prompt on wireless mobile communication device 102 from in-vehicle system 112.
[0040] The network 120 may include a wireless communication network, for example, a cellular telephony network, as well as one or more other networks, such as, the Internet, a public-switched telephone network (PSTN), a packet-switching network, a frame-relay network, a fiber-optic network, and/or other types of networks.
PERFORMING GESTURE-BASED POINT OF INTEREST SEARCHES
[0041] FIG. 2A is a high-level block diagram illustrating components of the POI information retrieval module 136 of FIG. 1, according to one embodiment. The POI information retrieval module 136 includes a gesture recognition module 202, a voice recognition module 204, a location module 206, an input analysis module 208, a POI search module 210, a micromap search module 211, a micromap management module 212, micromap storage 214, and an information output module 216. In alternative embodiments, the POI information retrieval module 136 may include additional, fewer, or different components, and the functionality of the components 202 through 216 described herein may be distributed among components of the information retrieval module 136 in a different manner.
[0042] The gesture recognition module 202 receives a data signal from the camera system 132 and performs a gesture recognition algorithm on the received data signal to identify and interpret the gesture that was captured by the camera system 132. As described above with reference to the camera system 132, the data signal is an electronic representation of a gesture that the user performed in the vehicle. For example, the data signal may be an image of the gesture, a sequence of images, or some other representation of the gesture. In one embodiment, the gesture recognition module 202 is configured to automatically detect an identifying gesture that identifies an object exterior to the vehicle. When an identifying gesture is detected, the gesture recognition module 202 analyzes the gesture to determine a direction vector representing the direction of the gesture.
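One possible way to turn a 3D depth reconstruction of a pointing gesture into the direction vector described here is to take two reconstructed points along the pointing axis, project them onto the horizontal plane, and normalize. This Python sketch is an illustration only; the wrist/fingertip inputs and the vehicle-frame axis convention (x forward, y lateral) are assumptions, not details given by the application:

```python
import math

def direction_vector(wrist, fingertip):
    """Derive a unit 2D direction vector for a pointing gesture from two
    3D points in a hypothetical vehicle frame (x = forward, y = left,
    z = up)."""
    dx = fingertip[0] - wrist[0]
    dy = fingertip[1] - wrist[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        raise ValueError("gesture has no horizontal direction")
    # Project onto the horizontal plane spanned by the axes parallel
    # and perpendicular to the vehicle's direction of travel, and
    # normalize to unit length.
    return (dx / norm, dy / norm)
```

A gesture pointing forward and to the left, for instance, yields a vector with positive components along both axes, matching the two-axis representation of FIG. 4B.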
[0043] The voice recognition module 204 receives an output signal from the microphone 134 and performs a voice recognition algorithm on the received signal to identify voice commands received by the microphone 134. The voice recognition module 204 generates a computer-readable output representing words in the voice command. For example, the voice recognition module 204 may output the words as a character string.
[0044] The location module 206 receives data from the location sensors 133 and uses the data to determine the current location and orientation of the vehicle. If the location module 206 receives multiple types of data for determining the vehicle's current location (e.g., a combination of GNSS data and location data for connected cell towers), then the module 206 may perform averaging or some other aggregation technique to combine the data into a single location (e.g., a single set of lat/long coordinates). The location module 206 may similarly perform aggregation to combine multiple types of orientation data.
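The aggregation step described above can be sketched as a simple weighted average of position fixes. This is a minimal illustration only; the function name, the weighting scheme, and the example coordinates are assumptions, not details from the specification.

```python
# Hypothetical sketch of the location module's aggregation step: multiple
# position fixes (e.g., one from GNSS, one from cell-tower data) are
# combined into a single lat/long estimate by weighted averaging.
# The weights here are illustrative.

def aggregate_fixes(fixes):
    """Combine (lat, lon, weight) fixes into one weighted-average position."""
    total = sum(w for _, _, w in fixes)
    lat = sum(la * w for la, _, w in fixes) / total
    lon = sum(lo * w for _, lo, w in fixes) / total
    return lat, lon

# Example: a GNSS fix (high weight) and a coarser cell-tower fix (low weight).
gnss = (37.7749, -122.4194, 0.9)
cell = (37.7755, -122.4200, 0.1)
print(aggregate_fixes([gnss, cell]))
```

A production implementation would more likely weight each fix by its reported accuracy, but the averaging structure is the same.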
[0045] The input analysis module 208 receives input data from the gesture recognition module 202, the voice recognition module 204, and the location module 206 and analyzes the input data to determine a target region corresponding to an identifying gesture that was captured by the camera system 132. After determining the target region, the input analysis module 208 queries the POI search module 210 and/or the micromap search module 211 to retrieve information related to points of interest inside the target region. The operation of the input analysis module 208 is described below in greater detail.
[0046] The POI search module 210 receives a target region from the input analysis module 208 and performs a point of interest search in the target region by querying a remote server. In addition to the target region, the POI search module 210 may also receive character strings representing voice commands issued by the user. In this case, the POI search module 210 may include the character strings in the query in order to obtain more accurate results. To perform the search, the POI search module 210 may access a database on the server 122. Alternatively, the module 210 may access a service operating on a third-party server (e.g., Yelp™, Google Local).
[0047] The micromap search module 211 receives a target region from the input analysis module 208 and searches a corresponding micromap in the micromap storage 214 for information related to POIs in the target region. As used herein, a micromap is a map of a region that contains one or more POIs. Each micromap also includes information related to the POIs in the micromapped region. Since micromaps are stored locally in the micromap storage 214 in some embodiments, POI information that is stored in a micromap can be accessed with less latency than POI information that is retrieved from a remote server (e.g., by the POI search module 210). This can be particularly beneficial in regions with a high density of POIs, such as a downtown region in a major city.
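A locally stored micromap as described above can be sketched as a small container pairing a region's bounds with its POI records. The `Micromap` class, its rectangular bounds, and the square search window are all hypothetical simplifications for illustration.

```python
# Illustrative sketch of a locally stored micromap, assuming each micromap
# covers a rectangular lat/long region and carries its POI records.
# The class and field names are made up for this example.

class Micromap:
    def __init__(self, bounds, pois):
        self.bounds = bounds  # (min_lat, min_lon, max_lat, max_lon)
        self.pois = pois      # list of dicts holding POI information

    def contains(self, lat, lon):
        min_la, min_lo, max_la, max_lo = self.bounds
        return min_la <= lat <= max_la and min_lo <= lon <= max_lo

    def search(self, lat, lon, radius_deg):
        # Return POIs within a simple square window around (lat, lon).
        return [p for p in self.pois
                if abs(p["lat"] - lat) <= radius_deg
                and abs(p["lon"] - lon) <= radius_deg]

downtown = Micromap((37.77, -122.43, 37.80, -122.39),
                    [{"name": "Cafe", "lat": 37.781, "lon": -122.41}])
print(downtown.search(37.78, -122.41, 0.005))
```

Because the lookup touches only local memory, it avoids the network round trip that the POI search module 210 would incur.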
[0048] The micromap management module 212 retrieves micromaps of regions that the vehicle is likely to enter and stores the retrieved micromaps in the micromap storage 214. In one embodiment, the micromap management module 212 monitors the location, orientation, and speed of the vehicle to automatically identify micromaps for retrieval. An example process for automatically identifying micromaps in this manner is described in detail with reference to FIG. 5.
[0049] The data output module 216 receives information related to one or more POIs from one of the search modules 210, 211 and sends the information to the display 138, the speaker 140, or some other output device in the MCD 102 or the in-vehicle communications system 112. In one embodiment, the data output module 216 sends an audio representation of a portion of the information for a point of interest while showing additional information to the user via the display 138. For example, if the data output module 216 receives information related to a restaurant, the module 216 may have the speaker 140 speak out the name of the restaurant while reviews of the restaurant are sent to the display 138.
[0050] In other embodiments, some or all of the components 202 through 216 of the POI information retrieval module 136 are positioned external to the in-vehicle system 112. In one embodiment, the components 202 through 216 are implemented as an application
downloaded to the MCD 102 (e.g., applications available from iTunes). In another embodiment, the components 202 through 216 are implemented on the remote server 122, and data from the camera system 132, location sensors 133, and microphone 134 are sent over the network 120 to the remote server 122 to be analyzed.
[0051] FIG. 2B is a block diagram illustrating components of the gesture control module 146 of the in-vehicle computing system 112 of FIG. 1, according to one embodiment. The gesture control module 146 includes a gesture recognition module 252, a voice recognition module 254, a component identification module 256, a gesture angle module 258, a command generation module 260, and a command execution module 262. In alternative embodiments, the gesture control module 146 may include additional, fewer, or different components, and the functionality of the components 252 through 262 described herein may be distributed among components of the gesture control module 146 in a different manner.
[0052] The gesture recognition module 252 receives a data signal from the camera system 132 and performs a gesture recognition algorithm on the received data signal. The gesture recognition algorithm generates gesture data representing the gesture that was captured by the camera system 132. As described above with reference to the camera system 132, the data signal is an electronic representation of a gesture that the user performed in the vehicle. For example, the data signal may be an image of the gesture, a sequence of images, or some other representation of the gesture.
[0053] The gesture data generated by the gesture recognition module 252 is a high-level machine-readable representation of the gesture captured by the camera system 132. In one embodiment, the gesture data includes three-dimensional coordinates of the extremities and joints in the user's hand and forearm. For example, the gesture data may include coordinates representing the three-dimensional positions of the user's elbow, wrist, and the fingertip and knuckles of each of the user's fingers.
[0054] In another embodiment, the gesture recognition module 252 determines three- dimensional coordinates as described above and performs additional processing to determine a position of the hand, a plane representing the orientation of the hand, and the angle at which each joint is bent. In this embodiment, the gesture recognition module 252 outputs the hand position, orientation plane, and joint angles as the gesture data. For example, the gesture recognition module 252 can determine the position of the hand by calculating a midpoint between the coordinates representing the positions of the knuckles and the wrist. The orientation plane and the joint angles may be determined by performing similar arithmetic calculations on the coordinate data for the hand and forearm.
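The hand-position calculation described above (a midpoint of the knuckle and wrist coordinates) can be sketched as follows. The joint names and coordinates are assumptions for illustration only.

```python
# Minimal sketch of computing the hand position as the centroid of the
# wrist and knuckle coordinates, per the embodiment described above.
# The `joints` layout is hypothetical.

def hand_position(joints):
    """Average the 3-D coordinates of the wrist and the four knuckles."""
    pts = [joints["wrist"]] + joints["knuckles"]
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

joints = {"wrist": (0.0, 0.0, 0.0),
          "knuckles": [(2.0, 0.0, 0.0), (2.0, 1.0, 0.0),
                       (2.0, 2.0, 0.0), (2.0, 3.0, 0.0)]}
print(hand_position(joints))  # centroid of the five points
```

The orientation plane and joint angles mentioned in the text would follow from similar arithmetic (e.g., fitting a plane to the knuckle coordinates).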
[0055] The voice recognition module 254 receives an output signal from the microphone 134 and performs a voice recognition algorithm on the received signal to recognize spoken words and other audio captured by the microphone 134. The voice recognition module 254 generates and outputs voice data representing words in the audio input. Similar to the gesture data, the voice data is a high-level machine -readable representation of the audio captured by the microphone. For example, the voice data may be a character string containing words that were spoken by the user.
[0056] The component identification module 256 analyzes data from the gesture recognition module 252 and/or the voice recognition module 254 to identify a component of the vehicle. After identifying the component, the module 256 preferably outputs a component identifier. In one embodiment, the component identification module 256 analyzes gesture data representing an identifying gesture. For example, the gesture data may represent a pointing gesture directed toward one of the controllable components 142 of the vehicle. In this embodiment, the component identification module 256 stores three-dimensional coordinates representing the position of each component 142, and the module 256 identifies the component 142 by generating a line matching the direction of the pointing gesture and finding the component 142 whose coordinates are closest to the line. In some embodiments, processing may continue even when no such component identifier is output.
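The closest-to-the-line lookup above reduces to a point-to-ray distance computation. The sketch below illustrates this under stated assumptions: the component names, their coordinates, and the cabin coordinate frame are all invented for the example.

```python
import math

# Hypothetical stored positions of two controllable components 142.
components = {"rearview_mirror": (0.0, 1.0, 0.5),
              "right_vent": (0.5, 0.2, 0.3)}

def dist_to_ray(point, origin, direction):
    """Perpendicular distance from `point` to the ray origin + t*direction, t >= 0."""
    d = [p - o for p, o in zip(point, origin)]
    norm = math.sqrt(sum(c * c for c in direction))
    u = [c / norm for c in direction]
    t = max(0.0, sum(a * b for a, b in zip(d, u)))  # clamp points behind the origin
    closest = [o + t * c for o, c in zip(origin, u)]
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(point, closest)))

def identify(origin, direction):
    """Return the component whose stored coordinates lie closest to the pointing ray."""
    return min(components, key=lambda n: dist_to_ray(components[n], origin, direction))

print(identify((0.0, 0.0, 0.0), (0.0, 1.0, 0.5)))  # → rearview_mirror
```

Clamping `t` at zero keeps components behind the pointing hand from matching, which seems a reasonable (though unstated) refinement.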
[0057] In another embodiment, the component identification module 256 analyzes voice data representing a voice command. For example, if the user speaks the name of the component 142 that the user wishes to control, then the received voice data is a character string containing the name that was spoken. In this embodiment, the component
identification module 256 stores a name for each component 142 and identifies the component 142 by matching the voice data to the closest stored name. In still another embodiment, the component identification module 256 receives a combination of gesture data and voice data and analyzes both types of data to identify a component 142. For example, the user may speak the name of a component 142 while pointing at the component 142.
[0058] The gesture angle module 258 analyzes gesture data from the gesture recognition module 252 to measure one or more gesture angles associated with a gesture performed by the user. In one embodiment, the gesture angle module 258 first establishes a reference position of the gesture (e.g., the starting position of a hand or finger) and measures one or more gesture angles as the hand or finger is tilted relative to the reference position. The operation of the gesture angle module is described in greater detail below.
[0059] The command generation module 260 generates a command for a component based on a component identifier from the component identification module 256 and one or more gesture angles from the gesture angle module 258. The command is a high-level instruction to adjust the identified component in a particular manner. In one embodiment, the command includes a function and one or more parameters for the function. For example, in a command to rotate the right side mirror to a particular orientation, the function is to rotate the right side mirror, and the parameters are the angles defining the desired orientation of the mirror.
[0060] In an embodiment where the command includes a function and one or more parameters, the command generation module 260 may calculate the parameters based on the gesture angles. For example, in a command to rotate the side mirror, the module 260 may calculate parameters that cause the orientation of the side mirror to mimic the orientation of the user's hand (as defined by the gesture angles). Meanwhile, the module 260 selects the function based on the component identifier. For example, the module 260 would select a function to rotate the right side mirror if it receives an identifier for the right side mirror.
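The function-plus-parameters command structure can be sketched as below. The function names, the component-to-function mapping, and the degree units are assumptions for illustration, not details from the specification.

```python
# Illustrative sketch of the command generation module: a command pairs a
# function (selected from the component identifier) with parameters computed
# from the measured gesture angles. All names here are hypothetical.

def generate_command(component_id, gesture_angles):
    functions = {"right_side_mirror": "rotate_right_mirror",
                 "rearview_mirror": "rotate_rearview_mirror"}
    # Parameters mimic the orientation of the user's hand: one angle per axis.
    return {"function": functions[component_id],
            "params": {"horizontal_deg": gesture_angles[0],
                       "vertical_deg": gesture_angles[1]}}

cmd = generate_command("right_side_mirror", (12.5, -4.0))
print(cmd["function"], cmd["params"])
```

In the pinch-gesture variant described below, the parameter would instead be derived from the thumb-to-forefinger distance rather than from gesture angles.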
[0061] In another embodiment, the command generation module 260 preferably receives gesture data directly from the gesture recognition module 252 either in addition to or in place of receiving gesture angles from the gesture angle module 258. In this embodiment, the module 260 may select the function based on a combination of the component identifier, the gesture data, and the gesture angles. For example, suppose the module 260 receives an identifier for an air conditioning vent. If the module 260 also receives a gesture angle (thus indicating that the user has tilted his hand), it selects a function to adjust the direction of the vent and calculates parameters that represent the orientation of the user's hand.
Alternatively, if the module 260 receives gesture data representing a pinch gesture between the user's thumb and forefinger, then it selects a function to adjust the flow rate through the identified vent and calculates a parameter representing the distance between the thumb and forefinger. The parameter is then used to set the new flow rate of the vent. The ability to select a function based on a combination of the component identifier and the gesture beneficially allows a user to perform gestures to control multiple aspects of the same component 142.
[0062] The command execution module 262 receives a command from the command generation module 260 and sends control signals to the identified component to cause the component to perform the command. The control signals directly control devices that perform the command. For example, if the command is to rotate the right side mirror to a particular orientation, as described above, the command execution module 262 sends control signals to motors that adjust the orientation of the mirror.

[0063] In other embodiments, some or all of the modules 252 through 262 of the gesture control module 146 are positioned external to the in-vehicle system 112. In one embodiment, the modules 252 through 262 are implemented as an application downloaded to the MCD 102 (e.g., applications available from iTunes). In another embodiment, the modules 252 through 262 are implemented on the remote server 122, and data from the camera system 132 and microphone 134 are sent over the network 120 to the remote server 122 to be analyzed.
[0064] FIG. 3 is a flow chart illustrating a process for retrieving information about a POI based on an identifying gesture, according to one embodiment. For ease of discussion, the process 300 shown in FIG. 3 will be described below in conjunction with the example shown in FIGS. 4A-4D. The process 300 begins when a user performs an identifying gesture inside a vehicle. The identifying gesture is directed toward the exterior of the vehicle to identify an object outside the vehicle and request information about the object. For example, the user shown in FIG. 4A is performing an identifying gesture to request information for a building near the vehicle 402 by pointing at the building with an outstretched arm and forefinger. As the user performs the identifying gesture, the camera system 132 captures 302 the identifying gesture and sends a data signal representing the gesture to the gesture recognition module 202.
[0065] After receiving the data signal from the camera system 132, the gesture recognition module 202 performs 304 gesture recognition on the data signal to determine a direction vector corresponding to the identifying gesture. In one embodiment, the gesture recognition module 202 uses depth information in the data signal to generate a 3D depth reconstruction of the identifying gesture. The 3D depth reconstruction is then used to determine the direction vector 404. An example direction vector 404 is illustrated in FIG. 4B. In the illustrated example, the direction vector 404 is a two-dimensional vector that represents the direction of the identifying gesture relative to a pair of axes 406 that are parallel and perpendicular to the vehicle's direction of travel.
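Deriving the two-dimensional direction vector relative to axes parallel and perpendicular to the direction of travel can be sketched as a rotation into the vehicle frame. The wrist/fingertip inputs, the heading convention (0 degrees along the vehicle's forward axis), and the example values are all assumptions.

```python
import math

# Hedged sketch: project a pointing direction (two points along the
# forearm/finger, recovered from the 3-D reconstruction) onto axes parallel
# and perpendicular to the vehicle's direction of travel.

def direction_vector(wrist, fingertip, vehicle_heading_deg):
    # World-frame pointing direction in the horizontal plane.
    dx, dy = fingertip[0] - wrist[0], fingertip[1] - wrist[1]
    # Rotate into the vehicle frame (x = forward, y = lateral).
    h = math.radians(vehicle_heading_deg)
    fwd = dx * math.cos(h) + dy * math.sin(h)
    side = -dx * math.sin(h) + dy * math.cos(h)
    norm = math.hypot(fwd, side)
    return (fwd / norm, side / norm)

# Vehicle heading 0 deg in this frame; user points 45 deg off the forward axis.
print(direction_vector((0.0, 0.0), (1.0, 1.0), 0.0))
```

The resulting unit vector corresponds to the direction vector 404 drawn against the axes 406 in FIG. 4B.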
[0066] In addition to the outstretched arm and forefinger gesture shown in the example of FIG. 4A, the gesture recognition module 202 may also be configured to recognize other types of identifying gestures and determine a direction vector upon detecting one of these other gestures. For example, the gesture recognition module 202 may also be configured to determine a direction vector for an identifying gesture comprising an outstretched arm without a specific arrangement of fingers, or an identifying gesture comprising a hand on the steering wheel with an outstretched finger pointing at an exterior object.
[0067] The user may optionally issue a voice command to provide additional information about the object being identified. In the example illustrated in FIG. 4A, the user says "building" while performing the identifying gesture to indicate that he is pointing at a building. A voice command may be particularly helpful in situations where the object being identified (e.g., a building) is adjacent to a different type of object (e.g., a park). Although the user shown in FIG. 4A issues the voice command at the same time as he performs the identifying gesture, the user may alternatively issue a voice command before or after performing the identifying gesture.
[0068] If the microphone 134 captures a voice command with the identifying gesture, then the voice recognition module 204 analyzes the voice command to generate a computer-readable representation of the command. In the example of FIG. 4A, the voice recognition module 204 generates the character string "building" after receiving the corresponding audio signal from the microphone 134.
[0069] Meanwhile, the location module 206 receives data from the location sensors 133 to determine 306 the current location and orientation of the vehicle. In one embodiment, the location module 206 polls the location sensors 133 to determine a current location and orientation only after detecting that the user has performed an identifying gesture. In another embodiment, the location module 206 polls the location sensors 133 at regular intervals to maintain a constantly updated location and orientation for the vehicle.
[0070] Next, the input analysis module 208 receives the direction vector and the current location and orientation of the vehicle and generates 308 a target region that is likely to contain the object that the user identified. In one embodiment, the target region is generated to align with the direction vector. In the example shown in FIG. 4B, the target region 408 has an elongated triangular shape that follows the direction of the direction vector 404 and has one corner anchored at the location of the vehicle 402. The triangular shape shown in the example of FIG. 4B is a particularly convenient shape because it can be defined as a geo-fence with three pairs of lat/long coordinates (i.e., defining the vertices of the triangle).
Alternatively, the target region may be some other shape that corresponds to the direction vector. In one embodiment, the input analysis module 208 also uses the current speed of the vehicle when determining the target region. For example, the target region can extend farther from the vehicle when the vehicle is traveling at a faster speed.
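Constructing the triangular geo-fence, with one vertex at the vehicle and a length that grows with speed, can be sketched as below. The flat-earth lat/long conversion and every numeric constant (base length, speed scaling, half-angle) are illustrative assumptions, not values from the specification.

```python
import math

# Sketch of building the triangular target region as a three-vertex
# geo-fence. The scaling constants and the crude meters-per-degree
# conversion are assumptions for illustration.

def target_region(vehicle, bearing_deg, speed_mps,
                  base_len_m=200.0, len_per_mps=20.0, half_angle_deg=10.0):
    lat, lon = vehicle
    length = base_len_m + len_per_mps * speed_mps  # extends farther when faster
    m_per_deg = 111_000.0                          # rough meters per degree
    verts = [vehicle]                              # corner anchored at the vehicle
    for da in (-half_angle_deg, half_angle_deg):
        b = math.radians(bearing_deg + da)
        verts.append((lat + (length * math.cos(b)) / m_per_deg,
                      lon + (length * math.sin(b)) / m_per_deg))
    return verts  # three (lat, lon) pairs defining the geo-fence

tri = target_region((37.7749, -122.4194), bearing_deg=90.0, speed_mps=15.0)
print(tri)
```

The two far vertices straddle the direction vector, so the triangle narrows toward the vehicle exactly as the region 408 in FIG. 4B does.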
[0071] After generating 308 the target region, the input analysis module 208 accesses the micromap storage 214 to determine 310 whether the target region overlaps with any micromaps that have been stored in the micromap storage 214. If the target region does not overlap with any micromapped regions, then the input analysis module 208 sends the target region to the POI search module 210, and the POI search module 210 performs a search to retrieve 312 information for the POI that was identified with the identifying gesture. As described above with reference to FIG. 2, the POI search module 210 may access the remote server 122 or a third-party service (e.g., Yelp™, Google Local) to perform a POI search in the target region. The POI information may include, for example, a name for the POI, a short description, images, hours of operation, contact information, ratings and reviews, and other information.
[0072] If a voice command was received with the gesture, then the input analysis module 208 also passes a character string representing the voice command to the POI search module 210 so that the character string can be used to narrow the results of the POI search. For example, the POI search module 210 would perform a search for the term "building" within the target region 408 after receiving the inputs shown in FIG. 4A.
[0073] Since the user typically performs the identifying gesture with the intention of retrieving information about a single POI, the input analysis module 208 may use an iterative process to adjust the target region until the POI search module 210 returns a single POI. For example, a triangular target region (e.g., the example target region 408 shown in FIG. 4B) can be adjusted by changing the length of the triangle in the direction of the direction vector 404 and/or changing the angle of the triangle at the corner corresponding to the vehicle 402. Thus, if the POI search finds multiple POIs, the input analysis module 208 may iteratively decrease the size of the target region until one POI is found. Similarly, if the search does not return any POIs, the input analysis module 208 may iteratively increase the size of the target region until a POI is returned. The single POI is then sent to the data output module 216 to be provided to the user. Alternatively, the input analysis module 208 may merely use the iterative process to reduce the number of POIs but still send multiple POIs to the data output module 216. This may be useful in cases where there is uncertainty over which POI the user was attempting to identify. For example, if there are two buildings in close proximity to each other in the example target region 408 of FIG. 4B, then the input analysis module 208 may send both POIs to the data output module 216.
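The iterative narrowing described above can be sketched as a simple loop that widens the region when nothing is found and tightens it when several POIs match. The scaling factors, the retry budget, and the toy search function are assumptions for illustration.

```python
# Hedged sketch of the iterative region adjustment: grow the region when no
# POI is found, shrink it when several are, and stop at one result (or when
# a retry budget runs out). `search` stands in for the POI search module.

def narrow_search(search, length, angle, max_iters=10):
    for _ in range(max_iters):
        pois = search(length, angle)
        if len(pois) == 1:
            return pois
        if not pois:                       # too narrow: widen and lengthen
            length, angle = length * 1.5, angle * 1.5
        else:                              # too many hits: tighten the region
            length, angle = length * 0.75, angle * 0.75
    return pois                            # may still hold several candidates

# Toy search: pretends POIs sit at distances 100 m and 400 m along the vector.
def toy_search(length, angle):
    return [d for d in (100.0, 400.0) if d <= length]

print(narrow_search(toy_search, length=500.0, angle=20.0))
```

Returning the remaining candidates when the budget runs out matches the fallback in the text, where multiple POIs may still be sent to the data output module 216.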
[0074] If the input analysis module 208 determines that the target region overlaps with a micromapped region, then the input analysis module 208 sends the target region to the micromap search module 211 so that it can search the corresponding micromap to retrieve 314 POI information for the identified POI. The input analysis module 208 and the micromap search module 211 may operate in conjunction to perform an iterative process similar to the process described above with reference to the POI search module 210 to narrow the POI information that is sent to the data output module 216. Since the micromap is stored locally, the iterative process can be performed more quickly. In addition, the micromap search module 211 can advantageously perform a search in a locally stored micromap in situations where the communication links 105, 107 to the network 120 are unreliable or unavailable. Micromaps are also advantageous because they provide increased granularity in identifying and localizing POIs, and such POIs may reference various types of establishments. With localization, micromapping also enables more accurate reconstruction. In one embodiment, the range of reconstruction is limited to the range of micromapped objects. Hence, using a micromap may also change the range and overall number of accessible POIs. In one embodiment, the input analysis module 208 retrieves POI information from both search modules 210, 211 in parallel and merges the two sets of retrieved POI information into a single set of results. For example, the input analysis module 208 uses an artificial intelligence unit to merge the retrieved POI information.
[0075] After receiving the POI information from the input analysis module 208, the data output module 216 provides 316 the POI information to the user using the various output devices in the vehicle or the MCD 102. The data output module 216 may be configured to use one output device to output a portion of the POI information and use a different output device to output additional information. For example, the name of a POI may be spoken out to the user using the speaker 140 (shown in FIG. 4C) while the display 138 is used to show more detailed information about the POI, such as a description, photos, and contact information (shown in FIG. 4D).
[0076] In the example shown in FIGS. 4C-4D, the data output module 216 is merely outputting information for a single POI. However, if the input analysis module 208 sends information for multiple POIs to the data output module 216, the data output module 216 may first output a list of POIs (e.g., by showing a visual interface on the display 138 or by using the speaker 140 to speak out the names of the POIs). The user can then select a POI from the list (e.g., by performing a pointing gesture at the display 138 or speaking out a voice command) to view additional information for the POI.
[0077] In another embodiment, the gesture recognition module 202 determines 304 a three-dimensional direction vector instead of a two-dimensional vector, and the rest of the process 300 is expanded into three dimensions. Thus, the location module 206 also determines 306 the vehicle's altitude, and the input analysis module 208 generates 308 a three-dimensional target region (e.g., a cone). Using a three-dimensional process 300 can beneficially provide more accurate POI information in locations where multiple POIs have the same lat/long coordinates but are located at different altitudes. For example, suppose the vehicle is driving through a city. If the user points toward the top of a skyscraper, the POI information retrieval module 136 would provide information about the skyscraper's observation deck instead of information about restaurants in the skyscraper's lobby.
[0078] FIG. 5 is a flow chart illustrating a process 500 for maintaining micromaps in the gesture-based POI search system, according to one embodiment. The micromap management module 212 begins by obtaining 502 the current speed, orientation, and location of the vehicle. The location and orientation of the vehicle may be obtained from the location module 206, as described above with reference to FIG. 2. In one embodiment, the micromap management module 212 obtains the vehicle's speed by accessing a component of the vehicle that directly measures its speed (e.g., a speedometer). Alternatively, the micromap management module 212 may determine the vehicle's speed by analyzing the location data of the vehicle over a known period of time.
[0079] Next, the micromap management module 212 analyzes the speed, orientation, and location data of the vehicle to identify 504 upcoming micromapped regions that the vehicle is likely to pass by or travel through. In one embodiment, the module 212 generates a retrieval region in front of the vehicle based on the vehicle's speed, orientation, and location, and any micromapped regions inside the retrieval region are identified as upcoming micromapped regions. The retrieval region may have a triangular shape similar to the target region described above with reference to FIGS. 3 and 4B. Alternatively, the retrieval region may have some other shape (e.g., a corridor centered on the road that the vehicle is currently traveling on, an ellipse that extends in front of the vehicle, etc.).
[0080] After identifying 504 one or more upcoming micromapped regions, the micromap management module retrieves 506 the corresponding micromaps from the remote server 122. As described above with reference to the micromap search module 211, a micromap is a map of a region that contains one or more POIs and contains information related to the POIs in the micromapped region. The POI information in a micromap may include, for example, a name for the POI, a short description, images, hours of operation, contact information, ratings and reviews, performance schedules, and other information. The retrieved micromaps are then stored 508 in the micromap storage 214 so that they can be rapidly accessed when the user performs an identifying gesture for a POI in one of the stored micromaps.
[0081] In addition to automatically adding micromaps according to the process 500 described above, the micromap management module 212 may also be configured to delete micromaps from the micromap storage 214. In one embodiment, the module 212 deletes micromaps based on a similar analysis of the vehicle's speed, orientation, and location. For example, the micromap management module 212 may automatically delete a micromap if the vehicle is moving away from the corresponding region. In another embodiment, the module 212 may delete a micromap if the micromap has not been accessed for a certain period of time.
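The time-based deletion policy can be sketched as a simple eviction pass over the micromap storage. The storage layout, the timestamp bookkeeping, and the idle threshold are hypothetical details added for illustration.

```python
# Hypothetical sketch of the deletion policy: evict any stored micromap that
# has not been accessed within `max_idle_s` seconds. Timestamps are supplied
# by the caller so the example stays deterministic.

def evict_stale(storage, last_access, now, max_idle_s=3600.0):
    for name in list(storage):                   # copy keys: we mutate the dict
        if now - last_access.get(name, 0.0) > max_idle_s:
            del storage[name]
    return storage

storage = {"downtown": "map data", "old_suburb": "map data"}
last_access = {"downtown": 9_500.0, "old_suburb": 1_000.0}
print(sorted(evict_stale(storage, last_access, now=10_000.0)))
```

The location-based deletion variant (evicting maps the vehicle is moving away from) could reuse the corridor test from the retrieval step, inverted.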
[0082] FIG. 6 is a flow chart illustrating a process 600 for selecting a component of the vehicle and controlling the component 142 with a gesture, according to one embodiment. For ease of discussion, the process 600 shown in FIG. 6 will be described below in conjunction with the example shown in FIGS. 7A-7D.
[0083] The process 600 begins when the user performs a selecting input to identify one of the controllable components 142. The selecting input can be any combination of voice input, gesture input, and any other user input that can be captured by input devices within the vehicle. In the example shown in FIG. 7A, the selecting input includes a voice command 702 with the name of the component and a pointing gesture 704 directed toward the component. Although the pointing gesture 704 shown in FIG. 7A includes the user's entire arm, a pointing gesture 704 may alternatively be a different gesture that defines a direction. For example, the user may perform a pointing gesture 704 with a single finger while keeping the rest of his hand on the steering wheel.
[0084] The input devices in the vehicle capture the selecting input and send signals representing the selecting input to the gesture control module 146, where the signals are received 602 by the gesture recognition module 252 and the voice recognition module 254. As described above with reference to FIG. 2B, the gesture recognition module 252 performs gesture recognition on a data signal received from the camera system 132. Meanwhile, the voice recognition module 254 performs voice recognition on a voice signal received from the microphone 134.
[0085] The component identification module 256 receives data representing the selecting input (e.g., the gesture data and voice data) and analyzes the data to identify 604 the selected component. As described above with reference to the component identification module 256, the module 256 outputs a component identifier after identifying the component.
[0086] In one embodiment, the in- vehicle computing system 112 outputs a confirmation signal using the display 138 or the speaker 140 after identifying the component. The confirmation signal indicates to the user that the component has been successfully identified and that the user can proceed to perform a gesture to control the component. The confirmation signal may also indicate a function that will be executed after the user performs the gesture. In the example shown in FIG. 7B, the speakers 140 play back an audio confirmation signal 706 to indicate that the rearview mirror has been selected and that the user can begin performing a gesture to adjust the orientation of the mirror. Although not shown in FIG. 7B, the system 112 may additionally be configured to show an image or animation on the display 138 to convey similar information (e.g., an image of the rearview mirror surrounded by arrows).
[0087] In this embodiment, the system 112 may also be configured to receive and process an input from the user indicating that an incorrect component was identified. For example, the system 112 reverts to step 602 if the user performs a voice command to say "incorrect component" after the confirmation signal is output. This allows the user to confirm that the correct component was identified before performing a gesture to control the component, which is beneficial because it prevents the user from accidentally performing a gesture to control the wrong component.
[0088] After the component is successfully identified, the user performs a gesture within the capture region of the camera system 132 so that the camera system 132 can capture the gesture. As shown in the example of FIG. 7C, the gesture 708 can include an angular motion in the horizontal direction 709A and/or the vertical direction 709B. The gesture recognition module 252 receives 606 a data signal representing the gesture and performs 608 gesture recognition on the data signal.
[0089] The gesture angle module 258 and the command generation module 260 operate together to determine 610 a command corresponding to the gesture based on the component identifier and the gesture data. As described above, a command contains a function and one or more parameters. For example, the function generated in the example illustrated in FIGS. 7A-7D is to rotate the rearview mirror (because the rearview mirror is the identified component), while the parameters are angles defining the desired orientation of the mirror.
[0090] In one embodiment, the gesture angle module 258 analyzes the gesture data to measure one or more gesture angles, and the command generation module 260 uses the gesture angles to generate the parameters. For example, the command generation module 260 may generate angles that cause the mirror to rotate in a manner that mimics the movement of the user's hand.
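By way of an editorial sketch (not part of the original disclosure), the mapping from measured gesture angles to a command with a function and parameters can be illustrated as follows; the function name and command structure are assumptions:

```python
def make_mirror_command(h_angle_deg, v_angle_deg, sensitivity=1.0):
    # Hypothetical command structure: a function identifier plus
    # parameters derived from the measured gesture angles, so that the
    # mirror's rotation mimics the movement of the user's hand.
    return {
        "function": "rotate_mirror",
        "parameters": {
            "horizontal_deg": h_angle_deg * sensitivity,
            "vertical_deg": v_angle_deg * sensitivity,
        },
    }

# A 10-degree horizontal tilt and a 5-degree downward tilt of the hand
# become a matching rotation command for the identified mirror.
cmd = make_mirror_command(10.0, -5.0)
```

The command execution module would then translate such a command into motor control signals, as described in paragraph [0092].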
[0091] In another embodiment, the command generation module 260 also receives gesture data directly from the gesture recognition module 252. In this embodiment, the module 260 may use the gesture data to calculate one or more parameters without measuring any gesture angles. For example, the module 260 may calculate a parameter based on a pinching gesture performed by the user.
[0092] The command execution module 262 executes 612 the command by generating control signals for the appropriate devices. In the example of FIG. 7D, the command execution module 262 generates control signals for the motors that cause the rearview mirror 710 to rotate in the horizontal and vertical directions 711 A, 711B.
[0093] In one embodiment, the process of executing a command based on a gesture (i.e., steps 606 through 612) operates in real-time. For example, as the user performs the tilting gesture 708 shown in FIG. 7C, the rearview mirror 710 shown in FIG. 7D moves
simultaneously in order to mimic the changing orientation of the user's hand. This beneficially provides the user with real-time feedback as the command is being executed so that the user can make more accurate adjustments. For example, if the user accidentally tilts his hand too far and causes the mirror 710 to rotate farther than desired, the user can simply tilt his hand in the opposite direction until the mirror 710 reaches its desired position.
[0094] In one embodiment, the gesture control module 146 presents the user with an option to invert the identified component's direction of motion relative to the user's hand gestures. For example, in order to have the rearview mirror 710 mimic the motion of the user's hand, the user can configure the module 146 to tilt the mirror 710 upward when the user tilts his hand upward. Alternatively, the user can configure the module 146 to tilt the rearview mirror 710 downward when the user tilts his hand upward to give the illusion that the user's hand defines the normal vector of the mirror 710. The motion of a component in the horizontal direction can similarly be inverted in this manner. The option to invert a direction of motion is beneficial because different users will find different settings to be more intuitive.
[0095] The gesture control module 146 may additionally present the user with an option to adjust the sensitivity used when controlling a component. For example, when the user tilts his hand by 10 degrees while performing a gesture, the component can be configured to rotate by 5 degrees, 8 degrees, 10 degrees, 15 degrees, or some other angular displacement.
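The inversion and sensitivity options described in paragraphs [0094] and [0095] amount to a simple linear mapping from hand angle to component angle. A minimal sketch (the function name and defaults are assumptions, not from the disclosure):

```python
def map_gesture_angle(gesture_angle_deg, sensitivity=1.0, invert=False):
    # Scale the hand's angular displacement by a user-configured
    # sensitivity factor, optionally inverting the direction of motion.
    angle = gesture_angle_deg * sensitivity
    return -angle if invert else angle

half = map_gesture_angle(10.0, sensitivity=0.5)   # 5.0 degrees
mirrored = map_gesture_angle(10.0, invert=True)   # -10.0 degrees
```

With `invert=False` the component mimics the hand; with `invert=True` a 10-degree upward hand tilt produces a 10-degree downward component tilt, matching the "normal vector" illusion described above.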
[0096] Although the process 600 of FIG. 6 was described with reference to an example in which the rearview mirror was adjusted, the process 600 can be used to control a wide range of components within the vehicle. For example, the user can adjust the volume of a particular speaker in the vehicle's sound system by identifying the speaker (e.g., by pointing at the speaker or issuing a voice command such as "passenger door speaker") and performing a gesture to indicate a desired volume level. The gesture can be a pinching motion, a twirling motion performed with a finger (to simulate rotating a real-life volume knob), a tilting motion performed with a hand (e.g., the motion shown in FIG. 7C), or some other motion that can be recognized by the gesture recognition module 252.
[0097] In another example, the user can navigate a user interface on the display 138 by identifying the display and performing gestures. In this example, the gestures may control a cursor or some other position indicator on the user interface. Alternatively, the gestures may be used to navigate a menu structure. For example, the system may be configured to move between items in the same level of the menu structure when the user tilts his hand up or down, move to a higher level when the user tilts his hand to the left, and select a menu item when the user tilts his hand to the right.
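The menu-navigation scheme in paragraph [0097] (tilt up/down to move within a level, left to move to a higher level, right to select) can be sketched as a small state machine. The menu contents and function names below are illustrative assumptions:

```python
MENU = {
    "Audio": {"Volume": None, "Source": None},
    "Climate": {"Fan speed": None, "Temperature": None},
}

def navigate(menu, path, index, tilt):
    # path: keys leading to the current menu level; index: the
    # highlighted item. Tilting up/down moves within the level, left
    # moves to a higher level, right descends into a submenu or
    # selects a leaf item. Returns (path, index, selection_or_None).
    level = menu
    for key in path:
        level = level[key]
    items = list(level)
    if tilt == "down":
        index = (index + 1) % len(items)
    elif tilt == "up":
        index = (index - 1) % len(items)
    elif tilt == "left" and path:
        path, index = path[:-1], 0
    elif tilt == "right":
        chosen = items[index]
        if isinstance(level[chosen], dict):
            path, index = path + [chosen], 0
        else:
            return path, index, chosen  # leaf item selected
    return path, index, None
```

For example, tilting right twice from the top level descends into "Audio" and then selects "Volume".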
[0098] Since the process 600 described with reference to FIG. 6 begins with steps 602 and 604 for identifying a component, the same gesture performed in the same capture area can be used to issue different commands to different components. For example, depending on the component that was selected, the same hand tilting gesture 708 (shown in FIG. 7C) can be used to control the rearview mirror 710, one of the side mirrors, a user interface on the display 138, or one of the air conditioning vents. This improves the ease of use of the gesture control system described herein because the user does not have to learn a separate set of gestures to control each component of the car.
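Because a component is identified before the gesture is interpreted (steps 602 and 604), determining a command is effectively a dispatch on the component identifier. A hedged sketch, with made-up component identifiers and scale factors:

```python
# Hypothetical handlers: the same hand-tilt angle yields a different
# command depending on which component was identified first.
GESTURE_HANDLERS = {
    "rearview_mirror": lambda deg: ("rotate_mirror", deg),
    "display":         lambda deg: ("move_cursor", deg * 4.0),  # assumed px/degree
    "ac_vent":         lambda deg: ("aim_vent", deg),
}

def command_for(component_id, tilt_angle_deg):
    # Dispatch the recognized gesture to the handler registered for
    # the previously identified component.
    return GESTURE_HANDLERS[component_id](tilt_angle_deg)
```

The same 10-degree tilt thus rotates the mirror when the mirror was selected, but moves a cursor when the display was selected, without requiring the user to learn a separate gesture per component.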
MEASUREMENT OF GESTURE ANGLES
[0099] FIG. 8 is a flow chart illustrating a process 800 for measuring gesture angles, according to one embodiment. The process 800 begins when the gesture angle module 258 uses gesture data from the gesture recognition module 252 to determine 802 a reference position of the gesture. The reference position is the initial position of the user's hand and forearm within the capture region of the camera system 132, and gesture angles are measured relative to the reference position. The gesture angle module 258 saves the reference position (e.g., by storing the gesture data representing the reference position) to be used later in the process 800 to measure the gesture angles.
[00100] After the user begins performing a gesture (e.g., by tilting his hand), the gesture angle module 258 determines 804 a current position of the gesture by analyzing updated gesture data from the gesture recognition module 252. The current position is the
instantaneous position of the user's hand and forearm at some point in time after the reference position was determined 802.
[00101] After determining 804 the current position of the gesture, the gesture angle module 258 measures 806, 808, 810 gesture angles by comparing the current position to the reference position. A description of how gesture angles are measured in different spatial directions is described below with reference to FIGS. 9A-9C.
[00102] FIGS. 9A-9C illustrate examples of gesture angles in three spatial directions. For ease of description, the examples in FIGS. 9A-9C are illustrated with respect to a set of three- dimensional axes that are used consistently throughout the three figures. The same set of axes are also shown in the example gesture 708 of FIG. 7C. In the examples illustrated in FIGS. 9A-9C, the gesture angles are rotational displacements of the hand in three spatial directions. However, the gesture angles may also be measured in a different manner. For example, instead of measuring a rotational displacement of the entire hand, the gesture angle module 258 may measure rotational displacement of one or more outstretched fingers (e.g., a curling motion performed by the index finger). Alternatively, the module 258 may measure rotational displacement of the user's entire forearm.
[00103] FIG. 9A illustrates an example of a horizontal gesture angle 902. The gesture angle module 258 measures 806 the horizontal gesture angle 902 by determining an angular displacement in the x-z plane between the reference position and the current position.
Similarly, FIG. 9B illustrates an example of a vertical gesture angle 904, and the gesture angle module 258 measures 808 the vertical gesture angle 904 by determining an angular displacement in the y-z plane.
[00104] In one embodiment, the gesture angle module 258 measures 806, 808 the horizontal and vertical gesture angles by calculating a centerline of the reference position and a centerline of the current position. Each centerline can be calculated, for example, by drawing a line from the middle of the wrist to the tip of the middle finger (known in the art as the proximo-distal axis). To measure 806 the horizontal gesture angle 902, the two centerlines are projected onto the x-z plane and the gesture angle module 258 determines the angle between the two projections. Similarly, the vertical gesture angle 904 can be measured 808 by projecting the two centerlines onto the y-z plane and determining the angle between the two projections.
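The projection-based measurement in paragraph [00104] can be sketched as follows: each centerline is treated as a 3-D vector, projected onto a plane by dropping one axis, and the gesture angle is the angle between the two projections. (The vector representation and axis convention are editorial assumptions.)

```python
import math

def angle_between_projections(ref, cur, drop_axis):
    # Project two centerline vectors (x, y, z) onto a plane by dropping
    # one axis (0 = x, 1 = y, 2 = z), then return the signed angle in
    # degrees between the two projections.
    a, b = [i for i in range(3) if i != drop_axis]
    ref_angle = math.atan2(ref[b], ref[a])
    cur_angle = math.atan2(cur[b], cur[a])
    return math.degrees(cur_angle - ref_angle)

# Horizontal gesture angle 902: project onto the x-z plane (drop y).
ref = (0.0, 0.0, 1.0)  # hand initially pointing straight ahead along z
cur = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
h = angle_between_projections(ref, cur, drop_axis=1)  # magnitude 30 degrees
```

The vertical gesture angle 904 follows the same computation with the x axis dropped (projection onto the y-z plane).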
[00105] FIG. 9C illustrates an example of a rotational gesture angle 906. The gesture angle module 258 measures 810 the rotational gesture angle 906 by determining a rotational displacement about an axis along the length of the user's hand (shown in FIGS. 9A-9C as the z-axis). In one embodiment, the gesture angle module 258 measures 810 the rotational gesture angle 906 by measuring a change in orientation of a plane representing the palm of the user's hand.

ADDITIONAL CONSIDERATIONS
[00106] Although the description herein is presented with reference to an in-vehicle computing system 112, the systems and processes described in this specification may also be implemented in mobile devices such as smartphones and tablet computers
independently of a vehicle. For example, a magnetometer and location module integrated into a mobile device can be used to determine the location, speed, and orientation of a mobile device, while a camera in the mobile device can be used to capture the identifying gesture.
[00107] Reference in the specification to "one embodiment" or to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment. The appearances of the phrase "in one embodiment" or "an embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
[00108] Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations or transformation of physical quantities or representations of physical quantities as modules or code devices, without loss of generality.
[00109] However, all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device (such as a specific computing machine), that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[00110] Certain aspects of the embodiments include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the embodiments could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems. The embodiments can also be in a computer program product which can be executed on a computing system.
[00111] The embodiments also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, e.g., a specific computer, or it may comprise a general-purpose computer selectively activated or
reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. The memory/storage can be transitory or non-transitory. Memory can include any of the above and/or other devices that can store information/data/programs. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
[00112] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the method steps. The structure for a variety of these systems will appear from the description below. In addition, the embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the embodiments as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode.
[00113] In addition, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the embodiments, which are set forth in the claims.
[00114] Upon reading this disclosure, those of skill in the art will appreciate still additional alternative methods and systems for performing a gesture-based POI search. Thus, while particular embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus of the present invention disclosed herein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A vehicle-based computer-implemented method for retrieving information associated with a point of interest (POI), the method comprising:
receiving, at a computing system, a data signal representing an identifying gesture performed by a user inside a vehicle, the identifying gesture oriented in a direction and identifying an object exterior to the vehicle;
performing gesture recognition on the data signal to determine a direction vector representing the direction of the identifying gesture;
accessing location data identifying a current location and orientation of the vehicle;
analyzing the direction vector and the location data to generate a target region corresponding to the object identified by the identifying gesture;
retrieving information associated with one or more points of interest located in the target region; and
providing the retrieved information to the user, the retrieved information including information associated with the object identified by the identifying gesture.
2. The computer-implemented method of claim 1, wherein retrieving information
associated with one or more points of interest comprises:
sending the target region to a POI data server over a network; and
receiving, from the POI data server, information associated with points of interest located in the target region.
3. The computer-implemented method of claim 1, further comprising:
accessing movement data representing a current speed of the vehicle;
analyzing the movement data and the location data to find an upcoming micromapped region that the vehicle is likely to travel through;
retrieving, from a POI data server, a micromap corresponding to the micromapped region, the micromap including information associated with one or more points of interest inside the micromapped region; and
storing the micromap in a local micromap storage.
4. The computer-implemented method of claim 3, wherein retrieving information
associated with one or more points of interest comprises:
determining whether the target region overlaps with the micromapped region;
responsive to determining that the target region overlaps with the micromapped
region, accessing the corresponding micromap in the local micromap storage to retrieve information associated with points of interest located within the target region.
5. The computer-implemented method of claim 4, wherein retrieving information associated with one or more points of interest comprises:
sending the target region to a POI data server over a network;
receiving, from the POI data server, additional information associated with points of interest located in the target region; and
merging the information associated with points of interest located in the target region and the additional information associated with points of interest located in the target region into a single set of POI information.
6. A non-transitory computer-readable storage medium for storing computer program instructions for retrieving information associated with a point of interest (POI), the program instructions when executed by a processor cause the processor to perform steps including:
receiving a data signal representing an identifying gesture performed by a user inside a vehicle, the identifying gesture oriented in a direction and identifying an object exterior to the vehicle;
performing gesture recognition on the data signal to determine a direction vector representing the direction of the identifying gesture;
accessing location data identifying a current location and orientation of the vehicle;
analyzing the direction vector and the location data to generate a target region corresponding to the object identified by the identifying gesture;
retrieving information associated with one or more points of interest located in the target region; and
providing the retrieved information to the user, the retrieved information including information associated with the object identified by the identifying gesture.
7. The storage medium of claim 6, wherein the program instructions further cause the processor to perform steps including:
receiving a voice command, the voice command issued by the user inside the vehicle; and
performing voice recognition on the voice command to determine one or more words spoken by the user as part of the voice command;
wherein the analyzing step comprises analyzing the voice command words in conjunction with the direction vector and the location data to generate the target region.
8. The storage medium of claim 6, wherein a corner of the target region corresponds to the current location of the vehicle, and wherein the target region aligns with the direction vector.
9. The storage medium of claim 6, wherein the program instructions further cause the processor to perform steps including:
accessing movement data representing a current speed of the vehicle;
analyzing the movement data and the location data to find an upcoming micromapped region that the vehicle is likely to travel through;
retrieving, from a POI data server, a micromap corresponding to the micromapped region, the micromap including information associated with one or more points of interest inside the micromapped region; and
storing the micromap in a local micromap storage.
10. A vehicle-based computer-implemented method for controlling a component of the vehicle, the method comprising:
identifying a first component of the vehicle based on a first selecting input performed by the user within the vehicle;
receiving a first data signal representing a gesture performed by the user, the gesture performed in a capture region inside the vehicle;
performing gesture recognition on the first data signal to determine a first command for controlling the first identified component;
identifying a second component of the vehicle based on a second selecting input performed by the user within the vehicle, the second identified component different from the first identified component;
receiving a second data signal representing the same gesture performed by the user in the same capture region within the vehicle; and
performing gesture recognition on the second data signal to determine a second
command for controlling the second identified component, the second command different from the first command.
11. The method of claim 10, wherein the gesture is captured by a camera system, the camera system comprising a plurality of cameras positioned to collectively capture gestures performed in the capture region inside the vehicle.
12. The method of claim 10, wherein performing gesture recognition comprises measuring one or more gesture angles, each gesture angle representing an angular displacement between a current position of the gesture and a reference position of the gesture.
13. The method of claim 12, wherein the gesture angles comprise a horizontal angle, a vertical angle, and a rotational angle.
14. The method of claim 12, wherein the first command is a command for navigating a menu structure based on the gesture angle, and wherein the second command is a command for adjusting a mirror of the vehicle based on the gesture angle.
15. The method of claim 10, wherein the first selecting input comprises a pointing gesture directed at the first component, the pointing gesture performed in the capture region, and wherein identifying the first component comprises performing gesture recognition on an input signal representing the first selecting input.
16. A non-transitory computer-readable storage medium for storing computer program instructions for controlling a component of a vehicle, the program instructions when executed by a processor cause the processor to perform steps including:
identifying a first component of the vehicle based on a first selecting input performed by the user within the vehicle;
receiving a first data signal representing a gesture performed by the user, the gesture performed in a capture region inside the vehicle;
performing gesture recognition on the first data signal to determine a first command for controlling the first identified component;
identifying a second component of the vehicle based on a second selecting input performed by the user within the vehicle, the second identified component different from the first identified component;
receiving a second data signal representing the same gesture performed by the user in the same capture region within the vehicle; and
performing gesture recognition on the second data signal to determine a second
command for controlling the second identified component, the second command different from the first command.
17. The storage medium of claim 16, wherein the first selecting input comprises a
pointing gesture directed at the first component, the pointing gesture performed in the capture region, and wherein identifying the first component comprises performing gesture recognition on an input signal representing the first selecting input.
18. The storage medium of claim 16, wherein the first selecting input comprises a voice command containing a name of the first component, and wherein identifying the first component comprises performing voice recognition on an input signal representing the first selecting input.
19. The storage medium of claim 16, wherein the first selecting input is at least one of a pointing gesture or a voice command.
20. The storage medium of claim 16, further comprising instructions for:
responsive to identifying the first component, sending an output signal to an output device for playback to the user, the output signal indicating that the first component has been selected.
PCT/US2014/024852 2013-03-15 2014-03-12 Systems and methods for vehicle user interface WO2014151054A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US13/835,252 2013-03-15
US13/834,007 2013-03-15
US13/834,007 US8886399B2 (en) 2013-03-15 2013-03-15 System and method for controlling a vehicle user interface based on gesture angle
US13/835,252 US8818716B1 (en) 2013-03-15 2013-03-15 System and method for gesture-based point of interest search

Publications (2)

Publication Number Publication Date
WO2014151054A2 true WO2014151054A2 (en) 2014-09-25
WO2014151054A3 WO2014151054A3 (en) 2014-11-13

Family

ID=51581629

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/024852 WO2014151054A2 (en) 2013-03-15 2014-03-12 Systems and methods for vehicle user interface

Country Status (1)

Country Link
WO (1) WO2014151054A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017017938A1 (en) * 2015-07-24 2017-02-02 島根県 Gesture operating system, method, and program
CN108286985A (en) * 2017-01-09 2018-07-17 现代自动车株式会社 Device and method for the searching interest point in navigation equipment
CN111050279A (en) * 2018-10-12 2020-04-21 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle-mounted equipment and hot spot data sharing method based on map display

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636443B (en) * 2015-01-12 2018-01-23 北京中交兴路车联网科技有限公司 A kind of basic data model that POI potential informations are excavated based on lorry track

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5948040A (en) * 1994-06-24 1999-09-07 Delorme Publishing Co. Travel reservation information and planning system
US20070150444A1 (en) * 2005-12-22 2007-06-28 Pascal Chesnais Methods and apparatus for organizing and presenting contact information in a mobile communication system
US20100253542A1 (en) * 2009-04-02 2010-10-07 Gm Global Technology Operations, Inc. Point of interest location marking on full windshield head-up display
US20100274480A1 (en) * 2009-04-27 2010-10-28 Gm Global Technology Operations, Inc. Gesture actuated point of interest information systems and methods
US20120264457A1 (en) * 2008-06-19 2012-10-18 Microsoft Corporation Data synchronization for devices supporting direction-based services
US20130030811A1 (en) * 2011-07-29 2013-01-31 Panasonic Corporation Natural query interface for connected car

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14770561

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 14770561

Country of ref document: EP

Kind code of ref document: A2