US20120216113A1 - Touch gestures for text-entry operations - Google Patents

Touch gestures for text-entry operations

Info

Publication number
US20120216113A1
US20120216113A1 (application US13/030,623)
Authority
US
United States
Prior art keywords
user input
area
user
user interface
characters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/030,623
Inventor
Yang Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US13/030,623
Assigned to GOOGLE INC. Assignment of assignors interest (see document for details). Assignors: LI, YANG
Priority to US13/250,089 (US8276101B2)
Priority to PCT/US2012/025085 (WO2012112575A1)
Priority to EP12705593.7A (EP2676185A1)
Publication of US20120216113A1
Assigned to GOOGLE LLC. Change of name (see document for details). Assignor: GOOGLE INC.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10: Character recognition
    • G06V 30/32: Digital ink
    • G06V 30/333: Preprocessing; Feature extraction
    • G06V 30/347: Sampling; Contour coding; Stroke extraction

Definitions

  • This disclosure relates to a gesture-based user interface for mobile devices.
  • Touch-based interaction may be, for example, finger-based touch input.
  • a user may interact with an application via tactile interaction.
  • some computing devices with touch screens allow text-entry methods based on input by a user via touch of the finger, usually utilizing an on-screen keypad.
  • this disclosure describes techniques for providing a user of a computing device with the ability to perform text-entry operations (e.g., using a touch screen) on a computing device.
  • the techniques of this disclosure may, in some examples, allow the user to use gestures on a mobile computing device to perform text entry and editing operations.
  • Using a presence-sensitive user interface device (e.g., the touch screen), the user may use gestures to enter text into applications that accept text as an input (e.g., short message service (SMS) messages, e-mail messages, uniform resource locators (URLs), and the like).
  • the user may utilize gestures of certain patterns, relative to the defined areas, to indicate text entry and editing operations such as, for example, deleting characters and words, indicating a space or return character, and the like.
  • the disclosure is directed to a method comprising receiving, using a presence-sensitive user interface device coupled to a computing device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device, receiving, using the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters, and applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
  • the disclosure is directed to a computer-readable storage medium encoded with instructions that, when executed, cause one or more processors of a computing device to perform operations comprising receiving, using a presence-sensitive user interface device coupled to the computing device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device, receiving, using the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters, and applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
  • the disclosure is directed to a computing device comprising one or more processors, a presence-sensitive user interface device, a user interface module operable by the one or more processors to receive, using the presence-sensitive user interface device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device, wherein the user interface module is further operable to receive, by the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters, and means for applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
  • Certain techniques of the present disclosure may, as one non-limiting example, allow a user of a computing device to perform certain text-editing operations using gestures on a touch screen of the computing device.
  • the user may enter different patterns using the gestures, relative to defined areas on the touch screen, to indicate the desired text-editing operation, without necessarily switching to an on-screen keypad.
  • FIG. 1 is a block diagram illustrating an example computing device that may provide a text-entry application in accordance with one or more aspects of the present disclosure.
  • FIG. 2 is a block diagram illustrating further details of the computing device shown in FIG. 1 .
  • FIGS. 3A-3G are block diagrams illustrating example screens of a computing device as a user interacts with the device, in accordance with one or more aspects of the present disclosure.
  • FIG. 4A is a flow diagram illustrating an algorithm for interpreting gestures in accordance with one or more aspects of the present disclosure.
  • FIG. 4B is an example stroke drawn by a user on a touch screen of a computing device.
  • FIG. 5 is a flow diagram illustrating a method that may be performed by a computing device in accordance with one or more aspects of the present disclosure.
  • this disclosure describes techniques for providing a user with the ability to perform text entry and editing operations using gestures (e.g., using a presence-sensitive user interface device, such as a touch screen user interface) on a computing device.
  • These techniques may allow the user to use gestures on a computing device to perform text entry and editing operations, for example, via simple interactions with the touch screen.
  • These techniques may be integrated with existing systems that allow for the user to utilize gestures on a touch screen to enter letters and punctuation, thus potentially obviating any issues associated with text entry using on-screen keypads (e.g., touching the wrong key or multiple keys).
  • the user may use gestures to enter text into text-based applications (e.g., SMS messages, e-mail message, URLs, and the like).
  • a portion of the touch screen may be allocated for text entry using gestures (e.g., the lower region of a touch screen).
  • the user may utilize gestures of certain patterns relative to the defined areas to indicate text entry and editing operations such as, for example, deleting characters and words, indicating a space or return characters, and the like.
  • the touch screen may be divided into two regions, an upper region and a lower region.
  • a user may utilize the lower region to provide drawing gestures that define characters and operations, which may be displayed within the upper region.
  • the drawing gestures may also be displayed within the lower region as the user interacts with the lower region of the touch screen, as will be illustrated in more detail below.
  • finger-based touch input may suffer from low precision due to at least two known issues.
  • One issue is that the area touched by the finger is, in some situations (e.g., small mobile devices), much larger than a single pixel, sometimes referred to as “the fat finger” issue. As a result of this low precision, small user interface components, such as on-screen keypads used to enter and edit text, are often difficult to operate on a computing device.
  • the issue is further amplified when the user is in motion, e.g., walking, and unable to pay close attention to the interface.
  • Some computing devices provide user interfaces that allow the user to use gestures to enter text, by defining a portion of the touch screen as a text entry region, where the user utilizes his/her finger or a stylus to draw the letters for text entry.
  • user interfaces may not always provide a user with the ability to perform text entry and editing operations not related to drawing characters (e.g., letters, punctuation, numbers) using gestures.
  • the user would typically have to switch to an on-screen keypad to perform the text entry and editing operations by touching the corresponding keys. This can be inconvenient as it can make the process of entering text cumbersome and/or it can have the same issues associated with entering text using an on-screen keypad.
  • the techniques of this disclosure provide a region of the user interface (e.g., touch screen) dedicated for text entry (e.g., a lower region) to allow a user to implement text entry and editing operations, other than entry of text characters, using gestures (e.g., on the touch screen).
  • Editing operations may include entry of non-alphanumeric characters that also have associated operations such as adding SPACE and inserting the RETURN character, and may also include activating operations such as deleting the last character, deleting all the text entered, indicating ENTER, and the like.
  • the techniques define areas or sub-regions within the region dedicated for text entry, where the user may utilize gestures involving the sub-regions to effectuate the desired text entry and editing operations.
  • the areas or sub-regions may be defined using on-screen markers (e.g., horizontal and/or vertical lines), and the user may utilize gestures to interact with the different areas to produce the desired outcome (e.g., SPACE, DELETE, RETURN, and the like).
  • the defined areas and the patterns may be large enough relative to on-screen keypad buttons, such that the level of accuracy of where the user touches the screen is not as particular as when using an on-screen keypad. Additionally, these techniques allow for the user to continuously input text without having to switch back and forth between gesture-based text entry and an on-screen keypad.
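  • As a minimal illustrative sketch (not the disclosure's implementation), the Python code below models a gesture region with two vertical marker lines and reports which markers a stroke crosses and in which direction. The coordinates, the marker names (which anticipate lines 308 and 310 of the figures described below), and the crossing encoding are assumptions made for this example.

```python
# A stroke is a list of (x, y) points sampled while the finger stays down.
Stroke = list[tuple[float, float]]

# Hypothetical layout: the gesture region spans x = 0..320, with two vertical
# marker lines near the edges leaving a large central drawing area between
# them. Marker names follow lines 310 (left) and 308 (right) of the figures.
MARKERS = {"310": 40.0, "308": 280.0}  # marker name -> x position


def boundary_crossings(stroke: Stroke) -> list[tuple[str, str]]:
    """Return the ordered crossings of a stroke as (marker, direction) tuples."""
    crossings = []
    for (x0, _), (x1, _) in zip(stroke, stroke[1:]):
        # Check markers in the order this segment meets them.
        ordered = sorted(MARKERS.items(), key=lambda kv: kv[1], reverse=x1 < x0)
        for name, line_x in ordered:
            if x0 < line_x <= x1:
                crossings.append((name, "right"))  # crossed moving rightward
            elif x1 <= line_x < x0:
                crossings.append((name, "left"))   # crossed moving leftward
    return crossings


if __name__ == "__main__":
    # A swipe that starts between the markers and crosses only the right one.
    swipe = [(160.0, 400.0), (220.0, 400.0), (300.0, 400.0)]
    print(boundary_crossings(swipe))  # [('308', 'right')]
```

  • A crossing sequence of this kind can then be matched against a table of known editing-operation patterns, as sketched further below for the examples of the figures.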
  • FIG. 1 is a block diagram illustrating an example computing device 100 that may provide a text-entry application 102 in accordance with one or more aspects of the present disclosure.
  • Computing device 100 may comprise one or more stand-alone devices or may be part of a larger system.
  • computing device 100 may comprise a mobile device.
  • computing device 100 may comprise or be part of a wireless communication device (e.g., wireless mobile handset or device), a video telephone, a digital multimedia player, a personal digital assistant (PDA), a video game console, a laptop computer, a tablet computer, or other devices.
  • computing device 100 may communicate with external, distinct devices via one or more networks (not shown), such as one or more wired or wireless networks, which may, in some cases, provide access to the Internet.
  • computing device 100 may include one or more applications 104 A- 104 N and text-entry application 102 .
  • Applications 104 A- 104 N and text-entry application 102 may be executed by computing device 100 (e.g., by one or more processors included within computing device 100 , as described in more detail with respect to FIG. 2 ).
  • text-entry application 102 may be displayed on at least a portion of a presence-sensitive user interface device (e.g., user interface) associated with computing device 100 .
  • the presence-sensitive user interface device may be, for example, a touch screen of computing device 100 that is responsive to tactile input, e.g., via a user's finger.
  • computing device 100 may comprise a screen, such as a touch screen, and the screen may include one portion that displays data associated with an application executing on computing device 100 and another portion that allows the user to provide input into the application.
  • Each of applications 104 A- 104 N is operable on computing device 100 to perform one or more functions during execution.
  • one or more of applications 104 A- 104 N may comprise a web or a communication application that interacts and/or exchanges data with a device that is external to computing device 100 .
  • a web application may, in some examples, be executable within a web browser that is operable on computing device 100 .
  • a communication application may, in some examples, be a messaging application, such as, short message service (SMS) application.
  • Computing device 100 may, in various examples, download or otherwise obtain one or more of applications 104 A- 104 N from an external server via one or more networks (not shown).
  • a web browser hosted by computing device 100 may download one or more of applications 104 A- 104 N upon access of one or more web sites hosted by such an external server (e.g., a web server).
  • at least a portion of applications 104 A- 104 N may be text-based.
  • any of applications 104 A- 104 N may implement, invoke, execute, or otherwise utilize text-entry application 102 as a mechanism to obtain user input.
  • If application 104 A is an e-mail application, it may invoke execution of text-entry application 102 to allow a user to enter or type in e-mail text.
  • If application 104 N is a web browser application, it may invoke execution of text-entry application 102 to allow a user to enter Uniform Resource Identifier (URI) information or to provide user input during execution of one or more web applications.
  • Text-entry application 102 may, during execution, display or control a gesture interface 106 , which includes one or more areas with which a user interacts via gestures to input text.
  • gesture interface 106 may be a presence-sensitive user interface device associated with computing device 100 .
  • In examples where computing device 100 includes a touch screen user interface, a user may interact with the touch screen via gestures to provide text entry, where gesture interface 106 may be a portion of the touch screen.
  • Computing device 100 may display, via the user interface, a sequence of characters corresponding to the one or more characters input by the user via gestures on gesture interface 106 .
  • Computing device 100 may employ a processor to execute a gesture-interpretation algorithm that, based on the user gestures, displays characters and applies operations corresponding to the gestures.
  • the characters may correspond to letters and/or punctuation, which the user may draw using gestures on gesture interface 106 .
  • the user may input gestures on gesture interface 106 corresponding to operations such as, for example, editing operations.
  • Some example editing operations may be adding a space, inserting a return, and deleting the last character, word, paragraph, or all the text the user has input. In this manner, the user may interact with the same interface to enter text and to apply text editing operations.
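  • As a hedged illustration of the editing operations listed above, the sketch below applies SPACE, RETURN, and several DELETE variants to a plain string buffer. The operation names and the string-based buffer are assumptions chosen for brevity; the disclosure does not prescribe a particular data structure.

```python
def apply_editing_operation(text: str, operation: str) -> str:
    """Apply an editing operation to the already-entered text."""
    if operation == "SPACE":
        return text + " "
    if operation == "RETURN":
        return text + "\n"
    if operation == "DELETE_LAST_CHAR":
        return text[:-1]
    if operation == "DELETE_LAST_WORD":
        trimmed = text.rstrip()
        cut = trimmed.rfind(" ")
        return trimmed[: cut + 1] if cut >= 0 else ""
    if operation == "DELETE_ALL":
        return ""
    raise ValueError(f"unknown operation: {operation}")


if __name__ == "__main__":
    buffer = "HELLO WORLDS"
    buffer = apply_editing_operation(buffer, "DELETE_LAST_CHAR")  # "HELLO WORLD"
    buffer = apply_editing_operation(buffer, "SPACE")             # "HELLO WORLD "
    print(repr(buffer))
```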
  • the gesture-interpretation algorithm may be capable of differentiating between gestures representing characters and gestures representing editing operations, as will be described in more detail below.
  • a display device associated with computing device 100 may include a presence-sensitive user interface device portion (e.g., touch screen), which may be responsive to tactile input by a user.
  • a portion of the touch screen may be dedicated to text entry and editing such as, for example, gesture interface 106 .
  • gesture interface 106 may be partitioned into two or more areas, as will be described in more detail below.
  • the user may use gestures to input characters (e.g., letters, punctuation, and the like) as if handwriting in one of the areas within gesture interface 106 .
  • the user may wish to apply a text editing operation or input a character that cannot be drawn, e.g., space, return, delete, and the like.
  • the user may utilize gestures that cross from one of the areas to another of the areas within gesture interface 106 to define a desired operation.
  • the operation may depend on the direction of the gesture and the areas traversed by the gesture.
  • a gesture-interpretation algorithm may interpret the gestures and display the corresponding characters and/or apply the corresponding operation to characters already entered by the user such as, for example, adding a space after the last character, adding a return, deleting the last character, word, paragraph, or all entered text.
  • the techniques of this disclosure may, in some instances, allow a user to use gestures to input text and apply text-editing operations using the same interface, e.g., gesture interface 106 . In these instances, the user would not need to switch to a different screen when the user needs to add a special operation to the text.
  • techniques of this disclosure may provide the user with defined areas within the interface such that gestures associated with one defined area may be associated with text-entry, and gestures associated with another defined area may be associated with text-editing operations.
  • FIG. 2 is a block diagram illustrating further details of the computing device 100 shown in FIG. 1 .
  • FIG. 2 illustrates only one particular example of computing device 100 , and many other example embodiments of computing device 100 may be used in other instances.
  • computing device 100 includes one or more processors 122 , memory 124 , a network interface 126 , one or more storage devices 128 , user interface 130 , and an optional battery 132 .
  • computing device 100 may include battery 132 .
  • Each of components 122 , 124 , 126 , 128 , 130 , and 132 may be interconnected via one or more busses for inter-component communications.
  • Processors 122 may be configured to implement functionality and/or process instructions for execution within computing device 100 .
  • Processors 122 may be capable of processing instructions stored in memory 124 or instructions stored on storage devices 128 .
  • User interface 130 may include, for example, a monitor or other display device for presentation of visual information to a user of computing device 100 .
  • User interface 130 may further include one or more input devices to enable a user to input data, such as a manual keyboard, mouse, touchpad, track pad, etc.
  • user interface 130 may comprise a presence-sensitive user interface device such as, for example, a touch screen, which may be used both to receive and process user input and also to display output information.
  • User interface 130 may further include printers or other devices to output information.
  • references made to user interface 130 may refer to portions of user interface 130 (e.g., touch screen) that provide user input functionality.
  • user interface 130 may be a touch screen that is responsive to tactile input by the user.
  • Memory 124 may be configured to store information within computing device 100 during operation. Memory 124 may, in some examples, be described as a computer-readable storage medium. In some examples, memory 124 is a temporary memory, meaning that a primary purpose of memory 124 is not long-term storage. Memory 124 may also be described as a volatile memory, meaning that memory 124 does not maintain stored contents when the computer is turned off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some examples, memory 124 may be used to store program instructions for execution by processors 122 . Memory 124 may be used by software or applications running on computing device 100 (e.g., one or more of applications 104 A- 104 N shown in FIG. 1 or the gesture-interpretation algorithm) to temporarily store information during program execution.
  • Storage devices 128 may also include one or more computer-readable storage media. Storage devices 128 may be configured to store larger amounts of information than memory 124 . Storage devices 128 may further be configured for long-term storage of information. In some examples, storage devices 128 may comprise non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • Computing device 100 also includes network interface 126 .
  • Computing device 100 may utilize network interface 126 to communicate with external devices (e.g., one or more servers, web servers) via one or more networks, such as one or more wireless/wired networks.
  • Computing device 100 may utilize network interface 126 in response to execution of one or more applications that require transferring data to and/or from other devices (e.g., other computing devices, servers, or the like).
  • Any applications implemented within or executed by computing device 100 may be implemented or contained within, operable by, executed by, and/or be operatively coupled to processors 122 , memory 124 , network interface 126 , storage devices 128 , and/or user interface 130 .
  • Text-entry application 102 may include a display module 142 , a user interface controller 144 , a character module 146 , and an operations module 148 .
  • Text-entry application 102 may provide or display gesture interface 106 shown in FIG. 1 (e.g., via user interface 130 ).
  • Text-entry application 102 may be stored in memory 124 and/or storage devices 128 , and may be operable by processors 122 to perform various tasks during execution.
  • display module 142 may be operable by processors 122 to define a portion for text and operation entry (e.g., gesture interface 106 ) via user interface 130 .
  • User interface controller 144 may be operable by processors 122 to receive, via user interface 130 , user input specifying characters and/or operations in the form of gestures drawn on gesture interface 106 .
  • the user input may comprise a contact with user interface 130 (e.g., contact with a touch screen), where each of the gestures is associated with a character or an operation.
  • Character module 146 and operations module 148 may be operable by processor 122 to determine, based on gestures the user draws on user interface 130 , the appropriate characters to display and operations to apply to the displayed characters.
  • display module 142 may define gesture interface 106 on user interface 130 .
  • Gesture interface 106 may be generally rectangular in some examples.
  • horizontal and/or vertical lines may be used to define different areas within gesture interface 106 .
  • the lines may be close to the edge of gesture interface 106 , thus defining a larger area in the middle where a user may use gestures to draw characters for text entry.
  • the characters may be letters, numbers, punctuation, and the like.
  • the lines may define smaller areas closer to the outer edges of gesture interface 106 .
  • the user may be able to apply operations to the already-entered characters.
  • the operations may be a space, a line return, or deletion of previously-entered characters, for example.
  • character module 146 may determine a character based on gestures in the larger area of gesture interface 106
  • operations module 148 may determine an operation based on gestures that cross the lines defining the smaller areas in gesture interface 106 .
  • Processor 122 may be operable to execute one or more algorithms including, for example, a gesture-interpretation algorithm.
  • the gesture-interpretation algorithm may determine whether gestures drawn by a user correspond to characters or editing operations. Based on the determination, the gesture-interpretation algorithm may process the drawn gestures to the appropriate module for further interpretation. For example, if the gesture-interpretation algorithm determines that a drawn gesture is a character, the algorithm sends the gesture data associated with the drawn gesture to character module 146 for further interpretation. If, for example, the gesture-interpretation algorithm determines that a drawn gesture is an editing operation, the algorithm sends the gesture data associated with the drawn gesture to operations module 148 for further interpretation.
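  • The routing described above might be structured as sketched below, with the recognition internals stubbed out; only the dispatch shape is shown. The class names loosely mirror character module 146 and operations module 148, and the single-stroke test stands in for the fuller FIG. 4A logic described later. All of this is an illustrative assumption rather than the patented implementation.

```python
from dataclasses import dataclass

Point = tuple[float, float]


@dataclass
class Gesture:
    strokes: list[list[Point]]  # each stroke is one finger-down..finger-up path


class CharacterModule:
    def recognize(self, gesture: Gesture) -> str:
        # Placeholder: a real handwriting recognizer would run here.
        return "?"


class OperationsModule:
    def recognize(self, gesture: Gesture) -> str:
        # Placeholder: map the stroke's crossing pattern to SPACE, DELETE, etc.
        return "SPACE"


def looks_like_editing_operation(gesture: Gesture) -> bool:
    # Stub for the single-stroke / straightness test described for FIG. 4A.
    return len(gesture.strokes) == 1


def dispatch(gesture: Gesture,
             characters: CharacterModule,
             operations: OperationsModule) -> str:
    """Route a completed gesture to the appropriate recognition module."""
    if looks_like_editing_operation(gesture):
        return operations.recognize(gesture)
    return characters.recognize(gesture)


if __name__ == "__main__":
    one_straight_stroke = Gesture(strokes=[[(0.0, 0.0), (100.0, 0.0)]])
    print(dispatch(one_straight_stroke, CharacterModule(), OperationsModule()))
```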
  • the gesture-interpretation algorithm will be discussed in more detail below.
  • the gesture-interpretation algorithm determines whether the gestures are characters or operations, and character module 146 and operations module 148 may determine the matching desired characters and operations, respectively.
  • Display module 142 may be operable to receive the determined characters and operations for display to the user on user interface 130 .
  • the entered text and operations may be displayed in a manner that depends on the application that computing device 100 may be running, where at least a portion of the application is text based, and the user utilizes gesture interface 106 to enter text into the application (e.g., e-mail, web browser, SMS application, and the like).
  • Display module 142 may be operable to display the gestures as the user draws them on gesture interface 106 , and the corresponding characters in the application.
  • FIGS. 3A-3G are block diagrams illustrating example screens of a computing device 300 as a user interacts with the device, in accordance with one or more aspects of the present disclosure.
  • a series of screens may be shown on a computing device 300 , such as a mobile device (e.g., a smart phone).
  • Computing device 300 may operate in the same manner as computing device 100 of FIGS. 1 and 2 .
  • Computing device 300 may include one or more user interface devices that allow a user to interact with the device.
  • Example user interface devices may include, for example, a mouse, a touchpad, a track pad, a keyboard, a touch screen, or the like.
  • Computing device 300 may also include screen 302 via which computing device 300 displays to the user application-related options and screens.
  • screen 302 may be a touch screen that allows interaction via touch by the user's finger or a device (e.g., a stylus pen).
  • FIGS. 3A-3G illustrate an example progression, according to the techniques described herein, as the user provides input to computing device 300 via a user interface device such as, for example, gesture interface 306 .
  • gesture interface 306 may be an area defined within screen 302 .
  • As FIG. 3A illustrates, gesture interface 306 may be displayed on touch screen 302 of computing device 300 , where a user may use his/her fingers or a device (e.g., a stylus pen) to interact with the touch screen and use gestures to enter and edit text associated with a text-based application running on computing device 300 .
  • Screen 302 may display in a display portion 304 , which may comprise a graphical user interface, the text that the user inputs into the text-based application.
  • Screen 302 shows an example text-based application (e.g., e-mail, SMS, or the like) that is running on computing device 300 .
  • the text-based application allows the user to enter text into various fields, such as a “To” field and a body field. If the user wishes to enter text, the user may draw the characters (e.g., letters, numbers, punctuation, and the like) using gestures in gesture interface 306 . For example, the user may use contact (e.g., by touching or using a stylus) to draw within gesture interface 306 the words he/she desires to enter into the application.
  • As illustrated in FIG. 3A , gesture interface 306 may have a large defined area in the middle, bounded by lines 308 and 310 , where the user may use gestures to enter characters. Lines 308 and 310 may partition the area defined by gesture interface 306 by defining the boundaries for areas 318 and 320 . In this example, the user may have entered the word “HELLO” by drawing it as shown in FIG. 3A .
  • Computing device 300 may display, via the user interface, a sequence of characters corresponding to the characters drawn by the user, as shown in text display portion 304 .
  • Computing device 300 may employ a processor to execute an algorithm (e.g., a gesture-interpretation algorithm) that generates, based on the drawn gestures, characters that match the gestures drawn by the user.
  • the gesture-interpretation algorithm may determine that the drawn gesture is associated with characters and send the gesture data associated with the drawn gestures to character module 146 .
  • Character module 146 ( FIG. 2 ) may then determine the appropriate characters to display.
  • the user may wish to enter a non-character operation (e.g., space, line return, delete, and the like), which may not be represented by a drawn character as would letters, numbers, or punctuation.
  • the user may utilize gestures in accordance with techniques of this disclosure to input operations that are not characters.
  • lines 308 and 310 may define smaller areas 318 and 320 , respectively, near the edges of gesture interface 306 .
  • the user may utilize gestures involving lines 308 and 310 to define non-character operations.
  • the user may touch gesture interface 306 somewhere between lines 308 and 310 , and swipe to the right to cross line 308 .
  • the gesture-interpretation algorithm may determine that the drawn gesture is associated with an editing operation and send the drawn gesture to operation module 148 .
  • Operations module 148 may then interpret the drawn gesture to determine the associated editing operation, e.g., a SPACE, resulting in adding a space after the last entered word “HELLO.”
  • the user may then input another word as shown in FIG. 3C .
  • the user may have intended to enter the word “WORLD,” and may wish to correct the last entry by deleting the letter S.
  • the user may utilize gestures involving lines 308 and 310 to indicate the desire to delete the last-entered letter.
  • the user may touch gesture interface 306 somewhere between lines 308 and 310 , and swipe to the left to cross line 310 .
  • Operation module 148 may interpret the gesture to determine the associated editing operation, e.g., DELETE, resulting in deleting the last entered character, as FIG. 3D illustrates.
  • the user may write anywhere within gesture interface 306 .
  • the user may start writing anywhere, even in the areas 318 and 320 defined by lines 308 and 310 .
  • character module 146 may determine the best match for each character and display it.
  • character module 146 may display the best match, and may provide the user with other candidate words 312 as shown in FIG. 3C . The user may disregard the candidate words by continuing to draw gestures, indicating that the best match is the desired word or character, or the user may select one of the suggestions by touching the screen where the correct suggestion word is displayed.
  • one example for adding SPACE may be achieved by starting between lines 308 and 310 , and swiping right to cross line 308 , as FIG. 3B illustrates.
  • the user may start between lines 308 and 310 , and swipe left to cross line 310 , as FIG. 3D illustrates.
  • the user may start left of line 310 and swipe all the way across gesture interface 306 to cross lines 310 and 308 as illustrated in FIG. 3E .
  • the user may start by touching right of line 308 and swipe all the way across gesture interface 306 to cross lines 308 and 310 , as illustrated in FIG. 3F .
  • Yet another example may be if the user wishes to issue a DELETE of all the text entered thus far, by starting right of line 308 and swiping in a C-shape, thus swiping across gesture interface 306 to the left to cross lines 308 and 310 , then returning across and crossing lines 310 and 308 , as illustrated in FIG. 3G .
  • These gestures and operations are merely illustrative, using the most popular operations; other patterns and operations may be implemented using the techniques of this disclosure. In some examples, horizontal and vertical lines may be utilized.
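  • The examples of FIGS. 3B, 3D, and 3G can be summarized as a lookup from a crossing pattern to an operation, as in the sketch below. The marker names and the (marker, direction) encoding follow the earlier hypothetical sketch; only the operations explicitly named in the text are mapped, and the unnamed patterns of FIGS. 3E and 3F are deliberately left out.

```python
from typing import Optional

Crossing = tuple[str, str]  # (marker name, "left" or "right")

NAMED_PATTERNS: dict[tuple[Crossing, ...], str] = {
    # FIG. 3B: start between the markers, swipe right across line 308.
    (("308", "right"),): "SPACE",
    # FIG. 3D: start between the markers, swipe left across line 310.
    (("310", "left"),): "DELETE_LAST_CHAR",
    # FIG. 3G: C-shape starting right of 308: cross 308 and 310 leftward,
    # then return and cross 310 and 308 rightward.
    (("308", "left"), ("310", "left"), ("310", "right"), ("308", "right")): "DELETE_ALL",
}


def operation_for(crossings: list[Crossing]) -> Optional[str]:
    """Return the editing operation for a crossing sequence, if one is named."""
    return NAMED_PATTERNS.get(tuple(crossings))


if __name__ == "__main__":
    print(operation_for([("308", "right")]))                    # SPACE (FIG. 3B)
    print(operation_for([("310", "left")]))                     # DELETE_LAST_CHAR (FIG. 3D)
    print(operation_for([("310", "right"), ("308", "right")]))  # None (FIG. 3E, not named in the text)
```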
  • FIG. 4A is a flow diagram illustrating an algorithm for interpreting gestures in accordance with one or more aspects of the present disclosure.
  • the gesture-interpretation algorithm may be executed by a processor (e.g., processor 122 ) of the computing device.
  • when the user draws a stroke on the touch screen, the sensed drawn stroke may be added by the algorithm to a current gesture ( 402 ).
  • a stroke may correspond to a shape drawn by the user in one touch without lifting a finger or stylus off the touch screen.
  • Referring back to the examples above, gestures associated with editing operations may comprise one stroke.
  • the gesture-interpretation algorithm may then determine whether the next time the user makes contact with the touch screen is less than a timeout threshold ( 404 ).
  • the timeout threshold may be a time that indicates, when exceeded, that the user has completed drawing one gesture and is drawing another gesture. In one example, the timeout threshold may be 500 ms.
  • the timeout threshold may be set by default to a certain value, may be configurable by the user, or may be automatically configured by the computing device based on user's historical touching patterns.
  • when the user completes a stroke, a timer may be reset, and when the user makes the next contact with the touch screen, the timer may be stopped to determine the time difference between the last stroke and the current one. The time difference may be compared to the timeout threshold to determine whether the user's touch was less than the timeout threshold ( 404 ).
  • If the user makes contact with the touch screen within the timeout threshold, the stroke the user is indicating should be added to the current gesture ( 402 ). If the user makes contact with the touch screen beyond the timeout threshold, the stroke the user is indicating is not part of the current gesture and a new gesture should be started.
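  • A minimal sketch of the timeout-based grouping described above, assuming each stroke is reported with its touch-down and touch-up times; the 500 ms value is the example default from the text, and all names are illustrative.

```python
TIMEOUT_MS = 500.0  # example default from the text; could be user-configurable


def group_strokes_into_gestures(strokes):
    """strokes: time-ordered list of (touch_down_ms, touch_up_ms, stroke_points)."""
    gestures = []
    current = []
    previous_up = None
    for down_ms, up_ms, points in strokes:
        gap = None if previous_up is None else down_ms - previous_up
        if gap is not None and gap >= TIMEOUT_MS:
            gestures.append(current)   # pause exceeded the timeout: new gesture
            current = []
        current.append(points)
        previous_up = up_ms
    if current:
        gestures.append(current)
    return gestures


if __name__ == "__main__":
    strokes = [
        (0.0, 200.0, "stroke-a"),      # first stroke of a character
        (500.0, 700.0, "stroke-b"),    # 300 ms pause: joins the same gesture
        (1600.0, 1900.0, "stroke-c"),  # 900 ms pause: starts a new gesture
    ]
    print(group_strokes_into_gestures(strokes))
    # [['stroke-a', 'stroke-b'], ['stroke-c']]
```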
  • the algorithm may then interpret the current gesture on the screen to determine the appropriate module to handle it.
  • the algorithm may determine if the current gesture includes a single stroke ( 406 ). If the current gesture includes more than one stroke, the algorithm then determines that the current gesture corresponds to a character. The algorithm may then send the data associated with the current gesture to character module 146 for character recognition ( 408 ).
  • If the current gesture includes a single stroke, the algorithm determines the stroke's straightness and whether it exceeds a straightness threshold ( 410 ). The determination of straightness is described in more detail below. If the straightness of the stroke does not exceed the threshold, that indicates that the stroke is not straight enough, and that the current gesture corresponds to a character. The algorithm may then send the data associated with the current gesture to character module 146 for character recognition ( 408 ). If the straightness of the stroke exceeds the threshold, that indicates that the stroke is straight, and that the current gesture corresponds to an editing operation. The algorithm may then send the data associated with the current gesture to editing operations module 148 for operation recognition ( 412 ).
  • FIG. 4B is an example stroke 400 drawn by a user on a touch screen of a computing device.
  • the algorithm may analyze segments of the stroke.
  • the segments of the stroke may be defined by the points defining the stroke, where the points correspond to pixels on the touch screen.
  • stroke 400 has 4 segments defined by points 420 , 422 , 424 , 426 , and 428 .
  • the algorithm determines the direct distance (direct_dist) from the starting point to the ending point of stroke 400 , illustrated by the dotted line representing direct_dist ( 420 , 428 ).
  • the algorithm determines the path distance of the stroke (path_dist), which is the sum of the lengths of all the segments of stroke 400 .
  • path_dist(420, 428) = direct_dist(420, 422) + direct_dist(422, 424) + direct_dist(424, 426) + direct_dist(426, 428)
  • the algorithm determines the straightness of stroke 400 as the ratio of the direct distance to the path distance: straightness(400) = direct_dist(420, 428) / path_dist(420, 428)
  • the straighter the stroke the closer the straightness value will be to 1.
  • the straightness threshold may therefore be close to 1, and the choice of the straightness threshold may depend on how sensitive the algorithm should be in determining whether a stroke is straight.
  • the straightness threshold may be set to 0.8, thus allowing for a certain amount of curvature in the stroke, which may be due to shaking of user's hand while drawing the gesture.
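  • The straightness test can be sketched as below, using the ratio of the direct end-to-end distance to the path distance and the example threshold of 0.8 from the text. The function names are illustrative, not the disclosure's.

```python
import math

Point = tuple[float, float]


def distance(a: Point, b: Point) -> float:
    return math.hypot(b[0] - a[0], b[1] - a[1])


def straightness(stroke: list[Point]) -> float:
    """Ratio of endpoint-to-endpoint distance to total path length (0..1]."""
    path_dist = sum(distance(p, q) for p, q in zip(stroke, stroke[1:]))
    if path_dist == 0.0:
        return 1.0  # a single point or repeated points: treat as straight
    return distance(stroke[0], stroke[-1]) / path_dist


STRAIGHTNESS_THRESHOLD = 0.8  # example value from the text


def is_straight(stroke: list[Point]) -> bool:
    return straightness(stroke) > STRAIGHTNESS_THRESHOLD


if __name__ == "__main__":
    nearly_straight = [(0, 0), (50, 2), (100, -1), (150, 0)]
    l_shaped = [(0, 0), (0, 100), (100, 100)]
    print(round(straightness(nearly_straight), 3), is_straight(nearly_straight))  # ~0.999 True
    print(round(straightness(l_shaped), 3), is_straight(l_shaped))                # ~0.707 False
```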
  • the algorithm may also utilize a stroke path variation method to determine whether a single stroke is an editing operation. This may be utilized in some instances such as, for example, when some editing operations correspond to a gesture that is a single stroke but is not a straight stroke.
  • the algorithm may determine to use the stroke path variation method based on the number of graphical lines crossed by the single stroke.
  • the stroke path variation method may consider the number of corners in a stroke to determine whether a stroke should be interpreted as an editing operation or a character.
  • the algorithm may determine whether the point corresponds to a corner by determining the angle between adjacent line segments, and if the angle is above a certain threshold (e.g., 30 degrees), the point may be considered a corner.
  • point 426 is the only corner in stroke 400 .
  • the straightness method may not be effective in determining whether the stroke corresponds to an editing operation, because the direct distance between the two ends is a lot smaller than the sum of the segment lengths of the stroke, and therefore, the path variation method may be used instead.
  • the algorithm may determine there are at least 2 corner points in the stroke, and therefore, the stroke corresponds to an editing operation.
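  • The corner-counting (stroke path variation) check described above might look like the sketch below: the direction change at each interior point is measured, points turning by more than the example 30-degree threshold count as corners, and a single stroke with at least two corners is treated as a candidate editing operation. The geometry helpers are assumptions made for illustration.

```python
import math

Point = tuple[float, float]


def turn_angle_deg(a: Point, b: Point, c: Point) -> float:
    """Change of direction, in degrees, at point b between segments a->b and b->c."""
    heading_in = math.atan2(b[1] - a[1], b[0] - a[0])
    heading_out = math.atan2(c[1] - b[1], c[0] - b[0])
    diff = abs(heading_out - heading_in)
    return math.degrees(min(diff, 2 * math.pi - diff))


def count_corners(stroke: list[Point], angle_threshold_deg: float = 30.0) -> int:
    return sum(
        1
        for a, b, c in zip(stroke, stroke[1:], stroke[2:])
        if turn_angle_deg(a, b, c) > angle_threshold_deg
    )


def looks_like_operation_by_corners(stroke: list[Point]) -> bool:
    # e.g., a C-shaped delete-all gesture has at least two pronounced corners
    return count_corners(stroke) >= 2


if __name__ == "__main__":
    c_shape = [(200, 0), (0, 0), (0, 100), (200, 100)]  # rough C drawn as 3 segments
    print(count_corners(c_shape), looks_like_operation_by_corners(c_shape))  # 2 True
```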
  • FIG. 5 is a flow diagram illustrating a method that may be performed by a computing device in accordance with one or more aspects of the present disclosure.
  • the illustrated example method may be performed by computing device 100 ( FIGS. 1 and 2 ) or computing device 300 ( FIGS. 3A-3G ).
  • in some examples, a computer-readable storage medium (e.g., a medium included in storage device 128 of FIG. 2 ) may store instructions that, when executed, cause one or more processors to perform the method.
  • the method of FIG. 5 includes defining, for a presence-sensitive user interface device (e.g., touch screen 302 ) coupled to the computing device, a first area (e.g., gesture interface 306 or a lower region of the touch screen) and a second area (e.g., area right of line 308 and left of line 310 ) for user input ( 402 ).
  • the second area is a subset of the first area and is defined by one or more boundaries (e.g., horizontal and/or vertical lines) that partition the first area.
  • the method also includes receiving, using the presence-sensitive user interface device, first user input comprising a drawing gesture associated with the first area, where the first user input specifies one or more characters (e.g., letters, numbers, punctuation, or the like) to be displayed in a graphical user interface (e.g., display portion 304 or upper region of the touch screen) associated with the computing device and a text-based application running on the computing device ( 404 ). Based on the user input, the determined characters may be displayed on the graphical user interface ( 406 ).
  • the presence-sensitive user interface device may also receive second user input comprising a gesture associated with the at least one of the one or more boundaries associated with the second area, wherein the second user input specifies an editing operation (e.g., SPACE, RETURN, DELETE, or the like) associated with the one or more characters ( 408 ).
  • the second input may involve, for example, gestures that do not resemble characters, and which may cross the lines that define the second area.
  • the method further includes applying, by the computing device, the specified operation to the one or more characters in response to receiving the second user input ( 410 ). In this manner, the user may utilize gestures to define one or more characters to display, and may also utilize gestures, without changing from gesture-based input, to apply other non-character operations.
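  • To tie the steps of this method together, the sketch below walks through the sequence with every recognizer replaced by a stub; it is intended only to show the order of operations described above, and all names and stand-in values are hypothetical.

```python
def run_text_entry_session() -> str:
    text = ""

    # (1) A first (drawing) area and a second (boundary) area would be defined
    #     on the presence-sensitive device; that step is only a comment here.

    # (2) First user input: a drawing gesture recognized as characters,
    #     which are then displayed in the graphical user interface.
    recognized_characters = "HELLO"   # stand-in for handwriting recognition
    text += recognized_characters

    # (3) Second user input: a gesture crossing a boundary, recognized as an
    #     editing operation.
    recognized_operation = "SPACE"    # stand-in for crossing-pattern lookup

    # (4) Apply the editing operation to the characters entered so far.
    if recognized_operation == "SPACE":
        text += " "
    return text


if __name__ == "__main__":
    print(repr(run_text_entry_session()))  # 'HELLO '
```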
  • the techniques described in this disclosure may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components.
  • the term “processor” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry.
  • a control unit including hardware may also perform one or more of the techniques of this disclosure.
  • Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described in this disclosure.
  • any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, firmware, or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware, firmware, or software components, or integrated within common or separate hardware, firmware, or software components.
  • the techniques described in this disclosure may also be embodied or encoded in a computer-readable medium, such as a computer-readable storage medium, containing instructions. Instructions embedded or encoded in a computer-readable medium, including a computer-readable storage medium, may cause one or more programmable processors, or other processors, to implement one or more of the techniques described herein, such as when instructions included or encoded in the computer-readable medium are executed by the one or more processors.
  • Computer readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a compact disc ROM (CD-ROM), a floppy disk, a cassette, magnetic media, optical media, or other computer readable media.
  • an article of manufacture may comprise one or more computer-readable storage media.
  • a computer-readable storage medium may comprise a non-transitory medium.
  • the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal.
  • a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).

Abstract

In general, this disclosure describes techniques for providing a user of a computing device with the ability to perform text-entry operations (e.g., using a touch screen) on a computing device. Specifically, the techniques of this disclosure may, in some examples, allow the user to use gestures on a mobile computing device to perform text entry and editing operations. Using a presence-sensitive user interface device (e.g., a touch screen), the user may use gestures to enter text into text-based applications (e.g., short message service (SMS) messages, e-mail message, uniform resource locators (URLs), and the like). Using visually-defined areas on the touch screen, the user may utilize gestures of certain patterns, relative to the defined areas, to indicate text entry and editing operations such as, for example, deleting characters and words, indicating a space or return characters, and the like.

Description

    TECHNICAL FIELD
  • This disclosure relates to a gesture-based user interface for mobile devices.
  • BACKGROUND
  • Computing devices are continuously improving and becoming more commonly used. Additionally, touch-based interaction with touch screens of computing devices is also becoming a more common and major interaction modality for mobile device user interfaces. Touch-based interaction may be, for example, finger-based touch input.
  • In some instances, a user may interact with an application via tactile interaction. For example, some computing devices with touch screens allow text-entry methods based on input by a user via touch of the finger, usually utilizing an on-screen keypad.
  • SUMMARY
  • In general, this disclosure describes techniques for providing a user of a computing device with the ability to perform text-entry operations (e.g., using a touch screen) on a computing device. Specifically, the techniques of this disclosure may, in some examples, allow the user to use gestures on a mobile computing device to perform text entry and editing operations. Using a presence-sensitive user interface device (e.g., the touch screen), the user may use gestures to enter text into applications that accept text as an input (e.g., short message service (SMS) messages, e-mail message, uniform resource locators (URLs), and the like). Using visually-defined areas on the touch screen, the user may utilize gestures of certain patterns, relative to the defined areas, to indicate text entry and editing operations such as, for example, deleting characters and words, indicating a space or return character, and the like.
  • In one example, the disclosure is directed to a method comprising receiving, using a presence-sensitive user interface device coupled to a computing device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device, receiving, using the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters, and applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
  • In another example, the disclosure is directed to a computer-readable storage medium encoded with instructions that, when executed, cause one or more processors of a computing device to perform operations comprising receiving, using a presence-sensitive user interface device coupled to the computing device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device, receiving, using the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters, and applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
  • In another example, the disclosure is directed to a computing device comprising one or more processors, a presence-sensitive user interface device, a user interface module operable by the one or more processors to receive, using the presence-sensitive user interface device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device, wherein the user interface module is further operable to receive, by the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters, and means for applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
  • Certain techniques of the present disclosure may, as one non-limiting example, allow a user of a computing device to perform certain text-editing operations using gestures on a touch screen of the computing device. The user may enter different patterns using the gestures, relative to defined areas on the touch screen, to indicate the desired text-editing operation, without necessarily switching to an on-screen keypad.
  • The details of one or more implementations of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram illustrating an example computing device that may provide a text-entry application in accordance with one or more aspects of the present disclosure.
  • FIG. 2 is a block diagram illustrating further details of the computing device shown in FIG. 1.
  • FIGS. 3A-3G are block diagrams illustrating example screens of a computing device as a user interacts with the device, in accordance with one or more aspects of the present disclosure.
  • FIG. 4A is a flow diagram illustrating an algorithm for interpreting gestures in accordance with one or more aspects of the present disclosure.
  • FIG. 4B is an example stroke drawn by a user on a touch screen of a computing device.
  • FIG. 5 is a flow diagram illustrating a method that may be performed by a computing device in accordance with one or more aspects of the present disclosure.
  • DETAILED DESCRIPTION
  • In general, this disclosure describes techniques for providing a user with the ability to perform text entry and editing operations using gestures (e.g., using a presence-sensitive user interface device, such as a touch screen user interface) on a computing device. These techniques may allow the user to use gestures on a computing device to perform text entry and editing operations, for example, via simple interactions with the touch screen. These techniques may be integrated with existing systems that allow for the user to utilize gestures on a touch screen to enter letters and punctuation, thus potentially obviating any issues associated with text entry using on-screen keypads (e.g., touching the wrong key or multiple keys). Using the touch screen, the user may use gestures to enter text into text-based applications (e.g., SMS messages, e-mail message, URLs, and the like). A portion of the touch screen may be allocated for text entry using gestures (e.g., the lower region of a touch screen). Using visually-defined areas within the text entry region, the user may utilize gestures of certain patterns relative to the defined areas to indicate text entry and editing operations such as, for example, deleting characters and words, indicating a space or return characters, and the like. In one example, the touch screen may be divided into two regions, an upper region and a lower region. A user may utilize the lower region to provide drawing gestures that define characters and operations, which may be displayed within the upper region. The drawing gestures may also be displayed within the lower region as the user interacts with the lower region of the touch screen, as will be illustrated in more detail below.
  • As touch-screen computing devices become more prevalent, finger-based touch input, though intuitive, may suffer from low precision. One known issue is that the area touched by the finger is, in some situations (e.g., on small mobile devices), much larger than a single pixel, sometimes referred to as “the fat finger” issue. Because of this low precision, small user interface components are often difficult to operate on a computing device, such as when using on-screen keypads to enter and edit text. The issue is further amplified when the user is in motion, e.g., walking, and unable to pay close attention to the interface.
  • Some computing devices provide user interfaces that allow the user to use gestures to enter text, by defining a portion of the touch screen as a text entry region, where the user utilizes his/her finger or a stylus to draw the letters for text entry. However, such user interfaces may not always provide a user with the ability to perform text entry and editing operations not related to drawing characters (e.g., letters, punctuation, numbers) using gestures. For example, to input such characters as SPACE, RETURN, or DELETE, the user would typically have to switch to an on-screen keypad to perform the text entry and editing operations by touching the corresponding keys. This can be inconvenient as it can make the process of entering text cumbersome and/or it can have the same issues associated with entering text using an on-screen keypad.
  • The techniques of this disclosure provide a region of the user interface (e.g., touch screen) dedicated for text entry (e.g., a lower region) to allow a user to implement text entry and editing operations, other than entry of text characters, using gestures (e.g., on the touch screen). Editing operations may include entry of non-alphanumeric characters that also have associated operations, such as adding a SPACE and inserting the RETURN character, and may also include activating operations such as deleting the last character, deleting all the text entered, indicating ENTER, and the like. Rather than requiring the user to switch to an on-screen keypad to perform these text entry and editing operations, the techniques define areas or sub-regions within the region dedicated for text entry, where the user may utilize gestures involving the sub-regions to effectuate the desired text entry and editing operations. The areas or sub-regions may be defined using on-screen markers (e.g., horizontal and/or vertical lines), and the user may utilize gestures to interact with the different areas to produce the desired outcome (e.g., SPACE, DELETE, RETURN, and the like). The defined areas and the gesture patterns may be large enough relative to on-screen keypad buttons that the accuracy with which the user must touch the screen is less critical than when using an on-screen keypad. Additionally, these techniques allow the user to continuously input text without having to switch back and forth between gesture-based text entry and an on-screen keypad.
  • FIG. 1 is a block diagram illustrating an example computing device 100 that may provide a text-entry application 102 in accordance with one or more aspects of the present disclosure. Computing device 100 may comprise one or more stand-alone devices or may be part of a larger system. In some examples, computing device 100 may comprise a mobile device. For example, computing device 100 may comprise or be part of a wireless communication device (e.g., wireless mobile handset or device), a video telephone, a digital multimedia player, a personal digital assistant (PDA), a video game console, a laptop computer, a tablet computer, or other devices. In some examples, computing device 100 may communicate with external, distinct devices via one or more networks (not shown), such as one or more wired or wireless networks, which may, in some cases, provide access to the Internet.
  • As shown in the example of FIG. 1, computing device 100 may include one or more applications 104A-104N and text-entry application 102. Applications 104A-104N and text-entry application 102 may be executed by computing device 100 (e.g., by one or more processors included within computing device 100, as described in more detail with respect to FIG. 2). In some examples, text-entry application 102 may be displayed on at least a portion of a presence-sensitive user interface device (e.g., user interface) associated with computing device 100. The presence-sensitive user interface device may be, for example, a touch screen of computing device 100 that is responsive to tactile input via a user's finger. During execution, user interaction with text-entry application 102 may produce output in a graphical user interface associated with computing device 100. The graphical user interface may be displayed in at least another portion of the touch screen of computing device 100. In this manner, computing device 100 may comprise a screen, such as a touch screen, and the screen may include one portion that displays data associated with an application executing on computing device 100 and another portion that allows the user to provide input into the application.
  • Each of applications 104A-104N is operable on computing device 100 to perform one or more functions during execution. For example, one or more of applications 104A-104N may comprise a web or a communication application that interacts and/or exchanges data with a device that is external to computing device 100. A web application may, in some examples, be executable within a web browser that is operable on computing device 100. A communication application may, in some examples, be a messaging application, such as a short message service (SMS) application. Computing device 100 may, in various examples, download or otherwise obtain one or more of applications 104A-104N from an external server via one or more networks (not shown). For example, a web browser hosted by computing device 100 may download one or more of applications 104A-104N upon access of one or more web sites hosted by such an external server (e.g., a web server). In some examples, at least a portion of applications 104A-104N may be text-based.
  • During execution, any of applications 104A-104N may implement, invoke, execute, or otherwise utilize text-entry application 102 as a mechanism to obtain user input. For example, if application 104A is an e-mail application, it may invoke execution of text-entry application 102 to allow a user to enter or type in e-mail text. In another example, if application 104N is a web browser application, it may invoke execution of text-entry application 102 to allow a user to enter Uniform Resource Identifier (URI) information or to provide user input during execution of one or more web applications.
  • Text-entry application 102 may, during execution, display or control a gesture interface 106, which includes one or more areas with which a user interacts via gestures to input text. In one example, gesture interface 106 may be a presence-sensitive user interface device associated with computing device 100. In examples where computing device 100 includes a touch screen user interface, a user may interact with the touch screen via gestures to provide text entry, where gesture interface 106 may be a portion of the touch screen.
  • Computing device 100 may display, via the user interface, a sequence of characters corresponding to the one or more characters input by the user via gestures on gesture interface 106. Computing device 100 may employ a processor to execute a gesture-interpretation algorithm that, based on the user gestures, displays characters and applies operations corresponding to the gestures. The characters may correspond to letters and/or punctuation, which the user may draw using gestures on gesture interface 106. Using the techniques of this disclosure, the user may input gestures on gesture interface 106 corresponding to operations such as, for example, editing operations. Some example editing operations may be adding a space, inserting a return, and deleting the last character, word, paragraph, or all the text the user has input. In this manner, the user may interact with the same interface to enter text and to apply text editing operations. The gesture-interpretation algorithm may be capable of differentiating between gestures representing characters and gestures representing editing operations, as will be described in more detail below.
  • As indicated above, in one example, a display device associated with computing device 100 may include a presence-sensitive user interface device portion (e.g., touch screen), which may be responsive to tactile input by a user. A portion of the touch screen may be dedicated to text entry and editing such as, for example, gesture interface 106. According to techniques of this disclosure, gesture interface 106 may be partitioned into two or more areas, as will be described in more detail below. The user may use gestures to input characters (e.g., letters, punctuation, and the like) as if handwriting in one of the areas within gesture interface 106. In some instances, the user may wish to apply a text editing operation or input a character that cannot be drawn, e.g., space, return, delete, and the like. In one example, the user may utilize gestures that cross from one of the areas to another of the areas within gesture interface 106 to define a desired operation. The operation may depend on the direction of the gesture and the areas traversed by the gesture. A gesture-interpretation algorithm may interpret the gestures and display the corresponding characters and/or apply the corresponding operation to characters already entered by the user such as, for example, adding a space after the last character, adding a return, deleting the last character, word, paragraph, or all entered text.
  • The techniques of this disclosure may, in some instances, allow a user to use gestures to input text and apply text-editing operations using the same interface, e.g., gesture interface 106. In these instances, the user would not need to switch to a different screen when the user needs to apply a special operation to the text. In one example, techniques of this disclosure may provide the user with defined areas within the interface such that gestures associated with one defined area may be associated with text entry, and gestures associated with another defined area may be associated with text-editing operations.
  • FIG. 2 is a block diagram illustrating further details of the computing device 100 shown in FIG. 1. FIG. 2 illustrates only one particular example of computing device 100, and many other example embodiments of computing device 100 may be used in other instances. As shown in the example of FIG. 2, computing device 100 includes one or more processors 122, memory 124, a network interface 126, one or more storage devices 128, user interface 130, and an optional battery 132. For example, if computing device 100 comprises a mobile device, computing device 100 may include battery 132. Each of components 122, 124, 126, 128, 130, and 132 may be interconnected via one or more busses for inter-component communications. Processors 122 may be configured to implement functionality and/or process instructions for execution within computing device 100. Processors 122 may be capable of processing instructions stored in memory 124 or instructions stored on storage devices 128.
  • User interface 130 may include, for example, a monitor or other display device for presentation of visual information to a user of computing device 100. User interface 130 may further include one or more input devices to enable a user to input data, such as a manual keyboard, mouse, touchpad, track pad, etc. In some examples, user interface 130 may comprise a presence-sensitive user interface device such as, for example, a touch screen, which may be used both to receive and process user input and also to display output information. User interface 130 may further include printers or other devices to output information. In various instances in the description contained herein, references made to user interface 130 may refer to portions of user interface 130 (e.g., touch screen) that provide user input functionality. In one example, user interface 130 may be a touch screen that is responsive to tactile input by the user.
  • Memory 124 may be configured to store information within computing device 100 during operation. Memory 124 may, in some examples, be described as a computer-readable storage medium. In some examples, memory 124 is a temporary memory, meaning that a primary purpose of memory 124 is not long-term storage. Memory 124 may also be described as a volatile memory, meaning that memory 124 does not maintain stored contents when the computer is turned off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some examples, memory 124 may be used to store program instructions for execution by processors 122. Memory 124 may be used by software or applications running on computing device 100 (e.g., one or more of applications 104A-104N shown in FIG. 1 or the gesture-interpretation algorithm) to temporarily store information during program execution.
  • Storage devices 128 may also include one or more computer-readable storage media. Storage devices 128 may be configured to store larger amounts of information than memory 124. Storage devices 128 may further be configured for long-term storage of information. In some examples, storage devices 128 may comprise non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • Computing device 100 also includes network interface 126. Computing device 100 may utilize network interface 126 to communicate with external devices (e.g., one or more servers, web servers) via one or more networks, such as one or more wireless/wired networks. Computing device 100 may utilize network interface 126 in response to execution of one or more applications that require transferring data to and/or from other devices (e.g., other computing devices, servers, or the like).
  • Any applications implemented within or executed by computing device 100 (e.g., applications 104A-104N shown in FIG. 1 or the gesture-interpretation algorithm) may be implemented or contained within, operable by, executed by, and/or be operatively coupled to processors 122, memory 124, network interface 126, storage devices 128, and/or user interface 130.
  • One example of text-entry application 102 is shown in FIG. 2. Text-entry application 102 may include a display module 142, a user interface controller 144, a character module 146, and an operations module 148. Text-entry application 102 may provide or display gesture interface 106 shown in FIG. 1 (e.g., via user interface 130). Text-entry application 102 may be stored in memory 124 and/or storage devices 128, and may be operable by processors 122 to perform various tasks during execution.
  • In one example, during implementation or execution of text-entry application 102, display module 142 may be operable by processors 122 to define a portion for text and operation entry (e.g., gesture interface 106) via user interface 130. User interface controller 144 may be operable by processors 122 to receive, via user interface 130, user input specifying characters and/or operations in the form of gestures drawn on gesture interface 106. The user input may comprise contact with user interface 130 (e.g., contact with a touch screen), where each of the gestures is associated with a character or an operation.
  • Character module 146 and operations module 148 may be operable by processors 122 to determine, based on gestures the user draws on user interface 130, the appropriate characters to display and operations to apply to the displayed characters. In one example, display module 142 may define gesture interface 106 on user interface 130. Gesture interface 106 may be generally rectangular in some examples. In one example, horizontal and/or vertical lines may be used to define different areas within gesture interface 106. In one example, the lines may be close to the edges of gesture interface 106, thus defining a larger area in the middle where a user may use gestures to draw characters for text entry. The characters may be letters, numbers, punctuation, and the like. The lines may define smaller areas closer to the outer edges of gesture interface 106. Using certain gestures that traverse the larger area and the smaller areas, the user may be able to apply operations to the already-entered characters. The operations may be a space, a line return, or deletion of previously-entered characters, for example. In one example, character module 146 may determine a character based on gestures in the larger area of gesture interface 106, and operations module 148 may determine an operation based on gestures that cross the lines defining the smaller areas in gesture interface 106.
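  • The partitioning described above can be sketched in code. The following is a minimal, hypothetical illustration (the class name GestureInterface, the field names, and the EDGE_FRACTION constant are assumptions, not taken from the disclosure) of a rectangular gesture interface split by two vertical boundary lines near its edges, with helpers that report whether a horizontal swipe crosses a boundary.

        // Hypothetical sketch: a gesture interface partitioned by two vertical
        // boundary lines near its left and right edges (compare lines 310 and 308).
        final class GestureInterface {
            // Fraction of the interface width reserved for each outer editing area (assumed).
            private static final float EDGE_FRACTION = 0.15f;

            private final float lineLeft;    // boundary analogous to line 310
            private final float lineRight;   // boundary analogous to line 308

            GestureInterface(float left, float right) {
                float width = right - left;
                this.lineLeft = left + EDGE_FRACTION * width;
                this.lineRight = right - EDGE_FRACTION * width;
            }

            // True if a swipe from x0 to x1 crosses the right boundary line.
            boolean crossesRightLine(float x0, float x1) {
                return (x0 - lineRight) * (x1 - lineRight) < 0;
            }

            // True if a swipe from x0 to x1 crosses the left boundary line.
            boolean crossesLeftLine(float x0, float x1) {
                return (x0 - lineLeft) * (x1 - lineLeft) < 0;
            }
        }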
  • Processor 122 may be operable to execute one or more algorithms including, for example, a gesture-interpretation algorithm. In one example, the gesture-interpretation algorithm may determine whether gestures drawn by a user correspond to characters or to editing operations. Based on the determination, the gesture-interpretation algorithm may route the drawn gestures to the appropriate module for further interpretation. For example, if the gesture-interpretation algorithm determines that a drawn gesture is a character, the algorithm sends the gesture data associated with the drawn gesture to character module 146 for further interpretation. If, for example, the gesture-interpretation algorithm determines that a drawn gesture is an editing operation, the algorithm sends the gesture data associated with the drawn gesture to operations module 148 for further interpretation. The gesture-interpretation algorithm will be discussed in more detail below.
  • As the user utilizes gestures to draw characters and operations, the gesture-interpretation algorithm determines whether the gestures are characters or operations, and character module 146 and operations module 148 may determine the matching desired characters and operations, respectively. Display module 142 may be operable to receive the determined characters and operations for display to the user on user interface 130. The entered text and operations may be displayed in a manner that depends on the application that computing device 100 may be running, where at least a portion of the application is text-based, and the user utilizes gesture interface 106 to enter text into the application (e.g., e-mail, web browser, SMS application, and the like). Display module 142 may be operable to display the gestures as the user draws them on gesture interface 106, and the corresponding characters in the application.
  • FIGS. 3A-3G are block diagrams illustrating example screens of a computing device 300 as a user interacts with the device, in accordance with one or more aspects of the present disclosure. As shown in the example, a series of screens may be shown on a computing device 300, such as a mobile device (e.g., a smart phone). Computing device 300 may operate in the same manner as computing device 100 of FIGS. 1 and 2. Computing device 300 may include one or more user interface devices that allow a user to interact with the device. Example user interface devices may include, for example, a mouse, a touchpad, a track pad, a keyboard, a touch screen, or the like. Computing device 300 may also include screen 302, via which computing device 300 displays application-related options and screens to the user. In one example, screen 302 may be a touch screen that allows interaction via the user's touch, using a finger or a device (e.g., a stylus pen).
  • FIGS. 3A-3G illustrate an example progression, according to the techniques described herein, as the user provides input to computing device 300 via a user interface device such as, for example, gesture interface 306. In one example, gesture interface 306 may be an area defined within screen 302. As FIG. 3A illustrates, gesture interface 306 may be displayed on touch screen 302 of computing device 300, where a user may use his/her fingers or a device (e.g., a stylus pen) to interact with the touch screen and use gestures to enter and edit text associated with a text-based application running on computing device 300. Screen 302 may display, in a display portion 304, which may comprise a graphical user interface, the text that the user inputs into the text-based application.
  • Screen 302 shows an example text-based application (e.g., e-mail, SMS, or the like) that is running on computing device 300. The text-based application allows the user to enter text into various fields, such as a “To” field and a body field. If the user wishes to enter text, the user may draw the characters (e.g., letters, numbers, punctuation, and the like) using gestures in gesture interface 306. For example, the user may use contact (e.g., by touching or using a stylus) to draw within gesture interface 306 the words he/she desires to enter into the application. As illustrated in FIG. 3A, gesture interface 306 may have a large defined area in the middle, bounded by lines 308 and 310, where the user may use gestures to enter characters. Lines 308 and 310 may partition the area defined by gesture interface 306 by defining the boundaries for areas 318 and 320. In this example, the user may have entered the word “HELLO” by drawing it as shown in FIG. 3A.
  • Computing device 300 may display, via the user interface, a sequence of characters corresponding to the characters drawn by the user, as shown in text display portion 304. Computing device 300 may employ a processor to execute an algorithm (e.g., a gesture-interpretation algorithm) that generates, based on the drawn gestures, characters that match the gestures drawn by the user. As the user draws more characters using gestures, the gesture-interpretation algorithm may determine that the drawn gestures are associated with characters and send the gesture data associated with the drawn gestures to character module 146. Character module 146 (FIG. 2) may then determine the appropriate characters to display. In one example, the user may wish to enter a non-character operation (e.g., space, line return, delete, and the like), which, unlike letters, numbers, or punctuation, may not be represented by a drawn character. As FIG. 3B illustrates, the user may utilize gestures in accordance with techniques of this disclosure to input operations that are not characters. As shown in FIG. 3B, lines 308 and 310 may define smaller areas 318 and 320, respectively, near the edges of gesture interface 306. The user may utilize gestures involving lines 308 and 310 to define non-character operations. In the example of FIG. 3B, the user may touch gesture interface 306 somewhere between lines 308 and 310, and swipe to the right to cross line 308. The gesture-interpretation algorithm may determine that the drawn gesture is associated with an editing operation and send the drawn gesture to operations module 148. Operations module 148 may then interpret the drawn gesture to determine the associated editing operation, e.g., a SPACE, resulting in adding a space after the last entered word “HELLO.”
  • Using gestures, the user may then input another word as shown in FIG. 3C, for example, the word “WORLDS.” The user may have intended to enter the word “WORLD,” and may wish to correct the last entry by deleting the letter S. The user may utilize gestures involving lines 308 and 310 to indicate the desire to delete the last-entered letter. As shown in the example of FIG. 3D, the user may touch gesture interface 306 somewhere between lines 308 and 310, and swipe to the left to cross line 310. Operations module 148 may interpret the gesture to determine the associated editing operation, e.g., DELETE, resulting in deleting the last entered character, as FIG. 3D illustrates.
  • In one example, when the user wishes to write text using gestures, the user may write anywhere within gesture interface 306. As FIG. 3C shows, the user may start writing anywhere, even in the areas 318 and 320 defined by lines 308 and 310. When the user draws gestures, character module 146 may determine the best match for each character and display it. In other examples, character module 146 may display the best match and may provide the user with other candidate words 312, as shown in FIG. 3C. The user may disregard the candidate words by continuing to draw gestures, indicating that the best match is the desired word or character, or the user may select one of the suggestions by touching the screen where the correct suggested word is displayed.
  • As noted above, one example for adding a SPACE may be achieved by starting between lines 308 and 310, and swiping right to cross line 308, as FIG. 3B illustrates. In another example, to DELETE the last character, the user may start between lines 308 and 310, and swipe left to cross line 310, as FIG. 3D illustrates. To indicate a RETURN/ENTER (or GO for applications such as search in a browser), the user may start left of line 310 and swipe all the way across gesture interface 306 to cross lines 310 and 308, as illustrated in FIG. 3E. In yet another example, to issue a DELETE of the last word, the user may start by touching right of line 308 and swipe all the way across gesture interface 306 to cross lines 308 and 310, as illustrated in FIG. 3F. In still another example, the user may issue a DELETE of all the text entered thus far by starting right of line 308 and swiping in a C-shape, that is, swiping across gesture interface 306 to the left to cross lines 308 and 310 and then returning across to cross lines 310 and 308, as illustrated in FIG. 3G. It should be understood that these examples of gestures and operations are merely illustrative, using some of the most common operations, and that other patterns and operations may be implemented using the techniques of this disclosure. In some examples, both horizontal and vertical lines may be utilized.
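  • The example crossing patterns above amount to a mapping from the ordered sequence of boundary lines crossed by a single stroke to an editing operation. The following sketch illustrates one way such a mapping might look; the enum names, the Boundary type, and the encoding of a stroke as a list of crossings are illustrative assumptions, not the disclosed implementation.

        // Hypothetical mapping from the boundaries crossed by a single stroke, in the
        // order they are crossed, to the example editing operations of FIGS. 3B-3G.
        import java.util.List;

        enum EditOp { SPACE, DELETE_CHAR, RETURN, DELETE_WORD, DELETE_ALL, UNKNOWN }

        enum Boundary { LINE_308, LINE_310 }   // right-side and left-side boundary lines

        final class CrossingClassifier {
            static EditOp classify(List<Boundary> crossings) {
                if (crossings.equals(List.of(Boundary.LINE_308))) {
                    return EditOp.SPACE;          // swipe right across line 308 (FIG. 3B)
                }
                if (crossings.equals(List.of(Boundary.LINE_310))) {
                    return EditOp.DELETE_CHAR;    // swipe left across line 310 (FIG. 3D)
                }
                if (crossings.equals(List.of(Boundary.LINE_310, Boundary.LINE_308))) {
                    return EditOp.RETURN;         // left-to-right across both lines (FIG. 3E)
                }
                if (crossings.equals(List.of(Boundary.LINE_308, Boundary.LINE_310))) {
                    return EditOp.DELETE_WORD;    // right-to-left across both lines (FIG. 3F)
                }
                if (crossings.equals(List.of(Boundary.LINE_308, Boundary.LINE_310,
                                             Boundary.LINE_310, Boundary.LINE_308))) {
                    return EditOp.DELETE_ALL;     // C-shaped stroke (FIG. 3G)
                }
                return EditOp.UNKNOWN;            // fall back to character recognition
            }
        }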
  • FIG. 4A is a flow diagram illustrating an algorithm for interpreting gestures in accordance with one or more aspects of the present disclosure. As noted above, a processor (e.g., processor 122) may execute a gesture-interpretation algorithm to determine whether a drawn gesture corresponds to a character or an editing operation, and send gesture data associated with the drawn gesture to the appropriate module for recognition and display. When the user starts drawing gestures on the touch screen of the computing device, as discussed above, the sensed drawn stroke may be added by the algorithm to a current gesture (402). A stroke may correspond to a shape drawn by the user in one touch without lifting a finger or stylus off the touch screen. Referring back to FIG. 3A, for example, the leftmost vertical line of the letter H is one stroke, the horizontal part of the letter H is another stroke, and the rightmost vertical line of the letter H is another stroke. As FIGS. 3B, 3D, and 3E-3F illustrate, gestures associated with editing operations may comprise one stroke.
  • The gesture-interpretation algorithm may then determine whether the next time the user makes contact with the touch screen is less than a timeout threshold (404). The timeout threshold may be a time that indicates, when exceeded, that the user has completed drawing one gesture and is drawing another gesture. In one example, the timeout threshold may be 500 ms. The timeout threshold may be set by default to a certain value, may be configurable by the user, or may be automatically configured by the computing device based on the user's historical touching patterns. After every stroke, a timer may be reset, and when the user makes the next contact with the touch screen, the timer may be stopped to determine the time difference between the last stroke and the current one. The time difference may be compared to the timeout threshold to determine whether the user's touch occurred within the timeout threshold (404).
  • If the user makes contact with the touch screen within the timeout threshold, the stroke the user draws is added to the current gesture (402). If the user makes contact with the touch screen after the timeout threshold has elapsed, the new stroke is not part of the current gesture, and a new gesture should be started. The algorithm may then interpret the current gesture on the screen to determine the appropriate module to handle it. The algorithm may determine whether the current gesture includes a single stroke (406). If the current gesture includes more than one stroke, the algorithm determines that the current gesture corresponds to a character. The algorithm may then send the data associated with the current gesture to character module 146 for character recognition (408).
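  • One way to realize the timeout-based grouping of strokes into gestures (blocks 402 and 404) is sketched below. The class and its members are hypothetical; only the 500 ms example value and the grouping rule come from the description above. A real implementation would also need to flush the final gesture when the timeout expires without any further contact.

        // Hypothetical sketch: group strokes into gestures using the timeout rule
        // described above. S is whatever type the application uses to represent a stroke.
        import java.util.ArrayList;
        import java.util.List;

        final class StrokeGrouper<S> {
            private static final long TIMEOUT_MS = 500;   // example default; configurable

            private final List<S> currentGesture = new ArrayList<>();
            private long lastStrokeEndMs = Long.MIN_VALUE;

            // Called when the user lifts the finger, completing a stroke. Returns the
            // previous gesture if this stroke started a new one, or null otherwise.
            List<S> onStroke(S stroke, long strokeStartMs, long strokeEndMs) {
                List<S> finishedGesture = null;
                boolean startsNewGesture = lastStrokeEndMs != Long.MIN_VALUE
                        && (strokeStartMs - lastStrokeEndMs) >= TIMEOUT_MS;
                if (startsNewGesture && !currentGesture.isEmpty()) {
                    finishedGesture = new ArrayList<>(currentGesture);   // hand off for interpretation
                    currentGesture.clear();
                }
                currentGesture.add(stroke);            // add the new stroke to the current gesture
                lastStrokeEndMs = strokeEndMs;
                return finishedGesture;
            }
        }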
  • If the current gesture includes a single stroke that crosses at least one of the lines (e.g., lines 308 and 310) defining boundaries between the visually-defined areas of the touch screen, the algorithm determines the stroke's straightness and whether it exceeds a straightness threshold (410). The determination of straightness is described in more detail below. If the straightness of the stroke does not exceed the threshold, that indicates that the stroke is not straight enough and that the current gesture corresponds to a character. The algorithm may then send the data associated with the current gesture to character module 146 for character recognition (408). If the straightness of the stroke exceeds the threshold, that indicates that the stroke is straight and that the current gesture corresponds to an editing operation. The algorithm may then send the data associated with the current gesture to editing operations module 148 for operation recognition (412).
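  • The character-versus-operation decision just described might be summarized as in the following sketch. The Stroke interface, helper methods, and class names are illustrative assumptions; the sketch also omits the stroke path variation refinement for multi-crossing strokes (described below with respect to FIG. 3G).

        // Hypothetical sketch of the decision logic of FIG. 4A: a gesture is treated as
        // an editing operation only if it is a single, sufficiently straight stroke that
        // crosses at least one boundary line; otherwise it is treated as a character.
        import java.util.List;

        final class GestureInterpreter {
            private static final double STRAIGHTNESS_THRESHOLD = 0.8;   // example value

            interface Stroke {
                boolean crossesAnyBoundary();   // crosses line 308 and/or line 310
                double straightness();          // direct distance / path distance, in (0, 1]
            }

            enum Kind { CHARACTER, EDITING_OPERATION }

            static Kind interpret(List<? extends Stroke> gesture) {
                if (gesture.size() != 1) {
                    return Kind.CHARACTER;               // multi-stroke: send to character module 146
                }
                Stroke s = gesture.get(0);
                if (!s.crossesAnyBoundary()) {
                    return Kind.CHARACTER;               // stays within the middle drawing area
                }
                if (s.straightness() < STRAIGHTNESS_THRESHOLD) {
                    return Kind.CHARACTER;               // not straight enough to be a swipe
                }
                return Kind.EDITING_OPERATION;           // send to operations module 148
            }
        }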
  • FIG. 4B is an example stroke 400 drawn by a user on a touch screen of a computing device. In determining the straightness of stroke 400, the algorithm may analyze segments of the stroke. The segments of the stroke may be defined by the points defining the stroke, where the points correspond to pixels on the touch screen. In this example, stroke 400 has 4 segments defined by points 420, 422, 424, 426, and 428. To determine the straightness of stroke 400, the algorithm determines the direct distance (direct_dist) from the starting point to the ending point of stroke 400, illustrated by the dotted line representing direct_dist (420, 428). The algorithm then determines the path distance of the stroke (path_dist), which is the sum of the lengths of all the segments of stroke 400. In this example:

  • path_dist (420,428)=direct_dist (420,422)+direct_dist (422,424)+direct_dist (424,426)+direct_dist (426,428)
  • The algorithm then determines straightness of stroke 400 as follows:

  • Straightness (400)=direct_dist (420,428)/path_dist (420,428)
  • The straighter the stroke, the closer the straightness value will be to 1. The straightness threshold may therefore be close to 1, and the choice of the straightness threshold may depend on how sensitive the algorithm should be in determining whether a stroke is straight. In one example, the straightness threshold may be set to 0.8, thus allowing for a certain amount of curvature in the stroke, which may be due to shaking of the user's hand while drawing the gesture.
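  • The straightness computation above might be written as follows; the class and method names are hypothetical, and the stroke is assumed to be given as arrays of point coordinates in screen pixels.

        // Hypothetical sketch of the straightness measure described for FIG. 4B:
        // straightness = direct distance between the endpoints / total path distance.
        final class Straightness {
            static double of(double[] xs, double[] ys) {
                int n = xs.length;
                if (n < 2) {
                    return 1.0;   // a degenerate stroke is treated as perfectly straight
                }
                double pathDist = 0.0;
                for (int i = 1; i < n; i++) {
                    pathDist += Math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1]);
                }
                double directDist = Math.hypot(xs[n - 1] - xs[0], ys[n - 1] - ys[0]);
                return pathDist == 0.0 ? 1.0 : directDist / pathDist;
            }
        }

  • With the example threshold above, a stroke would then be treated as straight when Straightness.of(xs, ys) is at least 0.8.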
  • In another example, the algorithm may also utilize a stroke path variation method to determine whether a single stroke is an editing operation. This may be utilized in some instances such as, for example, when an editing operation corresponds to a gesture that is a single stroke but is not a straight stroke. One example is the gesture illustrated by FIG. 3G, described above. In this example, the algorithm may determine to use the stroke path variation method based on the number of graphical lines crossed by the single stroke. The stroke path variation method may consider the number of corners in a stroke to determine whether the stroke should be interpreted as an editing operation or a character. For each point of a stroke, the algorithm may determine whether the point corresponds to a corner by determining the angle between adjacent line segments; if the angle is above a certain threshold (e.g., 30 degrees), the point may be considered a corner. In the example of FIG. 4B, point 426 is the only corner in stroke 400. For the stroke of FIG. 3G, the straightness method may not be effective in determining whether the stroke corresponds to an editing operation, because the direct distance between the two ends is much smaller than the sum of the segment lengths of the stroke; therefore, the path variation method may be used instead. In the example of FIG. 3G, the algorithm may determine that there are at least 2 corner points in the stroke, and therefore that the stroke corresponds to an editing operation.
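  • A corner-counting check along the lines described above might look like the following sketch; only the 30-degree example threshold comes from the text, and the interpretation of “angle between adjacent line segments” as the turning angle at each interior point is an assumption.

        // Hypothetical sketch of the stroke path variation (corner-counting) method:
        // a point is a corner when the stroke direction turns by more than a threshold
        // angle at that point.
        final class CornerCounter {
            private static final double CORNER_ANGLE_DEGREES = 30.0;   // example threshold

            static int countCorners(double[] xs, double[] ys) {
                int corners = 0;
                for (int i = 1; i < xs.length - 1; i++) {
                    // Direction of the segment entering point i and the segment leaving it.
                    double inAngle = Math.atan2(ys[i] - ys[i - 1], xs[i] - xs[i - 1]);
                    double outAngle = Math.atan2(ys[i + 1] - ys[i], xs[i + 1] - xs[i]);
                    double turn = Math.toDegrees(Math.abs(outAngle - inAngle));
                    if (turn > 180.0) {
                        turn = 360.0 - turn;   // use the smaller turning angle
                    }
                    if (turn > CORNER_ANGLE_DEGREES) {
                        corners++;
                    }
                }
                return corners;
            }
        }

  • Under this sketch, a single non-straight stroke that crosses multiple boundary lines, such as the C-shaped stroke of FIG. 3G, might be accepted as an editing operation when countCorners returns two or more.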
  • FIG. 5 is a flow diagram illustrating a method that may be performed by a computing device in accordance with one or more aspects of the present disclosure. For example, the illustrated example method may be performed by computing device 100 (FIGS. 1 and 2) or computing device 300 (FIGS. 3A-3G). In some examples, a computer-readable storage medium (e.g., a medium included in storage device 128 of FIG. 2) may be encoded with instructions that, when executed, cause one or more processors (e.g., processor 122) to perform one or more of the acts illustrated in the method of FIG. 5.
  • The method of FIG. 5 includes defining, for a presence-sensitive user interface device (e.g., touch screen 302) coupled to the computing device, a first area (e.g., gesture interface 306 or a lower region of the touch screen) and a second area (e.g., the area right of line 308 and left of line 310) for user input (402). As illustrated above, the second area is a subset of the first area and is defined by one or more boundaries (e.g., horizontal and/or vertical lines) that partition the first area. The method also includes receiving, using the presence-sensitive user interface device, first user input comprising a drawing gesture associated with the first area, where the first user input specifies one or more characters (e.g., letters, numbers, punctuation, or the like) to be displayed in a graphical user interface (e.g., display portion 304 or the upper region of the touch screen) associated with the computing device and a text-based application running on the computing device (404). Based on the user input, the determined characters may be displayed on the graphical user interface (406).
  • The presence-sensitive user interface device may also receive second user input comprising a gesture associated with the at least one of the one or more boundaries associated with the second area, wherein the second user input specifies an editing operation (e.g., SPACE, RETURN, DELETE, or the like) associated with the one or more characters (408). The second input may involve, for example, gestures that do not resemble characters, and which may cross the lines that define the second area. The method further includes applying, by the computing device, the specified operation to the one or more characters in response to receiving the second user input (410). In this manner, the user may utilize gestures to define one or more characters to display, and may also utilize gestures, without changing from gesture-based input, to apply other non-character operations.
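  • As a final illustration, applying a recognized editing operation to the characters entered so far (410) might look like the following sketch; the operation names and the use of a StringBuilder to hold the entered text are assumptions for illustration only.

        // Hypothetical sketch: apply a recognized editing operation to the text entered so far.
        final class TextEditor {
            enum EditOp { SPACE, RETURN, DELETE_CHAR, DELETE_WORD, DELETE_ALL }

            static void apply(StringBuilder text, EditOp op) {
                switch (op) {
                    case SPACE:
                        text.append(' ');                          // add a space after the last character
                        break;
                    case RETURN:
                        text.append('\n');                         // insert a line return
                        break;
                    case DELETE_CHAR:
                        if (text.length() > 0) {
                            text.deleteCharAt(text.length() - 1);  // delete the last character
                        }
                        break;
                    case DELETE_WORD: {
                        int end = text.length();
                        while (end > 0 && text.charAt(end - 1) == ' ') end--;       // skip trailing spaces
                        int start = end;
                        while (start > 0 && text.charAt(start - 1) != ' ') start--; // find start of last word
                        text.delete(start, text.length());         // delete the last word
                        break;
                    }
                    case DELETE_ALL:
                        text.setLength(0);                         // delete all entered text
                        break;
                }
            }
        }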
  • The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit including hardware may also perform one or more of the techniques of this disclosure.
  • Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described in this disclosure. In addition, any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, firmware, or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware, firmware, or software components, or integrated within common or separate hardware, firmware, or software components.
  • The techniques described in this disclosure may also be embodied or encoded in a computer-readable medium, such as a computer-readable storage medium, containing instructions. Instructions embedded or encoded in a computer-readable medium, including a computer-readable storage medium, may cause one or more programmable processors, or other processors, to implement one or more of the techniques described herein, such as when instructions included or encoded in the computer-readable medium are executed by the one or more processors. Computer readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a compact disc ROM (CD-ROM), a floppy disk, a cassette, magnetic media, optical media, or other computer readable media. In some examples, an article of manufacture may comprise one or more computer-readable storage media.
  • In some examples, a computer-readable storage medium may comprise a non-transitory medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).
  • Various embodiments of the disclosure have been described. These and other embodiments are within the scope of the following claims.

Claims (20)

1. A method comprising:
receiving, using a presence-sensitive user interface device coupled to a computing device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device;
receiving, using the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters; and
applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
2. The method of claim 1, wherein the specified editing operation depends on a direction of crossing between the first area and the at least second area based on the received second user input.
3. The method of claim 1, further comprising:
defining, for the presence-sensitive user interface device, the first area and the at least second area for user input, wherein the at least second area comprises a subset of the first area, and wherein the at least second area is defined by at least a first graphical boundary that partitions the first area.
4. The method of claim 3, wherein the second user input crosses the at least first graphical boundary.
5. The method of claim 3, wherein the specified editing operation depends on a number of times the at least first graphical boundary is crossed based on the received second user input.
6. The method of claim 1, further comprising:
defining, for the presence-sensitive user interface device, at least a third area for user input on the presence-sensitive user interface device, wherein the second user input comprises crossing between the first area, the at least second area, and the at least third area, and wherein the second user input specifies an editing operation associated with the one or more characters.
7. The method of claim 1, further comprising:
defining, for the presence-sensitive user interface device, at least a third area for user input, wherein the at least third area comprises a subset of the first area, and wherein the at least third area is defined by at least a second graphical boundary that partitions the first area,
wherein the second user input comprises crossing at least one of the first graphical boundary and the second graphical boundary, and wherein the second user input specifies an editing operation associated with the one or more characters.
8. The method of claim 7, wherein the specified editing operation depends on a direction of crossing of at least one of the first graphical boundary and the second graphical boundary based on the received second user input.
9. The method of claim 7, wherein the specified editing operation depends on a number of times the at least one of the first graphical boundary and the second graphical boundary is crossed based on the received second user input.
10. The method of claim 7, wherein the specified editing operation depends on which of the at least first graphical boundary and the second graphical boundary is crossed based on the received second user input.
11. The method of claim 1, further comprising determining whether a user input specifies one or more characters or specifies an editing operation.
12. The method of claim 11, wherein a user input specifies an editing operation when the drawing gesture comprises a single stroke that crosses between the first area and the at least second area, and a straightness associated with the drawing gesture is above a straightness threshold.
13. The method of claim 1, wherein the editing operation comprises at least one of adding a space, adding a return, or performing a deletion of at least one of the one or more characters.
14. The method of claim 1, wherein the presence-sensitive user interface device comprises a touch screen that displays the graphical user interface, wherein the touch screen displays the first area and the at least second area, and wherein the touch screen comprises an upper region and a lower region, the method further comprising:
displaying the first area and the at least second area in the lower region; and
displaying the one or more characters in the upper region.
15. The method of claim 1, further comprising displaying in the first area a representation of the one or more characters corresponding to the first user input.
16. The method of claim 1, further comprising displaying in the first area a representation of the editing operation corresponding to the second user input.
17. The method of claim 1, wherein the first user input and the second user input are received from the presence-sensitive user interface device without receiving any input from an on-screen keypad.
18. The method of claim 1, wherein the at least first graphical boundary comprises at least one horizontal or vertical graphical line partitioning the first area near at least one outer edge associated with the first area.
19. A computer-readable storage medium encoded with instructions that, when executed, cause one or more processors of a computing device to perform operations comprising:
receiving, using a presence-sensitive user interface device coupled to the computing device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device;
receiving, using the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters; and
applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
20. A computing device, comprising:
one or more processors;
a presence-sensitive user interface device;
a user interface module operable by the one or more processors to receive, using the presence-sensitive user interface device, first user input comprising a drawing gesture associated with a first area for user input defined on the presence-sensitive user interface device, wherein the first user input specifies one or more characters to be displayed in a graphical user interface associated with the computing device, wherein the user interface module is further operable to receive, by the presence-sensitive user interface device, second user input comprising a drawing gesture, wherein the second user input comprises crossing between the first area and at least a second area for user input defined on the presence-sensitive user interface device, and wherein the second user input specifies an editing operation associated with the one or more characters; and
means for applying, by the computing device, the editing operation to the one or more characters in response to receiving the second user input.
US13/030,623 2011-02-18 2011-02-18 Touch gestures for text-entry operations Abandoned US20120216113A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US13/030,623 US20120216113A1 (en) 2011-02-18 2011-02-18 Touch gestures for text-entry operations
US13/250,089 US8276101B2 (en) 2011-02-18 2011-09-30 Touch gestures for text-entry operations
PCT/US2012/025085 WO2012112575A1 (en) 2011-02-18 2012-02-14 Touch gestures for text-entry operations
EP12705593.7A EP2676185A1 (en) 2011-02-18 2012-02-14 Touch gestures for text-entry operations

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/030,623 US20120216113A1 (en) 2011-02-18 2011-02-18 Touch gestures for text-entry operations

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/250,089 Continuation US8276101B2 (en) 2011-02-18 2011-09-30 Touch gestures for text-entry operations

Publications (1)

Publication Number Publication Date
US20120216113A1 true US20120216113A1 (en) 2012-08-23

Family

ID=46653776

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/030,623 Abandoned US20120216113A1 (en) 2011-02-18 2011-02-18 Touch gestures for text-entry operations
US13/250,089 Expired - Fee Related US8276101B2 (en) 2011-02-18 2011-09-30 Touch gestures for text-entry operations

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/250,089 Expired - Fee Related US8276101B2 (en) 2011-02-18 2011-09-30 Touch gestures for text-entry operations

Country Status (3)

Country Link
US (2) US20120216113A1 (en)
EP (1) EP2676185A1 (en)
WO (1) WO2012112575A1 (en)

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929485A (en) * 2012-10-30 2013-02-13 广东欧珀移动通信有限公司 Character input method and device
US8584049B1 (en) * 2012-10-16 2013-11-12 Google Inc. Visual feedback deletion
CN103902207A (en) * 2012-12-25 2014-07-02 联想(北京)有限公司 Display method and electronic device
CN103914248A (en) * 2012-12-31 2014-07-09 Lg电子株式会社 Mobile terminal and control method thereof
CN104049865A (en) * 2014-06-19 2014-09-17 联想(北京)有限公司 Information processing method and electronic device
US20140282242A1 (en) * 2013-03-18 2014-09-18 Fuji Xerox Co., Ltd. Systems and methods for content-aware selection
US20150067488A1 (en) * 2012-03-30 2015-03-05 Nokia Corporation User interfaces, associated apparatus and methods
US9019218B2 (en) * 2012-04-02 2015-04-28 Lenovo (Singapore) Pte. Ltd. Establishing an input region for sensor input
WO2015200228A1 (en) * 2014-06-24 2015-12-30 Apple Inc. Character recognition on a computing device
JP2016024664A (en) * 2014-07-22 2016-02-08 日本電信電話株式会社 Mobile terminal with multi touch screen and operation method thereof
US20160062574A1 (en) * 2014-09-02 2016-03-03 Apple Inc. Electronic touch communication
WO2017023844A1 (en) * 2015-08-04 2017-02-09 Apple Inc. User interface for a touch screen device in communication with a physical keyboard
CN106559577A (en) * 2016-11-25 2017-04-05 努比亚技术有限公司 Mobile terminal and its control method
US9841881B2 (en) 2013-11-08 2017-12-12 Microsoft Technology Licensing, Llc Two step content selection with auto content categorization
EP3255528A1 (en) * 2016-06-12 2017-12-13 Apple Inc. Handwriting keyboard for screens
CN108762641A (en) * 2018-05-30 2018-11-06 维沃移动通信有限公司 A kind of method for editing text and terminal device
US10250735B2 (en) 2013-10-30 2019-04-02 Apple Inc. Displaying relevant user interface objects
US10303348B2 (en) 2014-06-24 2019-05-28 Apple Inc. Input device and user interface interactions
US10325394B2 (en) 2008-06-11 2019-06-18 Apple Inc. Mobile communication terminal and data input method
US10346035B2 (en) 2013-06-09 2019-07-09 Apple Inc. Managing real-time handwriting recognition
US10438205B2 (en) 2014-05-29 2019-10-08 Apple Inc. User interface for payments
US10860199B2 (en) 2016-09-23 2020-12-08 Apple Inc. Dynamically adjusting touch hysteresis based on contextual data
US10914606B2 (en) 2014-09-02 2021-02-09 Apple Inc. User interactions for a mapping application
US10990267B2 (en) 2013-11-08 2021-04-27 Microsoft Technology Licensing, Llc Two step content selection
US11057682B2 (en) 2019-03-24 2021-07-06 Apple Inc. User interfaces including selectable representations of content items
US11070889B2 (en) 2012-12-10 2021-07-20 Apple Inc. Channel bar user interface
US11112968B2 (en) 2007-01-05 2021-09-07 Apple Inc. Method, system, and graphical user interface for providing word recommendations
US11194546B2 (en) 2012-12-31 2021-12-07 Apple Inc. Multi-user TV user interface
US11194467B2 (en) 2019-06-01 2021-12-07 Apple Inc. Keyboard management user interfaces
US11245967B2 (en) 2012-12-13 2022-02-08 Apple Inc. TV side bar user interface
US11290762B2 (en) 2012-11-27 2022-03-29 Apple Inc. Agnostic media delivery system
US11297392B2 (en) 2012-12-18 2022-04-05 Apple Inc. Devices and method for providing remote control hints on a display
US11321731B2 (en) 2015-06-05 2022-05-03 Apple Inc. User interface for loyalty accounts and private label accounts
US11461397B2 (en) 2014-06-24 2022-10-04 Apple Inc. Column interface for navigating in a user interface
US11467726B2 (en) 2019-03-24 2022-10-11 Apple Inc. User interfaces for viewing and accessing content on an electronic device
US11520858B2 (en) 2016-06-12 2022-12-06 Apple Inc. Device-level authorization for viewing content
US11543938B2 (en) 2016-06-12 2023-01-03 Apple Inc. Identifying applications on which content is available
US11609678B2 (en) 2016-10-26 2023-03-21 Apple Inc. User interfaces for browsing content from multiple content applications on an electronic device
US11683565B2 (en) 2019-03-24 2023-06-20 Apple Inc. User interfaces for interacting with channels that provide content that plays in a media browsing application
US11720229B2 (en) 2020-12-07 2023-08-08 Apple Inc. User interfaces for browsing and presenting content
US11783305B2 (en) 2015-06-05 2023-10-10 Apple Inc. User interface for loyalty accounts and private label accounts for a wearable device
US11797606B2 (en) 2019-05-31 2023-10-24 Apple Inc. User interfaces for a podcast browsing and playback application
US11843838B2 (en) 2020-03-24 2023-12-12 Apple Inc. User interfaces for accessing episodes of a content series
US11863837B2 (en) 2019-05-31 2024-01-02 Apple Inc. Notification of augmented reality content on an electronic device
US11899895B2 (en) 2020-06-21 2024-02-13 Apple Inc. User interfaces for setting up an electronic device
US11934640B2 (en) 2021-01-29 2024-03-19 Apple Inc. User interfaces for record labels
US11962836B2 (en) 2019-03-24 2024-04-16 Apple Inc. User interfaces for a media browsing application
US11966560B2 (en) 2017-09-28 2024-04-23 Apple Inc. User interfaces for browsing content from multiple content applications on an electronic device

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5850229B2 (en) * 2011-11-29 2016-02-03 日本精機株式会社 Vehicle control device
US9557913B2 (en) * 2012-01-19 2017-01-31 Blackberry Limited Virtual keyboard display having a ticker proximate to the virtual keyboard
CN103376995A (en) * 2012-04-19 2013-10-30 富泰华工业(深圳)有限公司 Touch electronic device and page content storage method therefor
KR101977072B1 (en) * 2012-05-07 2019-05-10 엘지전자 주식회사 Method for displaying text associated with audio file and electronic device
US9298295B2 (en) * 2012-07-25 2016-03-29 Facebook, Inc. Gestures for auto-correct
US8868598B2 (en) * 2012-08-15 2014-10-21 Microsoft Corporation Smart user-centric information aggregation
US20140223382A1 (en) * 2013-02-01 2014-08-07 Barnesandnoble.Com Llc Z-shaped gesture for touch sensitive ui undo, delete, and clear functions
FR3005175B1 (en) 2013-04-24 2018-07-27 Myscript PERMANENT SYNCHRONIZATION SYSTEM FOR MANUSCRITE INPUT
JP6212938B2 (en) * 2013-05-10 2017-10-18 富士通株式会社 Display processing apparatus, system, and display processing program
JP6295519B2 (en) * 2013-05-21 2018-03-20 富士通株式会社 Display processing apparatus, system, and display processing program
US9495620B2 (en) 2013-06-09 2016-11-15 Apple Inc. Multi-script handwriting recognition using a universal recognizer
US20140361983A1 (en) * 2013-06-09 2014-12-11 Apple Inc. Real-time stroke-order and stroke-direction independent handwriting recognition
CN104238724B (en) 2013-06-09 2019-03-12 Sap欧洲公司 Input method and system for electronic equipment based on movement
US9201592B2 (en) 2013-08-09 2015-12-01 Blackberry Limited Methods and devices for providing intelligent predictive input for handwritten text
USD754749S1 (en) * 2013-08-29 2016-04-26 Samsung Electronics Co., Ltd. Display screen or portion thereof with icon
KR102162836B1 (en) * 2013-08-30 2020-10-07 삼성전자주식회사 Apparatas and method for supplying content according to field attribute
US20150135115A1 (en) * 2013-11-11 2015-05-14 Lenovo (Singapore) Pte. Ltd. Multi-touch input for changing text and image attributes
US9411508B2 (en) * 2014-01-03 2016-08-09 Apple Inc. Continuous handwriting UI
US9690478B2 (en) * 2014-03-04 2017-06-27 Texas Instruments Incorporated Method and system for processing gestures to cause computation of measurement of an angle or a segment using a touch system
US9524428B2 (en) 2014-04-28 2016-12-20 Lenovo (Singapore) Pte. Ltd. Automated handwriting input for entry fields
US20150347364A1 (en) * 2014-06-03 2015-12-03 Lenovo (Singapore) Pte. Ltd. Highlighting input area based on user input
US20160154555A1 (en) * 2014-12-02 2016-06-02 Lenovo (Singapore) Pte. Ltd. Initiating application and performing function based on input
EP3287953A4 (en) * 2015-04-24 2018-04-11 Fujitsu Limited Input processing program, input processing device, input processing method, character identification program, character identification device, and character identification method
CN104932826B (en) * 2015-06-26 2018-10-12 联想(北京)有限公司 A kind of information processing method and electronic equipment
DE102015011649A1 (en) * 2015-09-11 2017-03-30 Audi Ag Operating device with character input and delete function
US20170242581A1 (en) * 2016-02-23 2017-08-24 Myscript System and method for multiple input management
US10248635B2 (en) 2016-02-29 2019-04-02 Myscript Method for inserting characters in a character string and the corresponding digital service
US10416868B2 (en) 2016-02-29 2019-09-17 Myscript Method and system for character insertion in a character string
US11209976B2 (en) 2016-04-29 2021-12-28 Myscript System and method for editing input management
US10521937B2 (en) * 2017-02-28 2019-12-31 Corel Corporation Vector graphics based live sketching methods and systems
CN109865285B (en) * 2019-04-01 2020-03-17 NetEase (Hangzhou) Network Co., Ltd. Information processing method and device in game and computer storage medium
CN110052030B (en) * 2019-04-26 2021-10-29 Tencent Technology (Shenzhen) Co., Ltd. Image setting method and device of virtual character and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5677710A (en) * 1993-05-10 1997-10-14 Apple Computer, Inc. Recognition keypad
US5917493A (en) * 1996-04-17 1999-06-29 Hewlett-Packard Company Method and apparatus for randomly generating information for subsequent correlating
US6384815B1 (en) 1999-02-24 2002-05-07 Hewlett-Packard Company Automatic highlighting tool for document composing and editing software
US6658147B2 (en) 2001-04-16 2003-12-02 Parascript Llc Reshaping freehand drawn lines and shapes in an electronic document
US7571384B1 (en) * 2001-05-31 2009-08-04 Palmsource, Inc. Method and system for handwriting recognition with scrolling input history and in-place editing
US7925987B2 (en) 2002-05-14 2011-04-12 Microsoft Corporation Entry and editing of electronic ink
JP2006527439A (en) * 2003-06-13 2006-11-30 University of Lancaster User interface
US7164410B2 (en) * 2003-07-28 2007-01-16 Sig G. Kupka Manipulating an on-screen object using zones surrounding the object
JP2005346467A (en) 2004-06-03 2005-12-15 Nintendo Co Ltd Graphic recognition program
US7551779B2 (en) * 2005-03-17 2009-06-23 Microsoft Corporation Word or character boundary-based scratch-out gesture recognition
US20100020033A1 (en) * 2008-07-23 2010-01-28 Obinna Ihenacho Alozie Nwosu System, method and computer program product for a virtual keyboard
TWI402741B (en) 2009-05-27 2013-07-21 Htc Corp Method for unlocking screen, and mobile electronic device and computer program product using the same
US20110191675A1 (en) * 2010-02-01 2011-08-04 Nokia Corporation Sliding input user interface
JP5494337B2 (en) * 2010-07-30 2014-05-14 Sony Corporation Information processing apparatus, information processing method, and information processing program

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5005205A (en) * 1990-01-12 1991-04-02 International Business Machines Corporation Handwriting recognition employing pairwise discriminant measures
US5564005A (en) * 1993-10-15 1996-10-08 Xerox Corporation Interactive system for producing, storing and retrieving information correlated with a recording of an event
US20020109677A1 (en) * 2000-12-21 2002-08-15 David Taylor Touchpad code entry system
US20030156145A1 (en) * 2002-02-08 2003-08-21 Microsoft Corporation Ink gestures
US20030214531A1 (en) * 2002-05-14 2003-11-20 Microsoft Corporation Ink input mechanisms
US7526737B2 (en) * 2005-11-14 2009-04-28 Microsoft Corporation Free form wiper
US20090327974A1 (en) * 2008-06-26 2009-12-31 Microsoft Corporation User interface for gestural control
US20110134068A1 (en) * 2008-08-08 2011-06-09 Moonsun Io Ltd. Method and device of stroke based user input

Cited By (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11112968B2 (en) 2007-01-05 2021-09-07 Apple Inc. Method, system, and graphical user interface for providing word recommendations
US11416141B2 (en) 2007-01-05 2022-08-16 Apple Inc. Method, system, and graphical user interface for providing word recommendations
US10325394B2 (en) 2008-06-11 2019-06-18 Apple Inc. Mobile communication terminal and data input method
US20150067488A1 (en) * 2012-03-30 2015-03-05 Nokia Corporation User interfaces, associated apparatus and methods
US9841893B2 (en) * 2012-03-30 2017-12-12 Nokia Technologies Oy Detection of a jolt during character entry
US9019218B2 (en) * 2012-04-02 2015-04-28 Lenovo (Singapore) Pte. Ltd. Establishing an input region for sensor input
US8584049B1 (en) * 2012-10-16 2013-11-12 Google Inc. Visual feedback deletion
CN102929485A (en) * 2012-10-30 2013-02-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Character input method and device
US11290762B2 (en) 2012-11-27 2022-03-29 Apple Inc. Agnostic media delivery system
US11070889B2 (en) 2012-12-10 2021-07-20 Apple Inc. Channel bar user interface
US11317161B2 (en) 2012-12-13 2022-04-26 Apple Inc. TV side bar user interface
US11245967B2 (en) 2012-12-13 2022-02-08 Apple Inc. TV side bar user interface
US11297392B2 (en) 2012-12-18 2022-04-05 Apple Inc. Devices and method for providing remote control hints on a display
CN103902207A (en) * 2012-12-25 2014-07-02 Lenovo (Beijing) Co., Ltd. Display method and electronic device
US11194546B2 (en) 2012-12-31 2021-12-07 Apple Inc. Multi-user TV user interface
US11822858B2 (en) 2012-12-31 2023-11-21 Apple Inc. Multi-user TV user interface
US9448589B2 (en) 2012-12-31 2016-09-20 Lg Electronics Inc. Mobile terminal and control method thereof
CN103914248A (en) * 2012-12-31 2014-07-09 LG Electronics Inc. Mobile terminal and control method thereof
US9785240B2 (en) * 2013-03-18 2017-10-10 Fuji Xerox Co., Ltd. Systems and methods for content-aware selection
US20140282242A1 (en) * 2013-03-18 2014-09-18 Fuji Xerox Co., Ltd. Systems and methods for content-aware selection
US10346035B2 (en) 2013-06-09 2019-07-09 Apple Inc. Managing real-time handwriting recognition
US10579257B2 (en) 2013-06-09 2020-03-03 Apple Inc. Managing real-time handwriting recognition
US11182069B2 (en) 2013-06-09 2021-11-23 Apple Inc. Managing real-time handwriting recognition
US11816326B2 (en) 2013-06-09 2023-11-14 Apple Inc. Managing real-time handwriting recognition
US11016658B2 (en) 2013-06-09 2021-05-25 Apple Inc. Managing real-time handwriting recognition
US10250735B2 (en) 2013-10-30 2019-04-02 Apple Inc. Displaying relevant user interface objects
US10972600B2 (en) 2013-10-30 2021-04-06 Apple Inc. Displaying relevant user interface objects
US11316968B2 (en) 2013-10-30 2022-04-26 Apple Inc. Displaying relevant user interface objects
US10990267B2 (en) 2013-11-08 2021-04-27 Microsoft Technology Licensing, Llc Two step content selection
US9841881B2 (en) 2013-11-08 2017-12-12 Microsoft Technology Licensing, Llc Two step content selection with auto content categorization
US10748153B2 (en) 2014-05-29 2020-08-18 Apple Inc. User interface for payments
US11836725B2 (en) 2014-05-29 2023-12-05 Apple Inc. User interface for payments
US10796309B2 (en) 2014-05-29 2020-10-06 Apple Inc. User interface for payments
US10977651B2 (en) 2014-05-29 2021-04-13 Apple Inc. User interface for payments
US10902424B2 (en) 2014-05-29 2021-01-26 Apple Inc. User interface for payments
US10438205B2 (en) 2014-05-29 2019-10-08 Apple Inc. User interface for payments
CN104049865A (en) * 2014-06-19 2014-09-17 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
AU2017265138B2 (en) * 2014-06-24 2019-03-07 Apple Inc. Character recognition on a computing device
US11461397B2 (en) 2014-06-24 2022-10-04 Apple Inc. Column interface for navigating in a user interface
US11221752B2 (en) 2014-06-24 2022-01-11 Apple Inc. Character recognition on a computing device
EP3525068A1 (en) * 2014-06-24 2019-08-14 Apple Inc. Character recognition on a computing device
US11520467B2 (en) 2014-06-24 2022-12-06 Apple Inc. Input device and user interface interactions
US10558358B2 (en) 2014-06-24 2020-02-11 Apple Inc. Character recognition on a computing device
US10303348B2 (en) 2014-06-24 2019-05-28 Apple Inc. Input device and user interface interactions
AU2019201219B2 (en) * 2014-06-24 2020-03-12 Apple Inc. Character recognition on a computing device
US10241672B2 (en) * 2014-06-24 2019-03-26 Apple Inc. Character recognition on a computing device
US10732807B2 (en) 2014-06-24 2020-08-04 Apple Inc. Input device and user interface interactions
WO2015200228A1 (en) * 2014-06-24 2015-12-30 Apple Inc. Character recognition on a computing device
US10025499B2 (en) * 2014-06-24 2018-07-17 Apple Inc. Character recognition on a computing device
US11635888B2 (en) 2014-06-24 2023-04-25 Apple Inc. Character recognition on a computing device
US9864509B2 (en) 2014-06-24 2018-01-09 Apple Inc. Character recognition on a computing device
US9864508B2 (en) 2014-06-24 2018-01-09 Apple Inc. Character recognition on a computing device
JP2016024664A (en) * 2014-07-22 2016-02-08 Nippon Telegraph and Telephone Corporation Mobile terminal with multi touch screen and operation method thereof
US10914606B2 (en) 2014-09-02 2021-02-09 Apple Inc. User interactions for a mapping application
US10209810B2 (en) 2014-09-02 2019-02-19 Apple Inc. User interface interaction using various inputs for adding a contact
US20160062574A1 (en) * 2014-09-02 2016-03-03 Apple Inc. Electronic touch communication
US10788927B2 (en) 2014-09-02 2020-09-29 Apple Inc. Electronic communication based on user input and determination of active execution of application for playback
US11733055B2 (en) 2014-09-02 2023-08-22 Apple Inc. User interactions for a mapping application
US11579721B2 (en) 2014-09-02 2023-02-14 Apple Inc. Displaying a representation of a user touch input detected by an external device
US9846508B2 (en) * 2014-09-02 2017-12-19 Apple Inc. Electronic touch communication
US11783305B2 (en) 2015-06-05 2023-10-10 Apple Inc. User interface for loyalty accounts and private label accounts for a wearable device
US11734708B2 (en) 2015-06-05 2023-08-22 Apple Inc. User interface for loyalty accounts and private label accounts
US11321731B2 (en) 2015-06-05 2022-05-03 Apple Inc. User interface for loyalty accounts and private label accounts
WO2017023844A1 (en) * 2015-08-04 2017-02-09 Apple Inc. User interface for a touch screen device in communication with a physical keyboard
KR102342624B1 (en) 2016-06-12 2021-12-22 Apple Inc. Handwriting keyboard for screens
US10884617B2 (en) 2016-06-12 2021-01-05 Apple Inc. Handwriting keyboard for screens
US10466895B2 (en) 2016-06-12 2019-11-05 Apple Inc. Handwriting keyboard for screens
US11941243B2 (en) 2016-06-12 2024-03-26 Apple Inc. Handwriting keyboard for screens
KR102222143B1 (en) * 2016-06-12 2021-03-04 Apple Inc. Handwriting keyboard for screens
EP3557389A1 (en) * 2016-06-12 2019-10-23 Apple Inc. Handwriting keyboard for screens
KR20190052667A (en) * 2016-06-12 2019-05-16 Apple Inc. Handwriting keyboard for screens
US10228846B2 (en) 2016-06-12 2019-03-12 Apple Inc. Handwriting keyboard for screens
KR102072851B1 (en) * 2016-06-12 2020-02-03 Apple Inc. Handwriting keyboard for screens
KR20210023946A (en) * 2016-06-12 2021-03-04 Apple Inc. Handwriting keyboard for screens
EP3324274A1 (en) * 2016-06-12 2018-05-23 Apple Inc. Handwriting keyboard for screens
CN113190126A (en) * 2016-06-12 2021-07-30 Apple Inc. Handwriting keyboard for screen
CN107491186A (en) * 2016-06-12 2017-12-19 Apple Inc. Touch keypad for screen
AU2018260930C1 (en) * 2016-06-12 2020-04-23 Apple Inc. Handwriting keyboard for small screens
KR20200013023A (en) * 2016-06-12 2020-02-05 Apple Inc. Handwriting keyboard for screens
JP2021064380A (en) * 2016-06-12 2021-04-22 Apple Inc. Handwriting keyboard for screen
US11520858B2 (en) 2016-06-12 2022-12-06 Apple Inc. Device-level authorization for viewing content
JP2017220231A (en) * 2016-06-12 2017-12-14 Apple Inc. Handwriting keyboard for screens
US11543938B2 (en) 2016-06-12 2023-01-03 Apple Inc. Identifying applications on which content is available
AU2018260930B2 (en) * 2016-06-12 2019-11-21 Apple Inc. Handwriting keyboard for small screens
JP7289820B2 (en) 2016-06-12 2023-06-12 Apple Inc. Handwriting keyboard for screen
US11640237B2 (en) 2016-06-12 2023-05-02 Apple Inc. Handwriting keyboard for screens
EP3255528A1 (en) * 2016-06-12 2017-12-13 Apple Inc. Handwriting keyboard for screens
US10860199B2 (en) 2016-09-23 2020-12-08 Apple Inc. Dynamically adjusting touch hysteresis based on contextual data
US11609678B2 (en) 2016-10-26 2023-03-21 Apple Inc. User interfaces for browsing content from multiple content applications on an electronic device
CN106559577A (en) * 2016-11-25 2017-04-05 Nubia Technology Co., Ltd. Mobile terminal and its control method
US11966560B2 (en) 2017-09-28 2024-04-23 Apple Inc. User interfaces for browsing content from multiple content applications on an electronic device
CN108762641A (en) * 2018-05-30 2018-11-06 Vivo Mobile Communication Co., Ltd. Method for editing text and terminal device
US11683565B2 (en) 2019-03-24 2023-06-20 Apple Inc. User interfaces for interacting with channels that provide content that plays in a media browsing application
US11750888B2 (en) 2019-03-24 2023-09-05 Apple Inc. User interfaces including selectable representations of content items
US11962836B2 (en) 2019-03-24 2024-04-16 Apple Inc. User interfaces for a media browsing application
US11057682B2 (en) 2019-03-24 2021-07-06 Apple Inc. User interfaces including selectable representations of content items
US11467726B2 (en) 2019-03-24 2022-10-11 Apple Inc. User interfaces for viewing and accessing content on an electronic device
US11445263B2 (en) 2019-03-24 2022-09-13 Apple Inc. User interfaces including selectable representations of content items
US11797606B2 (en) 2019-05-31 2023-10-24 Apple Inc. User interfaces for a podcast browsing and playback application
US11863837B2 (en) 2019-05-31 2024-01-02 Apple Inc. Notification of augmented reality content on an electronic device
US11842044B2 (en) 2019-06-01 2023-12-12 Apple Inc. Keyboard management user interfaces
US11620046B2 (en) 2019-06-01 2023-04-04 Apple Inc. Keyboard management user interfaces
US11194467B2 (en) 2019-06-01 2021-12-07 Apple Inc. Keyboard management user interfaces
US11843838B2 (en) 2020-03-24 2023-12-12 Apple Inc. User interfaces for accessing episodes of a content series
US11899895B2 (en) 2020-06-21 2024-02-13 Apple Inc. User interfaces for setting up an electronic device
US11720229B2 (en) 2020-12-07 2023-08-08 Apple Inc. User interfaces for browsing and presenting content
US11934640B2 (en) 2021-01-29 2024-03-19 Apple Inc. User interfaces for record labels

Also Published As

Publication number Publication date
US20120216141A1 (en) 2012-08-23
WO2012112575A1 (en) 2012-08-23
EP2676185A1 (en) 2013-12-25
US8276101B2 (en) 2012-09-25

Similar Documents

Publication Publication Date Title
US8276101B2 (en) Touch gestures for text-entry operations
US10140284B2 (en) Partial gesture text entry
US9665276B2 (en) Character deletion during keyboard gesture
US9021402B1 (en) Operation of mobile device interface using gestures
US10203871B2 (en) Method for touch input and device therefore
US9292161B2 (en) Pointer tool with touch-enabled precise placement
RU2601831C2 (en) Provision of an open instance of an application
US20140109016A1 (en) Gesture-based cursor control
RU2679348C2 (en) Apparatus and method for displaying chart in electronic device
US20110320978A1 (en) Method and apparatus for touchscreen gesture recognition overlay
US20130002562A1 (en) Virtual keyboard layouts
KR101132598B1 (en) Method and device for controlling screen size of display device
US20170192671A1 (en) System and method for inputting one or more inputs associated with a multi-input target
JP2015508547A (en) Direction control using touch-sensitive devices
US11275501B2 (en) Creating tables using gestures
US10139982B2 (en) Window expansion method and associated electronic device
US8949731B1 (en) Input from a soft keyboard on a touchscreen display
US20140223354A1 (en) Method and system for creating floating keys in a portable device
CN103150118A (en) Method, device and mobile terminal for selecting contents based on multi-point touch technology
KR20140029096A (en) Method and terminal for displaying a plurality of pages
CN107203280B (en) Punctuation input method and terminal
US9141286B2 (en) Electronic device and method for displaying software input interface
CN113485590A (en) Touch operation method and device
US9804777B1 (en) Gesture-based text selection
EP2818998A1 (en) Method and apparatus for creating an electronic document in a mobile terminal

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, YANG;REEL/FRAME:025942/0068

Effective date: 20110217

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044142/0357

Effective date: 20170929